[PURIFY] Remove docs directory (#3)
This commit removes the docs directory.

Signed-off-by: Peter Nied <petern@amazon.com>
This commit is contained in:
parent b7138c88e8
commit 0d1e9a7b64
@ -1,133 +0,0 @@
The Elasticsearch docs are in AsciiDoc format and can be built using the
Elasticsearch documentation build process.

See: https://github.com/elastic/docs

=== Backporting doc fixes

* Doc changes should generally be made against master and backported through to the current version
(as applicable).

* Changes can also be backported to the maintenance version of the previous major version.
This is typically reserved for technical corrections, as it can require resolving more complex
merge conflicts, fixing test failures, and figuring out where to apply the change.

* Avoid backporting to out-of-maintenance versions.
Docs follow the same policy as code, and fixes are not ordinarily merged to
versions that are out of maintenance.

* Do not backport doc changes to https://www.elastic.co/support/eol[EOL versions].

=== Snippet testing

Snippets marked with `[source,console]` are automatically annotated with
"VIEW IN CONSOLE" and "COPY AS CURL" in the documentation and are automatically
tested by the command `./gradlew -pdocs check`. To test just the docs from a
single page, use e.g. `./gradlew -pdocs integTest --tests "\*rollover*"`.

By default each `[source,console]` snippet runs as its own isolated test. You
can manipulate the test execution in the following ways:

* `// TEST`: Explicitly marks a snippet as a test. Snippets marked this way
are tests even if they don't have `[source,console]`, but usually `// TEST` is
used for its modifiers:
* `// TEST[s/foo/bar/]`: Replace `foo` with `bar` in the generated test. This
should be used sparingly because it makes the snippet "lie". Sometimes,
though, you can use it to make the snippet clearer. Keep in mind that
if there are multiple substitutions then they are applied in the order that
they are defined.
* `// TEST[catch:foo]`: Used to expect errors in the requests. Replace `foo`
with `request` to expect a 400 error, for example. If the snippet contains
multiple requests then only the last request will expect the error.
* `// TEST[continued]`: Continue the test started in the last snippet. Between
tests the nodes are cleaned: indexes are removed, etc. This prevents that
from happening between snippets because the two snippets are a single test.
This is most useful when you have text and snippets that work together to
tell the story of some use case, because it merges the snippets (and thus the
use case) into one big test.
* You can't use `// TEST[continued]` immediately after `// TESTSETUP` or
`// TEARDOWN`.
* `// TEST[skip:reason]`: Skip this test. Replace `reason` with the actual
reason to skip the test. Snippets without `// TEST` or `// CONSOLE` aren't
considered tests anyway, but this is useful for explicitly documenting the
reason why the test shouldn't be run.
* `// TEST[setup:name]`: Run some setup code before running the snippet. This
is useful for creating and populating indexes used in the snippet. The setup
code is defined in `docs/build.gradle`. See `// TESTSETUP` below for a
similar feature.
* `// TEST[warning:some warning]`: Expect the response to include a `Warning`
header. If the response doesn't include a `Warning` header with the exact
text then the test fails. If the response includes `Warning` headers that
aren't expected then the test fails.
* `[source,console-result]`: Matches this snippet against the body of the
response of the last test. If the response is JSON then order is ignored. If
you add `// TEST[continued]` to the snippet after `[source,console-result]`
it will continue in the same test, allowing you to interleave requests with
responses to check.
* `// TESTRESPONSE`: Explicitly marks a snippet as a test response even without
`[source,console-result]`. Similarly to `// TEST`, this is mostly used for
its modifiers.
* You can't use `[source,console-result]` immediately after `// TESTSETUP`.
Instead, consider using `// TEST[continued]` or rearranging your snippets.

NOTE: Previously we only used `// TESTRESPONSE` instead of
`[source,console-result]`, so you'll see that a lot in older branches, but we
prefer `[source,console-result]` now.

* `// TESTRESPONSE[s/foo/bar/]`: Substitutions. See `// TEST[s/foo/bar/]` for
how it works. These are much more common than `// TEST[s/foo/bar/]` because
they are useful for eliding portions of the response that are not pertinent
to the documentation.
* One interesting difference here is that you often want to match against
the response from Elasticsearch. To do that you can reference the "body" of
the response like this: `// TESTRESPONSE[s/"took": 25/"took": $body.took/]`.
Note the `$body` string. This says "I don't expect that 25 number in the
response, just match against what is in the response." Instead of writing
the path into the response after `$body` you can write `$_path`, which
"figures out" the path. This is especially useful for making sweeping
assertions like "I made up all the numbers in this example, don't compare
them", which looks like `// TESTRESPONSE[s/\d+/$body.$_path/]`.
* `// TESTRESPONSE[non_json]`: Add substitutions for testing responses in a
format other than JSON. Use this after all other substitutions so it doesn't
make other substitutions difficult.
* `// TESTRESPONSE[skip:reason]`: Skip the assertions specified by this
response.
* `// TESTSETUP`: Marks this snippet as the "setup" for all other snippets in
this file. This is a somewhat natural way of structuring documentation. You
say "this is the data we use to explain this feature", then you add the
snippet that you mark `// TESTSETUP`, and then every snippet will turn into
a test that runs the setup snippet first. See the "painless" docs for a file
that puts this to good use. This is fairly similar to `// TEST[setup:name]`,
but rather than the setup being defined in `docs/build.gradle`, the setup is
defined right in the documentation file. In general, we should prefer
`// TESTSETUP` over `// TEST[setup:name]` because it makes it clearer what
steps have to be taken before the examples will work. Tip: `// TESTSETUP` can
only be used on the first snippet of a document.
* `// TEARDOWN`: Ends and cleans up a test series started with `// TESTSETUP` or
`// TEST[setup:name]`. You can use `// TEARDOWN` to set up multiple tests in
the same file.
* `// NOTCONSOLE`: Marks this snippet as neither `// CONSOLE` nor
`// TESTRESPONSE`, excluding it from the list of unconverted snippets. We
should only use this for snippets that *are* JSON but are *not* responses or
requests.

In addition to the standard CONSOLE syntax these snippets can contain blocks
of yaml surrounded by markers like this:

```
startyaml
  - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai}
endyaml
```

This allows slightly more expressive testing of the snippets. Since that syntax
is not supported by `[source,console]`, the usual way to incorporate it is with a
`// TEST[s//]` marker like this:

```
// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: thai_example, first: thai, second: rebuilt_thai}\nendyaml\n/]
```

Any place you can use json you can use elements like `$body.path.to.thing`
which is replaced on the fly with the contents of the thing at `path.to.thing`
in the last response.
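Putting a few of these markers together, a tested request/response pair might
look like the sketch below. It is illustrative only: the index name, the
`my_index` setup name, and the trimmed response body are assumptions, not
snippets from the removed docs.

```
[source,console]
----
GET /my-index/_search
----
// TEST[setup:my_index]

[source,console-result]
----
{
  "took": 25
}
----
// TESTRESPONSE[s/"took": 25/"took": $body.took/]
```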
@ -1,79 +0,0 @@

include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[]

:lucene_version: 8.7.0
:lucene_version_path: 8_7_0
:jdk: 1.8.0_131
:jdk_major: 8
:build_flavor: default
:build_type: tar

:docker-repo: docker.elastic.co/elasticsearch/elasticsearch
:docker-image: {docker-repo}:{version}
:plugin_url: https://artifacts.elastic.co/downloads/elasticsearch-plugins
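
These attributes are substituted wherever they are referenced in the
documentation source. As a sketch (the `7.10.2` value is an assumed example,
not something defined in this file), if the shared versions include sets
`:version: 7.10.2`, then a line like

[source,asciidoc]
--------------------------------------------------
Pull the image with `docker pull {docker-image}`.
--------------------------------------------------

renders with `{docker-image}` expanded to
`docker.elastic.co/elasticsearch/elasticsearch:7.10.2`.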

///////
Javadoc roots used to generate links from Painless's API reference
///////
:java11-javadoc: https://docs.oracle.com/en/java/javase/11/docs/api
:lucene-core-javadoc: https://lucene.apache.org/core/{lucene_version_path}/core

ifeval::["{release-state}"=="unreleased"]
:elasticsearch-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version}-SNAPSHOT
:transport-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/transport/{version}-SNAPSHOT
:rest-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/{version}-SNAPSHOT
:rest-client-sniffer-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client-sniffer/{version}-SNAPSHOT
:rest-high-level-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/{version}-SNAPSHOT
:mapper-extras-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/mapper-extras-client/{version}-SNAPSHOT
:painless-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/painless/lang-painless/{version}-SNAPSHOT
:parent-join-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/parent-join-client/{version}-SNAPSHOT
:percolator-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/percolator-client/{version}-SNAPSHOT
:matrixstats-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/aggs-matrix-stats-client/{version}-SNAPSHOT
:rank-eval-client-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/plugin/rank-eval-client/{version}-SNAPSHOT
:version_qualified: {bare_version}-SNAPSHOT
endif::[]

ifeval::["{release-state}"!="unreleased"]
:elasticsearch-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version}
:transport-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/transport/{version}
:rest-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/{version}
:rest-client-sniffer-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client-sniffer/{version}
:rest-high-level-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/{version}
:mapper-extras-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/mapper-extras-client/{version}
:painless-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/painless/lang-painless/{version}
:parent-join-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/parent-join-client/{version}
:percolator-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/percolator-client/{version}
:matrixstats-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/aggs-matrix-stats-client/{version}
:rank-eval-client-javadoc: https://artifacts.elastic.co/javadoc/org/elasticsearch/plugin/rank-eval-client/{version}
:version_qualified: {bare_version}
endif::[]

:javadoc-client: {rest-high-level-client-javadoc}/org/elasticsearch/client
:javadoc-xpack: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack
:javadoc-license: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack/license
:javadoc-watcher: {rest-high-level-client-javadoc}/org/elasticsearch/protocol/xpack/watcher

///////
Permanently unreleased branches (master, n.X)
///////
ifeval::["{source_branch}"=="master"]
:permanently-unreleased-branch:
endif::[]
ifeval::["{source_branch}"=="{major-version}"]
:permanently-unreleased-branch:
endif::[]

///////
Shared attribute values are pulled from elastic/docs
///////

include::{docs-root}/shared/attributes.asciidoc[]

///////
APM does not build n.x documentation. Links from .x branches should point to master instead
///////
ifeval::["{source_branch}"=="7.x"]
:apm-server-ref: {apm-server-ref-m}
:apm-server-ref-v: {apm-server-ref-m}
:apm-overview-ref-v: {apm-overview-ref-m}
endif::[]
docs/build.gradle (1467 lines removed)
File diff suppressed because it is too large
@ -1,223 +0,0 @@
= Community Contributed Clients

[preface]
== Preface
:client: https://www.elastic.co/guide/en/elasticsearch/client

[NOTE]
====
This is a list of clients submitted by members of the Elastic community.
Elastic does not support or endorse these clients.

If you'd like to add a new client to this list, please
https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-code-and-documentation-changes[open a pull request].
====

Besides the link:/guide[officially supported Elasticsearch clients], there are
a number of clients that have been contributed by the community for various languages:

* <<b4j>>
* <<cpp>>
* <<clojure>>
* <<coldfusion>>
* <<erlang>>
* <<go>>
* <<haskell>>
* <<java>>
* <<javascript>>
* <<kotlin>>
* <<lua>>
* <<dotnet>>
* <<perl>>
* <<php>>
* <<python>>
* <<r>>
* <<ruby>>
* <<rust>>
* <<scala>>
* <<smalltalk>>
* <<vertx>>

[[b4j]]
== B4J
* https://www.b4x.com/android/forum/threads/server-jelasticsearch-search-and-text-analytics.73335/
B4J client based on the official Java REST client.

[[cpp]]
== C++
* https://github.com/seznam/elasticlient[elasticlient]: simple library for simplified work with Elasticsearch in C++

[[clojure]]
== Clojure

* https://github.com/mpenet/spandex[Spandex]:
Clojure client, based on the new official low-level rest-client.

* https://github.com/clojurewerkz/elastisch[Elastisch]:
Clojure client.

[[coldfusion]]
== ColdFusion (CFML)

* https://www.forgebox.io/view/cbelasticsearch[cbElasticSearch]:
Native ColdFusion (CFML) support for the ColdBox MVC Platform, providing a fluent search interface for Elasticsearch, in addition to a CacheBox cache provider and a LogBox appender for logging.

[[erlang]]
== Erlang

* https://github.com/tsloughter/erlastic_search[erlastic_search]:
Erlang client using HTTP.

* https://github.com/datahogs/tirexs[Tirexs]:
An https://github.com/elixir-lang/elixir[Elixir]-based API/DSL, inspired by
https://github.com/karmi/tire[Tire]. Ready to use in a pure Erlang
environment.

* https://github.com/sashman/elasticsearch_elixir_bulk_processor[Elixir Bulk Processor]:
Dynamically configurable Elixir port of the
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs-bulk-processor.html[Bulk Processor].
Implemented using GenStages to handle backpressure.

[[go]]
== Go

Also see the {client}/go-api/current/index.html[official Elasticsearch Go client].

* https://github.com/mattbaird/elastigo[elastigo]:
Go client.

* https://github.com/olivere/elastic[elastic]:
Elasticsearch client for Google Go.

* https://github.com/softctrl/elk[elk]:
Golang lib for Elasticsearch client.

[[haskell]]
== Haskell
* https://github.com/bitemyapp/bloodhound[bloodhound]:
Haskell client and DSL.

[[java]]
== Java

Also see the {client}/java-api/current/index.html[official Elasticsearch Java client].

* https://github.com/otto-de/flummi[Flummi]:
Java REST client with a comprehensive query DSL API.

* https://github.com/searchbox-io/Jest[Jest]:
Java REST client.

[[javascript]]
== JavaScript

Also see the {client}/javascript-api/current/index.html[official Elasticsearch JavaScript client].

[[kotlin]]
== Kotlin

* https://github.com/mbuhot/eskotlin[ES Kotlin]:
Elasticsearch Query DSL for Kotlin, based on the {client}/java-api/current/index.html[official Elasticsearch Java client].

* https://github.com/jillesvangurp/es-kotlin-wrapper-client[ES Kotlin Wrapper Client]:
Kotlin extension functions and abstractions for the {client}/java-api/current/index.html[official Elasticsearch Highlevel Client]. Aims to reduce the amount of boilerplate needed to do searches, bulk indexing, and other common things users do with the client.

[[lua]]
== Lua

* https://github.com/DhavalKapil/elasticsearch-lua[elasticsearch-lua]:
Lua client for Elasticsearch.

[[dotnet]]
== .NET

Also see the {client}/net-api/current/index.html[official Elasticsearch .NET client].

[[perl]]
== Perl

Also see the {client}/perl-api/current/index.html[official Elasticsearch Perl client].

* https://metacpan.org/pod/Elastijk[Elastijk]: A low-level, minimal HTTP client.

[[php]]
== PHP

Also see the {client}/php-api/current/index.html[official Elasticsearch PHP client].

* https://github.com/ruflin/Elastica[Elastica]:
PHP client.

* https://github.com/nervetattoo/elasticsearch[elasticsearch]:
PHP client.

* https://github.com/madewithlove/elasticsearcher[elasticsearcher]:
Agnostic, lightweight package on top of the Elasticsearch PHP client. Its main goal is to allow for easier structuring of queries and indices in your application. It does not want to hide or replace the functionality of the Elasticsearch PHP client.

[[python]]
== Python

Also see the {client}/python-api/current/index.html[official Elasticsearch Python client].

[[r]]
== R

* https://github.com/ropensci/elastic[elastic]:
A low-level R client for Elasticsearch.

* https://github.com/ropensci/elasticdsl[elasticdsl]:
A high-level R DSL for Elasticsearch, wrapping the elastic R client.

* https://github.com/UptakeOpenSource/uptasticsearch[uptasticsearch]:
An R client tailored to data science workflows.

[[ruby]]
== Ruby

Also see the {client}/ruby-api/current/index.html[official Elasticsearch Ruby client].

* https://github.com/printercu/elastics-rb[elastics]:
Tiny client with built-in zero-downtime migrations and ActiveRecord integration.

* https://github.com/toptal/chewy[chewy]:
An ODM and wrapper for the official Elasticsearch client.

* https://github.com/ankane/searchkick[Searchkick]:
Intelligent search made easy.

* https://github.com/artsy/estella[Estella]:
Make your Ruby models searchable.

[[rust]]
== Rust

* https://github.com/benashford/rs-es[rs-es]:
A REST API client with a strongly-typed Query DSL.

* https://github.com/elastic-rs/elastic[elastic]:
A modular REST API client that supports freeform queries.

[[scala]]
== Scala

* https://github.com/sksamuel/elastic4s[elastic4s]:
Scala DSL.

* https://github.com/gphat/wabisabi[wabisabi]:
Asynchronous REST API Scala client.

* https://github.com/workday/escalar[escalar]:
Type-safe Scala wrapper for the REST API.

* https://github.com/SumoLogic/elasticsearch-client[elasticsearch-client]:
Scala DSL that uses the REST API. Akka and AWS helpers included.

[[smalltalk]]
== Smalltalk

* https://github.com/newapplesho/elasticsearch-smalltalk[elasticsearch-smalltalk]:
Pharo Smalltalk client for Elasticsearch.

[[vertx]]
== Vert.x

* https://github.com/reactiverse/elasticsearch-client[elasticsearch-client]:
An Elasticsearch client for Eclipse Vert.x.
@ -1,102 +0,0 @@
[[anatomy]]
== API Anatomy

Once a <<client,GClient>> has been
obtained, all of the Elasticsearch APIs can be executed on it. Each Groovy
API is exposed using three different mechanisms.

[[closure]]
=== Closure Request

The first type is to simply provide the request as a Closure, which
automatically gets resolved into the respective request instance (for
the index API, it's the `IndexRequest` class). The API returns a special
future, called `GActionFuture`. This is a groovier version of the
Elasticsearch Java `ActionFuture` (in turn a nicer extension to Java's own
`Future`), which allows you to register listeners (closures) on it for
success and failure, as well as to block for the response. For example:

[source,groovy]
--------------------------------------------------
def indexR = client.index {
    index "test"
    type "_doc"
    id "1"
    source {
        test = "value"
        complex {
            value1 = "value1"
            value2 = "value2"
        }
    }
}

println "Indexed $indexR.response.id into $indexR.response.index/$indexR.response.type"
--------------------------------------------------

In the above example, calling `indexR.response` will simply block for
the response. We can also block for the response for a specific timeout:

[source,groovy]
--------------------------------------------------
IndexResponse response = indexR.response "5s" // block for 5 seconds, same as:
response = indexR.response 5, TimeValue.SECONDS
--------------------------------------------------

We can also register closures that will be called on success and on
failure:

[source,groovy]
--------------------------------------------------
indexR.success = {IndexResponse response ->
    println "Indexed $response.id into $response.index/$response.type"
}
indexR.failure = {Throwable t ->
    println "Failed to index: $t.message"
}
--------------------------------------------------

[[request]]
=== Request

This option allows you to pass the actual instance of the request (instead
of a closure) as a parameter. The rest is similar to the closure-as-a-parameter
option (the `GActionFuture` handling). For example:

[source,groovy]
--------------------------------------------------
def indexR = client.index (new IndexRequest(
        index: "test",
        type: "_doc",
        id: "1",
        source: {
            test = "value"
            complex {
                value1 = "value1"
                value2 = "value2"
            }
        }))

println "Indexed $indexR.response.id into $indexR.response.index/$indexR.response.type"
--------------------------------------------------

[[java-like]]
=== Java Like

The last option is to provide an actual instance of the API request, and
an `ActionListener` for the callback. This is exactly like the Java API,
with the added `gexecute` which returns the `GActionFuture`:

[source,groovy]
--------------------------------------------------
def indexR = node.client.prepareIndex("test", "_doc", "1").setSource({
    test = "value"
    complex {
        value1 = "value1"
        value2 = "value2"
    }
}).gexecute()
--------------------------------------------------
@ -1,59 +0,0 @@
[[client]]
== Client

Obtaining an Elasticsearch Groovy `GClient` (a `GClient` is a simple
wrapper on top of the Java `Client`) is simple. The most common way to
get a client is by starting an embedded `Node` which acts as a node
within the cluster.

[[node-client]]
=== Node Client

A Node-based client is the simplest form of `GClient` to start
executing operations against Elasticsearch.

[source,groovy]
--------------------------------------------------
import org.elasticsearch.groovy.client.GClient
import org.elasticsearch.groovy.node.GNode
import static org.elasticsearch.groovy.node.GNodeBuilder.nodeBuilder

// on startup

GNode node = nodeBuilder().node();
GClient client = node.client();

// on shutdown

node.close();
--------------------------------------------------

Since Elasticsearch allows configuration using JSON-based settings,
the configuration itself can be done using a closure that represents the
JSON:

[source,groovy]
--------------------------------------------------
import org.elasticsearch.groovy.node.GNode
import org.elasticsearch.groovy.node.GNodeBuilder
import static org.elasticsearch.groovy.node.GNodeBuilder.*

// on startup

GNodeBuilder nodeBuilder = nodeBuilder();
nodeBuilder.settings {
    node {
        client = true
    }
    cluster {
        name = "test"
    }
}

GNode node = nodeBuilder.node()

// on shutdown

node.stop().close()
--------------------------------------------------
@ -1,16 +0,0 @@
[[delete]]
== Delete API

The delete API is very similar to the
// {javaclient}/java-docs-delete.html[]
Java delete API. Here is an
example:

[source,groovy]
--------------------------------------------------
def deleteF = node.client.delete {
    index "test"
    type "_doc"
    id "1"
}
--------------------------------------------------
@ -1,19 +0,0 @@
[[get]]
== Get API

The get API is very similar to the
// {javaclient}/java-docs-get.html[]
Java get API. The main benefit
of using Groovy is handling the source content. It can be automatically
converted to a `Map`, which means using Groovy to navigate it is simple:

[source,groovy]
--------------------------------------------------
def getF = node.client.get {
    index "test"
    type "_doc"
    id "1"
}

println "Result of field2: $getF.response.source.complex.field2"
--------------------------------------------------
@ -1,48 +0,0 @@
= Groovy API

include::../Versions.asciidoc[]

[preface]
== Preface

This section describes the http://groovy-lang.org/[Groovy] API
Elasticsearch provides. All Elasticsearch APIs are executed using a
<<client,GClient>>, and are completely
asynchronous in nature (they either accept a listener, or return a
future).

The Groovy API is a wrapper on top of the
{javaclient}[Java API], exposing it in a groovier
manner. The execution options for each API follow a similar pattern and
are covered in <<anatomy>>.

[[maven]]
=== Maven Repository

The Groovy API is hosted on
http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch-groovy%22[Maven
Central].

For example, you can define the latest version in your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-groovy</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------

include::anatomy.asciidoc[]

include::client.asciidoc[]

include::index_.asciidoc[]

include::get.asciidoc[]

include::delete.asciidoc[]

include::search.asciidoc[]
@ -1,32 +0,0 @@
[[index_]]
== Index API

The index API is very similar to the
// {javaclient}/java-docs-index.html[]
Java index API. The Groovy
extension to it is the ability to provide the indexed source using a
closure. For example:

[source,groovy]
--------------------------------------------------
def indexR = client.index {
    index "test"
    type "_doc"
    id "1"
    source {
        test = "value"
        complex {
            value1 = "value1"
            value2 = "value2"
        }
    }
}
--------------------------------------------------

In the above example, the source closure itself gets transformed into
XContent (defaults to JSON). In order to change how the source closure
is serialized, a global (static) setting can be set on the `GClient` by
changing the `indexContentType` field.

Note also that the `source` can be set using the typical Java-based
APIs; the `Closure` option is a Groovy extension.
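
As a sketch of the `indexContentType` setting mentioned above (the
`XContentType` import path and the SMILE value are assumptions for
illustration, not taken from the removed docs):

[source,groovy]
--------------------------------------------------
import org.elasticsearch.common.xcontent.XContentType

// indexContentType is described above as a global (static) field on
// GClient controlling how source closures are serialized; switching it
// to SMILE would emit binary SMILE instead of the default JSON.
GClient.indexContentType = XContentType.SMILE
--------------------------------------------------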
@ -1,116 +0,0 @@
[[search]]
== Search API

The search API is very similar to the
// {javaclient}/java-search.html[]
Java search API. The Groovy
extension allows you to provide the search source to execute as a `Closure`,
including the query itself (similar to the GORM criteria builder):

[source,groovy]
--------------------------------------------------
def search = node.client.search {
    indices "test"
    types "_doc"
    source {
        query {
            term(test: "value")
        }
    }
}

search.response.hits.each {SearchHit hit ->
    println "Got hit $hit.id from $hit.index/$hit.type"
}
--------------------------------------------------

It can also be executed using the "Java API" while still using a closure
for the query:

[source,groovy]
--------------------------------------------------
def search = node.client.prepareSearch("test").setQuery({
    term(test: "value")
}).gexecute();

search.response.hits.each {SearchHit hit ->
    println "Got hit $hit.id from $hit.index/$hit.type"
}
--------------------------------------------------

The format of the search `Closure` follows the same JSON syntax as the
{ref}/search-search.html[Search API] request.

[[more-examples]]
=== More examples

Term query where multiple values are provided (see
{ref}/query-dsl-terms-query.html[terms]):

[source,groovy]
--------------------------------------------------
def search = node.client.search {
    indices "test"
    types "_doc"
    source {
        query {
            terms(test: ["value1", "value2"])
        }
    }
}
--------------------------------------------------

Query string (see
{ref}/query-dsl-query-string-query.html[query string]):

[source,groovy]
--------------------------------------------------
def search = node.client.search {
    indices "test"
    types "_doc"
    source {
        query {
            query_string(
                fields: ["test"],
                query: "value1 value2")
        }
    }
}
--------------------------------------------------

Pagination (see
{ref}/search-request-from-size.html[from/size]):

[source,groovy]
--------------------------------------------------
def search = node.client.search {
    indices "test"
    types "_doc"
    source {
        from = 0
        size = 10
        query {
            term(test: "value")
        }
    }
}
--------------------------------------------------

Sorting (see {ref}/search-request-sort.html[sort]):

[source,groovy]
--------------------------------------------------
def search = node.client.search {
    indices "test"
    types "_doc"
    source {
        query {
            term(test: "value")
        }
        sort = [
            date: [ order: "desc"]
        ]
    }
}
--------------------------------------------------
@ -1,76 +0,0 @@
[[java-admin-cluster-health]]
==== Cluster Health

[[java-admin-cluster-health-health]]
===== Health

The cluster health API allows you to get a very simple status of the health of the cluster, and can also give you
some technical information about the cluster status per index:

[source,java]
--------------------------------------------------
ClusterHealthResponse healths = client.admin().cluster().prepareHealth().get(); <1>
String clusterName = healths.getClusterName(); <2>
int numberOfDataNodes = healths.getNumberOfDataNodes(); <3>
int numberOfNodes = healths.getNumberOfNodes(); <4>

for (ClusterIndexHealth health : healths.getIndices().values()) { <5>
    String index = health.getIndex(); <6>
    int numberOfShards = health.getNumberOfShards(); <7>
    int numberOfReplicas = health.getNumberOfReplicas(); <8>
    ClusterHealthStatus status = health.getStatus(); <9>
}
--------------------------------------------------
<1> Get information for all indices
<2> Access the cluster name
<3> Get the total number of data nodes
<4> Get the total number of nodes
<5> Iterate over all indices
<6> Index name
<7> Number of shards
<8> Number of replicas
<9> Index status

[[java-admin-cluster-health-wait-status]]
===== Wait for status

You can use the cluster health API to wait for a specific status for the whole cluster or for a given index:

[source,java]
--------------------------------------------------
client.admin().cluster().prepareHealth() <1>
        .setWaitForYellowStatus() <2>
        .get();
client.admin().cluster().prepareHealth("company") <3>
        .setWaitForGreenStatus() <4>
        .get();

client.admin().cluster().prepareHealth("employee") <5>
        .setWaitForGreenStatus() <6>
        .setTimeout(TimeValue.timeValueSeconds(2)) <7>
        .get();
--------------------------------------------------
<1> Prepare a health request
<2> Wait for the cluster to be yellow
<3> Prepare the health request for index `company`
<4> Wait for the index to be green
<5> Prepare the health request for index `employee`
<6> Wait for the index to be green
<7> Wait at most 2 seconds

If the index does not have the expected status and you want to fail in that case, you need
to explicitly interpret the result:

[source,java]
--------------------------------------------------
ClusterHealthResponse response = client.admin().cluster().prepareHealth("company")
        .setWaitForGreenStatus() <1>
        .get();

ClusterHealthStatus status = response.getIndices().get("company").getStatus();
if (!status.equals(ClusterHealthStatus.GREEN)) {
    throw new RuntimeException("Index is in " + status + " state"); <2>
}
--------------------------------------------------
<1> Wait for the index to be green
<2> Throw an exception if not `GREEN`
@ -1,16 +0,0 @@
[[java-admin-cluster]]
=== Cluster Administration

To access the cluster Java API, you need to call the `cluster()` method from an <<java-admin,`AdminClient`>>:

[source,java]
--------------------------------------------------
ClusterAdminClient clusterAdminClient = client.admin().cluster();
--------------------------------------------------

[NOTE]
In the rest of this guide, we will use `client.admin().cluster()`.

include::health.asciidoc[]

include::stored-scripts.asciidoc[]
@ -1,29 +0,0 @@
[[stored-scripts]]
==== Stored Scripts API

The stored script API allows you to interact with scripts and templates
stored in Elasticsearch. It can be used to create, update, get,
and delete stored scripts and templates.

[source,java]
--------------------------------------------------
PutStoredScriptResponse putResponse = client.admin().cluster().preparePutStoredScript()
        .setId("script1")
        .setContent(new BytesArray("{\"script\": {\"lang\": \"painless\", \"source\": \"_score * doc['my_numeric_field'].value\"} }"), XContentType.JSON)
        .get();

GetStoredScriptResponse getResponse = client.admin().cluster().prepareGetStoredScript()
        .setId("script1")
        .get();

DeleteStoredScriptResponse deleteResponse = client.admin().cluster().prepareDeleteStoredScript()
        .setId("script1")
        .get();
--------------------------------------------------

To store templates, simply use "mustache" for the scriptLang.

===== Script Language

The put stored script API allows you to set the language of the stored script.
If one is not provided, the default scripting language will be used.
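
For instance, storing a search template follows the same pattern as the
Painless example above. This is a sketch only: the `template1` id and the
template body are illustrative values; only the `"lang": "mustache"` part is
what the template note above prescribes.

[source,java]
--------------------------------------------------
// Store a mustache search template; id and body are illustrative.
client.admin().cluster().preparePutStoredScript()
        .setId("template1")
        .setContent(new BytesArray(
            "{\"script\": {\"lang\": \"mustache\", \"source\": " +
            "{\"query\": {\"match\": {\"title\": \"{{query_string}}\"}}}}}"), XContentType.JSON)
        .get();
--------------------------------------------------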
@ -1,18 +0,0 @@
[[java-admin]]
== Java API Administration

Elasticsearch provides a full Java API to deal with administration tasks.

To access them, you need to call the `admin()` method from a client to get an `AdminClient`:

[source,java]
--------------------------------------------------
AdminClient adminClient = client.admin();
--------------------------------------------------

[NOTE]
In the rest of this guide, we will use `client.admin()`.

include::indices/index.asciidoc[]

include::cluster/index.asciidoc[]
@ -1,28 +0,0 @@
[[java-admin-indices-create-index]]
==== Create Index

Using an <<java-admin-indices,`IndicesAdminClient`>>, you can create an index with all default settings and no mapping:

[source,java]
--------------------------------------------------
client.admin().indices().prepareCreate("twitter").get();
--------------------------------------------------

[discrete]
[[java-admin-indices-create-index-settings]]
===== Index Settings

Each index created can have specific settings associated with it.

[source,java]
--------------------------------------------------
client.admin().indices().prepareCreate("twitter")
        .setSettings(Settings.builder() <1>
                .put("index.number_of_shards", 3)
                .put("index.number_of_replicas", 2)
        )
        .get(); <2>
--------------------------------------------------
<1> Settings for this index
<2> Execute the action and wait for the result
@ -1,22 +0,0 @@
[[java-admin-indices-get-settings]]
==== Get Settings

The get settings API allows you to retrieve the settings of one or more indices:

[source,java]
--------------------------------------------------
GetSettingsResponse response = client.admin().indices()
        .prepareGetSettings("company", "employee").get(); <1>
for (ObjectObjectCursor<String, Settings> cursor : response.getIndexToSettings()) { <2>
    String index = cursor.key; <3>
    Settings settings = cursor.value; <4>
    Integer shards = settings.getAsInt("index.number_of_shards", null); <5>
    Integer replicas = settings.getAsInt("index.number_of_replicas", null); <6>
}
--------------------------------------------------
<1> Get settings for indices `company` and `employee`
<2> Iterate over results
<3> Index name
<4> Settings for the given index
<5> Number of shards for this index
<6> Number of replicas for this index
@ -1,21 +0,0 @@
[[java-admin-indices]]
=== Indices Administration

To access the indices Java API, you need to call the `indices()` method from an <<java-admin,`AdminClient`>>:

[source,java]
--------------------------------------------------
IndicesAdminClient indicesAdminClient = client.admin().indices();
--------------------------------------------------

[NOTE]
In the rest of this guide, we will use `client.admin().indices()`.

include::create-index.asciidoc[]

include::put-mapping.asciidoc[]

include::refresh.asciidoc[]

include::get-settings.asciidoc[]
include::update-settings.asciidoc[]
@ -1,30 +0,0 @@
[[java-admin-indices-put-mapping]]

==== Put Mapping

You can add mappings at index creation time:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> Add a `_doc` type with a field called `message` that has the data type `text`.

There are several variants of the above `addMapping` method, some taking an
`XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure
to check the javadocs to pick the simplest one for your use case. A sketch of
the string-based variant appears below.
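
This is a rough, hand-written sketch of one such variant (written out here
because the original example was pulled in from `IndicesDocumentationIT.java`;
the field definition is an assumed illustration):

[source,java]
--------------------------------------------------
// Create the index and add a mapping from a JSON string; the message
// field definition below is illustrative, not from the original include.
client.admin().indices().prepareCreate("twitter")
        .addMapping("_doc", "{\n" +
                "  \"properties\": {\n" +
                "    \"message\": { \"type\": \"text\" }\n" +
                "  }\n" +
                "}", XContentType.JSON)
        .get();
--------------------------------------------------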

The PUT mapping API also allows for updating the mapping after index
creation. In this case you can provide the mapping as a String, similar
to the REST API syntax:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on the existing index called `twitter`
<2> Adds a new field `name` to the mapping
<3> The type can also be provided within the source

:base-dir!:
@ -1,19 +0,0 @@
[[java-admin-indices-refresh]]
==== Refresh

The refresh API allows you to explicitly refresh one or more indices:

[source,java]
--------------------------------------------------
client.admin().indices().prepareRefresh().get(); <1>
client.admin().indices()
        .prepareRefresh("twitter") <2>
        .get();
client.admin().indices()
        .prepareRefresh("twitter", "company") <3>
        .get();
--------------------------------------------------
<1> Refresh all indices
<2> Refresh one index
<3> Refresh many indices
@ -1,16 +0,0 @@
[[java-admin-indices-update-settings]]
==== Update Indices Settings

You can change index settings by calling:

[source,java]
--------------------------------------------------
client.admin().indices().prepareUpdateSettings("twitter") <1>
        .setSettings(Settings.builder() <2>
                .put("index.number_of_replicas", 0)
        )
        .get();
--------------------------------------------------
<1> Index to update
<2> Settings
@ -1,33 +0,0 @@
[[java-aggregations-bucket]]

include::bucket/global-aggregation.asciidoc[]

include::bucket/filter-aggregation.asciidoc[]

include::bucket/filters-aggregation.asciidoc[]

include::bucket/missing-aggregation.asciidoc[]

include::bucket/nested-aggregation.asciidoc[]

include::bucket/reverse-nested-aggregation.asciidoc[]

include::bucket/children-aggregation.asciidoc[]

include::bucket/terms-aggregation.asciidoc[]

include::bucket/significantterms-aggregation.asciidoc[]

include::bucket/range-aggregation.asciidoc[]

include::bucket/daterange-aggregation.asciidoc[]

include::bucket/iprange-aggregation.asciidoc[]

include::bucket/histogram-aggregation.asciidoc[]

include::bucket/datehistogram-aggregation.asciidoc[]

include::bucket/geodistance-aggregation.asciidoc[]

include::bucket/geohashgrid-aggregation.asciidoc[]
@ -1,35 +0,0 @@
[[java-aggs-bucket-children]]
==== Children Aggregation

Here is how you can use the
{ref}/search-aggregations-bucket-children-aggregation.html[Children Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .children("agg", "reseller"); <1>
--------------------------------------------------
<1> `"agg"` is the name of the aggregation and `"reseller"` is the child type

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.join.aggregations.Children;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Children agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
--------------------------------------------------
@ -1,73 +0,0 @@
[[java-aggs-bucket-datehistogram]]
==== Date Histogram Aggregation

Here is how you can use the
{ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date Histogram Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .dateHistogram("agg")
        .field("dateOfBirth")
        .calendarInterval(DateHistogramInterval.YEAR);
--------------------------------------------------

Or if you want to set an interval of 10 days:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .dateHistogram("agg")
        .field("dateOfBirth")
        .fixedInterval(DateHistogramInterval.days(10));
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");

// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
    DateTime key = (DateTime) entry.getKey(); // Key
    String keyAsString = entry.getKeyAsString(); // Key as String
    long docCount = entry.getDocCount(); // Doc count

    logger.info("key [{}], date [{}], doc_count [{}]", keyAsString, key.getYear(), docCount);
}
--------------------------------------------------

This will basically produce, for the first example:

[source,text]
--------------------------------------------------
key [1942-01-01T00:00:00.000Z], date [1942], doc_count [1]
key [1945-01-01T00:00:00.000Z], date [1945], doc_count [1]
key [1946-01-01T00:00:00.000Z], date [1946], doc_count [1]
...
key [2005-01-01T00:00:00.000Z], date [2005], doc_count [1]
key [2007-01-01T00:00:00.000Z], date [2007], doc_count [2]
key [2008-01-01T00:00:00.000Z], date [2008], doc_count [3]
--------------------------------------------------

===== Order

Supports the same order functionality as the <<java-aggs-bucket-terms,`Terms Aggregation`>>; a sketch follows.
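
As a sketch of that shared ordering (ordering by ascending document count via
`BucketOrder.count(true)` is just an illustrative choice, not from the
removed docs):

[source,java]
--------------------------------------------------
// Order the yearly buckets by ascending document count using the shared
// BucketOrder helper that the Terms Aggregation also uses.
AggregationBuilder aggregation =
    AggregationBuilders
        .dateHistogram("agg")
        .field("dateOfBirth")
        .calendarInterval(DateHistogramInterval.YEAR)
        .order(BucketOrder.count(true));
--------------------------------------------------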
@ -1,59 +0,0 @@
[[java-aggs-bucket-daterange]]
==== Date Range Aggregation

Here is how you can use the
{ref}/search-aggregations-bucket-daterange-aggregation.html[Date Range Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .dateRange("agg")
        .field("dateOfBirth")
        .format("yyyy")
        .addUnboundedTo("1950")    // from -infinity to 1950 (excluded)
        .addRange("1950", "1960")  // from 1950 to 1960 (excluded)
        .addUnboundedFrom("1960"); // from 1960 to +infinity
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.range.Range;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");

// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString();              // Date range as key
    DateTime fromAsDate = (DateTime) entry.getFrom(); // Date bucket from as a Date
    DateTime toAsDate = (DateTime) entry.getTo();     // Date bucket to as a Date
    long docCount = entry.getDocCount();              // Doc count

    logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsDate, toAsDate, docCount);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
key [*-1950], from [null], to [1950-01-01T00:00:00.000Z], doc_count [8]
key [1950-1960], from [1950-01-01T00:00:00.000Z], to [1960-01-01T00:00:00.000Z], doc_count [5]
key [1960-*], from [1960-01-01T00:00:00.000Z], to [null], doc_count [37]
--------------------------------------------------
@ -1,34 +0,0 @@
[[java-aggs-bucket-filter]]
==== Filter Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-filter-aggregation.html[Filter Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilders
    .filter("agg", QueryBuilders.termQuery("gender", "male"));
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.filter.Filter;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Filter agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
--------------------------------------------------
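
A `filter` aggregation is most useful as a container for sub-aggregations. A minimal sketch (the `avg_height` name and `height` field are illustrative, not part of the original example):

[source,java]
--------------------------------------------------
// Average height computed over male documents only
AggregationBuilder aggregation =
    AggregationBuilders
        .filter("agg", QueryBuilders.termQuery("gender", "male"))
        .subAggregation(AggregationBuilders.avg("avg_height").field("height"));
--------------------------------------------------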
@ -1,51 +0,0 @@
[[java-aggs-bucket-filters]]
==== Filters Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-filters-aggregation.html[Filters Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .filters("agg",
            new FiltersAggregator.KeyedFilter("men", QueryBuilders.termQuery("gender", "male")),
            new FiltersAggregator.KeyedFilter("women", QueryBuilders.termQuery("gender", "female")));
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.filters.Filters;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Filters agg = sr.getAggregations().get("agg");

// For each entry
for (Filters.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString(); // bucket key
    long docCount = entry.getDocCount(); // Doc count
    logger.info("key [{}], doc_count [{}]", key, docCount);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
key [men], doc_count [4982]
key [women], doc_count [5018]
--------------------------------------------------
@ -1,58 +0,0 @@
[[java-aggs-bucket-geodistance]]
==== Geo Distance Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-geodistance-aggregation.html[Geo Distance Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .geoDistance("agg", new GeoPoint(48.84237171118314, 2.33320027692004))
        .field("address.location")
        .unit(DistanceUnit.KILOMETERS)
        .addUnboundedTo(3.0)
        .addRange(3.0, 10.0)
        .addRange(10.0, 500.0);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.range.Range;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");

// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString();    // key as String
    Number from = (Number) entry.getFrom(); // bucket from value
    Number to = (Number) entry.getTo();     // bucket to value
    long docCount = entry.getDocCount();    // Doc count

    logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
key [*-3.0], from [0.0], to [3.0], doc_count [161]
key [3.0-10.0], from [3.0], to [10.0], doc_count [460]
key [10.0-500.0], from [10.0], to [500.0], doc_count [4925]
--------------------------------------------------
@ -1,57 +0,0 @@
[[java-aggs-bucket-geohashgrid]]
==== Geo Hash Grid Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-geohashgrid-aggregation.html[Geo Hash Grid Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .geohashGrid("agg")
        .field("address.location")
        .precision(4);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGrid;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
GeoHashGrid agg = sr.getAggregations().get("agg");

// For each entry
for (GeoHashGrid.Bucket entry : agg.getBuckets()) {
    String keyAsString = entry.getKeyAsString(); // key as String
    GeoPoint key = (GeoPoint) entry.getKey();    // key as geo point
    long docCount = entry.getDocCount();         // Doc count

    logger.info("key [{}], point {}, doc_count [{}]", keyAsString, key, docCount);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
key [gbqu], point [47.197265625, -1.58203125], doc_count [1282]
key [gbvn], point [50.361328125, -4.04296875], doc_count [1248]
key [u1j0], point [50.712890625, 7.20703125], doc_count [1156]
key [u0j2], point [45.087890625, 7.55859375], doc_count [1138]
...
--------------------------------------------------
@ -1,35 +0,0 @@
[[java-aggs-bucket-global]]
==== Global Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-global-aggregation.html[Global Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilders
    .global("agg")
    .subAggregation(AggregationBuilders.terms("genders").field("gender"));
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.global.Global;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Global agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
--------------------------------------------------
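
Since the request above attached a `genders` sub-aggregation, you would typically also read it back from the global bucket. A minimal sketch (assuming the `Terms` import shown in the <<java-aggs-bucket-terms,Terms Aggregation>> section):

[source,java]
--------------------------------------------------
// The "genders" terms are computed over all documents,
// regardless of the query scope of the search request
Terms genders = agg.getAggregations().get("genders");
for (Terms.Bucket entry : genders.getBuckets()) {
    logger.info("gender [{}], doc_count [{}]", entry.getKey(), entry.getDocCount());
}
--------------------------------------------------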
@ -1,48 +0,0 @@
[[java-aggs-bucket-histogram]]
==== Histogram Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .histogram("agg")
        .field("height")
        .interval(1);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");

// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
    Number key = (Number) entry.getKey(); // Key
    long docCount = entry.getDocCount();  // Doc count

    logger.info("key [{}], doc_count [{}]", key, docCount);
}
--------------------------------------------------

===== Order

Supports the same order functionality as the <<java-aggs-bucket-terms,`Terms Aggregation`>>, for example:
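
A minimal sketch ordering the histogram buckets by ascending key (the `BucketOrder` helper is described in the Terms section):

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .histogram("agg")
        .field("height")
        .interval(1)
        .order(BucketOrder.key(true)); // buckets sorted by ascending key
--------------------------------------------------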
@ -1,79 +0,0 @@
[[java-aggs-bucket-iprange]]
==== IP Range Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-iprange-aggregation.html[IP Range Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .ipRange("agg")
        .field("ip")
        .addUnboundedTo("192.168.1.0")          // from -infinity to 192.168.1.0 (excluded)
        .addRange("192.168.1.0", "192.168.2.0") // from 192.168.1.0 to 192.168.2.0 (excluded)
        .addUnboundedFrom("192.168.2.0");       // from 192.168.2.0 to +infinity
--------------------------------------------------

Note that you could also use IP masks as ranges:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .ipRange("agg")
        .field("ip")
        .addMaskRange("192.168.0.0/32")
        .addMaskRange("192.168.0.0/24")
        .addMaskRange("192.168.0.0/16");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.range.Range;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");

// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString();           // IP range as key
    String fromAsString = entry.getFromAsString(); // IP bucket from as a String
    String toAsString = entry.getToAsString();     // IP bucket to as a String
    long docCount = entry.getDocCount();           // Doc count

    logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, fromAsString, toAsString, docCount);
}
--------------------------------------------------

For the first example, this will basically produce:

[source,text]
--------------------------------------------------
key [*-192.168.1.0], from [null], to [192.168.1.0], doc_count [13]
key [192.168.1.0-192.168.2.0], from [192.168.1.0], to [192.168.2.0], doc_count [14]
key [192.168.2.0-*], from [192.168.2.0], to [null], doc_count [23]
--------------------------------------------------

And for the second one (using IP masks):

[source,text]
--------------------------------------------------
key [192.168.0.0/32], from [192.168.0.0], to [192.168.0.1], doc_count [0]
key [192.168.0.0/24], from [192.168.0.0], to [192.168.1.0], doc_count [13]
key [192.168.0.0/16], from [192.168.0.0], to [192.169.0.0], doc_count [50]
--------------------------------------------------
@ -1,34 +0,0 @@
[[java-aggs-bucket-missing]]
==== Missing Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-missing-aggregation.html[Missing Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilders.missing("agg").field("gender");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.missing.Missing;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Missing agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
--------------------------------------------------
@ -1,34 +0,0 @@
[[java-aggs-bucket-nested]]
==== Nested Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-nested-aggregation.html[Nested Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilders
    .nested("agg", "resellers");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Nested agg = sr.getAggregations().get("agg");
agg.getDocCount(); // Doc count
--------------------------------------------------
@ -1,58 +0,0 @@
[[java-aggs-bucket-range]]
==== Range Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-range-aggregation.html[Range Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .range("agg")
        .field("height")
        .addUnboundedTo(1.0f)    // from -infinity to 1.0 (excluded)
        .addRange(1.0f, 1.5f)    // from 1.0 to 1.5 (excluded)
        .addUnboundedFrom(1.5f); // from 1.5 to +infinity
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.range.Range;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Range agg = sr.getAggregations().get("agg");

// For each entry
for (Range.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString();    // Range as key
    Number from = (Number) entry.getFrom(); // Bucket from
    Number to = (Number) entry.getTo();     // Bucket to
    long docCount = entry.getDocCount();    // Doc count

    logger.info("key [{}], from [{}], to [{}], doc_count [{}]", key, from, to, docCount);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
key [*-1.0], from [-Infinity], to [1.0], doc_count [9]
key [1.0-1.5], from [1.0], to [1.5], doc_count [21]
key [1.5-*], from [1.5], to [Infinity], doc_count [20]
--------------------------------------------------
@ -1,50 +0,0 @@
[[java-aggs-bucket-reverse-nested]]
==== Reverse Nested Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-reverse-nested-aggregation.html[Reverse Nested Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .nested("agg", "resellers")
        .subAggregation(
            AggregationBuilders
                .terms("name").field("resellers.name")
                .subAggregation(
                    AggregationBuilders
                        .reverseNested("reseller_to_product")
                )
        );
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.nested.Nested;
import org.elasticsearch.search.aggregations.bucket.nested.ReverseNested;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Nested agg = sr.getAggregations().get("agg");
Terms name = agg.getAggregations().get("name");
for (Terms.Bucket bucket : name.getBuckets()) {
    ReverseNested resellerToProduct = bucket.getAggregations().get("reseller_to_product");
    resellerToProduct.getDocCount(); // Doc count
}
--------------------------------------------------
@ -1,47 +0,0 @@
[[java-aggs-bucket-significantterms]]
==== Significant Terms Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-significantterms-aggregation.html[Significant Terms Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .significantTerms("significant_countries")
        .field("address.country");

// Let's say you search for men only
SearchResponse sr = client.prepareSearch()
    .setQuery(QueryBuilders.termQuery("gender", "male"))
    .addAggregation(aggregation)
    .get();
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
SignificantTerms agg = sr.getAggregations().get("significant_countries");

// For each entry
for (SignificantTerms.Bucket entry : agg.getBuckets()) {
    entry.getKey();      // Term
    entry.getDocCount(); // Doc count
}
--------------------------------------------------
@ -1,97 +0,0 @@
[[java-aggs-bucket-terms]]
==== Terms Aggregation

Here is how you can use
{ref}/search-aggregations-bucket-terms-aggregation.html[Terms Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilders
    .terms("genders")
    .field("gender");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Terms genders = sr.getAggregations().get("genders");

// For each entry
for (Terms.Bucket entry : genders.getBuckets()) {
    entry.getKey();      // Term
    entry.getDocCount(); // Doc count
}
--------------------------------------------------

===== Order

Import bucket ordering strategy classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.BucketOrder;
--------------------------------------------------

Ordering the buckets by their `doc_count` in an ascending manner:

[source,java]
--------------------------------------------------
AggregationBuilders
    .terms("genders")
    .field("gender")
    .order(BucketOrder.count(true));
--------------------------------------------------

Ordering the buckets alphabetically by their terms in an ascending manner:

[source,java]
--------------------------------------------------
AggregationBuilders
    .terms("genders")
    .field("gender")
    .order(BucketOrder.key(true));
--------------------------------------------------

Ordering the buckets by a single-value metrics sub-aggregation (identified by the aggregation name):

[source,java]
--------------------------------------------------
AggregationBuilders
    .terms("genders")
    .field("gender")
    .order(BucketOrder.aggregation("avg_height", false))
    .subAggregation(
        AggregationBuilders.avg("avg_height").field("height")
    );
--------------------------------------------------

Ordering the buckets by multiple criteria:

[source,java]
--------------------------------------------------
AggregationBuilders
    .terms("genders")
    .field("gender")
    .order(BucketOrder.compound( // in order of priority:
        BucketOrder.aggregation("avg_height", false), // sort by sub-aggregation first
        BucketOrder.count(true)))                     // then bucket count as a tie-breaker
    .subAggregation(
        AggregationBuilders.avg("avg_height").field("height")
    );
--------------------------------------------------
@ -1,27 +0,0 @@
[[java-aggregations-metrics]]

include::metrics/min-aggregation.asciidoc[]

include::metrics/max-aggregation.asciidoc[]

include::metrics/sum-aggregation.asciidoc[]

include::metrics/avg-aggregation.asciidoc[]

include::metrics/stats-aggregation.asciidoc[]

include::metrics/extendedstats-aggregation.asciidoc[]

include::metrics/valuecount-aggregation.asciidoc[]

include::metrics/percentile-aggregation.asciidoc[]

include::metrics/percentile-rank-aggregation.asciidoc[]

include::metrics/cardinality-aggregation.asciidoc[]

include::metrics/geobounds-aggregation.asciidoc[]

include::metrics/tophits-aggregation.asciidoc[]

include::metrics/scripted-metric-aggregation.asciidoc[]
@ -1,37 +0,0 @@
[[java-aggs-metrics-avg]]
==== Avg Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-avg-aggregation.html[Avg Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AvgAggregationBuilder aggregation =
    AggregationBuilders
        .avg("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.avg.Avg;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Avg agg = sr.getAggregations().get("agg");
double value = agg.getValue();
--------------------------------------------------
@ -1,38 +0,0 @@
[[java-aggs-metrics-cardinality]]
==== Cardinality Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-cardinality-aggregation.html[Cardinality Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
CardinalityAggregationBuilder aggregation =
    AggregationBuilders
        .cardinality("agg")
        .field("tags");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.cardinality.Cardinality;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Cardinality agg = sr.getAggregations().get("agg");
long value = agg.getValue();
--------------------------------------------------
@ -1,44 +0,0 @@
[[java-aggs-metrics-extendedstats]]
==== Extended Stats Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-extendedstats-aggregation.html[Extended Stats Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
ExtendedStatsAggregationBuilder aggregation =
    AggregationBuilders
        .extendedStats("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
ExtendedStats agg = sr.getAggregations().get("agg");
double min = agg.getMin();
double max = agg.getMax();
double avg = agg.getAvg();
double sum = agg.getSum();
long count = agg.getCount();
double stdDeviation = agg.getStdDeviation();
double sumOfSquares = agg.getSumOfSquares();
double variance = agg.getVariance();
--------------------------------------------------
@ -1,46 +0,0 @@
[[java-aggs-metrics-geobounds]]
==== Geo Bounds Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-geobounds-aggregation.html[Geo Bounds Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
GeoBoundsAggregationBuilder aggregation =
    AggregationBuilders
        .geoBounds("agg")
        .field("address.location")
        .wrapLongitude(true);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBounds;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
GeoBounds agg = sr.getAggregations().get("agg");
GeoPoint bottomRight = agg.bottomRight();
GeoPoint topLeft = agg.topLeft();
logger.info("bottomRight {}, topLeft {}", bottomRight, topLeft);
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
bottomRight [40.70500764381921, 13.952946866893775], topLeft [53.49603022435221, -4.190029308156676]
--------------------------------------------------
@ -1,37 +0,0 @@
[[java-aggs-metrics-max]]
==== Max Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-max-aggregation.html[Max Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
MaxAggregationBuilder aggregation =
    AggregationBuilders
        .max("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.max.Max;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Max agg = sr.getAggregations().get("agg");
double value = agg.getValue();
--------------------------------------------------
@ -1,37 +0,0 @@
[[java-aggs-metrics-min]]
==== Min Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-min-aggregation.html[Min Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
MinAggregationBuilder aggregation =
    AggregationBuilders
        .min("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.min.Min;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Min agg = sr.getAggregations().get("agg");
double value = agg.getValue();
--------------------------------------------------
@ -1,68 +0,0 @@
[[java-aggs-metrics-percentile]]
==== Percentile Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-percentile-aggregation.html[Percentile Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
PercentilesAggregationBuilder aggregation =
    AggregationBuilders
        .percentiles("agg")
        .field("height");
--------------------------------------------------

You can provide your own percentiles instead of using defaults:

[source,java]
--------------------------------------------------
PercentilesAggregationBuilder aggregation =
    AggregationBuilders
        .percentiles("agg")
        .field("height")
        .percentiles(1.0, 5.0, 10.0, 20.0, 30.0, 75.0, 95.0, 99.0);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentiles;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Percentiles agg = sr.getAggregations().get("agg");
// For each entry
for (Percentile entry : agg) {
    double percent = entry.getPercent(); // Percent
    double value = entry.getValue();     // Value

    logger.info("percent [{}], value [{}]", percent, value);
}
--------------------------------------------------

For the first example, this will basically produce:

[source,text]
--------------------------------------------------
percent [1.0], value [0.814338896154595]
percent [5.0], value [0.8761912455821302]
percent [25.0], value [1.173346540141847]
percent [50.0], value [1.5432023318692198]
percent [75.0], value [1.923915462033674]
percent [95.0], value [2.2273644908535335]
percent [99.0], value [2.284989339108279]
--------------------------------------------------
@ -1,55 +0,0 @@
[[java-aggs-metrics-percentile-rank]]
==== Percentile Ranks Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-percentile-rank-aggregation.html[Percentile Ranks Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
PercentileRanksAggregationBuilder aggregation =
    AggregationBuilders
        .percentileRanks("agg")
        .field("height")
        .values(1.24, 1.91, 2.22);
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.percentiles.Percentile;
import org.elasticsearch.search.aggregations.metrics.percentiles.PercentileRanks;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
PercentileRanks agg = sr.getAggregations().get("agg");
// For each entry
for (Percentile entry : agg) {
    double percent = entry.getPercent(); // Percent
    double value = entry.getValue();     // Value

    logger.info("percent [{}], value [{}]", percent, value);
}
--------------------------------------------------

This will basically produce:

[source,text]
--------------------------------------------------
percent [29.664353095090945], value [1.24]
percent [73.9335313461868], value [1.91]
percent [94.40095147327283], value [2.22]
--------------------------------------------------
@ -1,100 +0,0 @@
[[java-aggs-metrics-scripted-metric]]
==== Scripted Metric Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Scripted Metric Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
    .scriptedMetric("agg")
    .initScript(new Script("state.heights = []"))
    .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"));
--------------------------------------------------

You can also specify a `combine` script which will be executed on each shard:

[source,java]
--------------------------------------------------
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
    .scriptedMetric("agg")
    .initScript(new Script("state.heights = []"))
    .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"))
    .combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum"));
--------------------------------------------------

You can also specify a `reduce` script which will be executed on the node which gets the request:

[source,java]
--------------------------------------------------
ScriptedMetricAggregationBuilder aggregation = AggregationBuilders
    .scriptedMetric("agg")
    .initScript(new Script("state.heights = []"))
    .mapScript(new Script("state.heights.add(doc.gender.value == 'male' ? doc.height.value : -1.0 * doc.height.value)"))
    .combineScript(new Script("double heights_sum = 0.0; for (t in state.heights) { heights_sum += t } return heights_sum"))
    .reduceScript(new Script("double heights_sum = 0.0; for (a in states) { heights_sum += a } return heights_sum"));
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetric;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
ScriptedMetric agg = sr.getAggregations().get("agg");
Object scriptedResult = agg.aggregation();
logger.info("scriptedResult [{}]", scriptedResult);
--------------------------------------------------

Note that the result depends on the script you built.
For the first example, this will basically produce:

[source,text]
--------------------------------------------------
scriptedResult object [ArrayList]
scriptedResult [ {
    "heights" : [ 1.122218480146643, -1.8148918111233887, -1.7626731575142909, ... ]
}, {
    "heights" : [ -0.8046067304119863, -2.0785486707864553, -1.9183567430207953, ... ]
}, {
    "heights" : [ 2.092635728868694, 1.5697545960886536, 1.8826954461968808, ... ]
}, {
    "heights" : [ -2.1863201099468403, 1.6328549117346856, -1.7078288405893842, ... ]
}, {
    "heights" : [ 1.6043904836424177, -2.0736538674414025, 0.9898266674373053, ... ]
} ]
--------------------------------------------------

The second example will produce:

[source,text]
--------------------------------------------------
scriptedResult object [ArrayList]
scriptedResult [-41.279615707402876,
-60.88007362339038,
38.823270659734256,
14.840192739445632,
11.300902755741326]
--------------------------------------------------

The last example will produce:

[source,text]
--------------------------------------------------
scriptedResult object [Double]
scriptedResult [2.171917696507009]
--------------------------------------------------
@ -1,41 +0,0 @@
[[java-aggs-metrics-stats]]
==== Stats Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-stats-aggregation.html[Stats Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
StatsAggregationBuilder aggregation =
    AggregationBuilders
        .stats("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.stats.Stats;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Stats agg = sr.getAggregations().get("agg");
double min = agg.getMin();
double max = agg.getMax();
double avg = agg.getAvg();
double sum = agg.getSum();
long count = agg.getCount();
--------------------------------------------------
@ -1,37 +0,0 @@
[[java-aggs-metrics-sum]]
==== Sum Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-sum-aggregation.html[Sum Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
SumAggregationBuilder aggregation =
    AggregationBuilders
        .sum("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.sum.Sum;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Sum agg = sr.getAggregations().get("agg");
double value = agg.getValue();
--------------------------------------------------
@ -1,79 +0,0 @@
[[java-aggs-metrics-tophits]]
==== Top Hits Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-top-hits-aggregation.html[Top Hits Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .terms("agg").field("gender")
        .subAggregation(
            AggregationBuilders.topHits("top")
        );
--------------------------------------------------

You can use most of the options available for standard search such as `from`, `size`, `sort`, `highlight`, and `explain`:

[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
    AggregationBuilders
        .terms("agg").field("gender")
        .subAggregation(
            AggregationBuilders.topHits("top")
                .explain(true)
                .size(1)
                .from(10)
        );
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.metrics.tophits.TopHits;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Terms agg = sr.getAggregations().get("agg");

// For each entry
for (Terms.Bucket entry : agg.getBuckets()) {
    String key = entry.getKeyAsString(); // bucket key
    long docCount = entry.getDocCount(); // Doc count
    logger.info("key [{}], doc_count [{}]", key, docCount);

    // We ask for top_hits for each bucket
    TopHits topHits = entry.getAggregations().get("top");
    for (SearchHit hit : topHits.getHits().getHits()) {
        logger.info(" -> id [{}], _source [{}]", hit.getId(), hit.getSourceAsString());
    }
}
--------------------------------------------------

For the first example, this will basically produce:

[source,text]
--------------------------------------------------
key [male], doc_count [5107]
 -> id [AUnzSZze9k7PKXtq04x2], _source [{"gender":"male",...}]
 -> id [AUnzSZzj9k7PKXtq04x4], _source [{"gender":"male",...}]
 -> id [AUnzSZzl9k7PKXtq04x5], _source [{"gender":"male",...}]
key [female], doc_count [4893]
 -> id [AUnzSZzM9k7PKXtq04xy], _source [{"gender":"female",...}]
 -> id [AUnzSZzp9k7PKXtq04x8], _source [{"gender":"female",...}]
 -> id [AUnzSZ0W9k7PKXtq04yS], _source [{"gender":"female",...}]
--------------------------------------------------
@ -1,37 +0,0 @@
[[java-aggs-metrics-valuecount]]
==== Value Count Aggregation

Here is how you can use
{ref}/search-aggregations-metrics-valuecount-aggregation.html[Value Count Aggregation]
with the Java API.

===== Prepare aggregation request

Here is an example of how to create the aggregation request:

[source,java]
--------------------------------------------------
ValueCountAggregationBuilder aggregation =
    AggregationBuilders
        .count("agg")
        .field("height");
--------------------------------------------------

===== Use aggregation response

Import Aggregation definition classes:

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount;
--------------------------------------------------

[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
ValueCount agg = sr.getAggregations().get("agg");
long value = agg.getValue();
--------------------------------------------------
@ -1,63 +0,0 @@
[[java-aggs]]
== Aggregations

Elasticsearch provides a full Java API to play with aggregations. See the
{ref}/search-aggregations.html[Aggregations guide].

Use the `AggregationBuilders` factory to create each aggregation you want to compute
and add it to your search request:

[source,java]
--------------------------------------------------
SearchResponse sr = node.client().prepareSearch()
    .setQuery( /* your query */ )
    .addAggregation( /* add an aggregation */ )
    .execute().actionGet();
--------------------------------------------------

Note that you can add more than one aggregation, as in the sketch below. See
{ref}/search-search.html[Search Java API] for details.
|
|
||||||
|
|
||||||
To build aggregation requests, use `AggregationBuilders` helpers. Just import them
|
|
||||||
in your class:
|
|
||||||
|
|
||||||
[source,java]
|
|
||||||
--------------------------------------------------
|
|
||||||
import org.elasticsearch.search.aggregations.AggregationBuilders;
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
=== Structuring aggregations
|
|
||||||
|
|
||||||
As explained in the
|
|
||||||
{ref}/search-aggregations.html[Aggregations guide], you can define
|
|
||||||
sub aggregations inside an aggregation.
|
|
||||||
|
|
||||||
An aggregation could be a metrics aggregation or a bucket aggregation.
|
|
||||||
|
|
||||||
For example, here is a 3 levels aggregation composed of:
|
|
||||||
|
|
||||||
* Terms aggregation (bucket)
|
|
||||||
* Date Histogram aggregation (bucket)
|
|
||||||
* Average aggregation (metric)
|
|
||||||
|
|
||||||
[source,java]
|
|
||||||
--------------------------------------------------
|
|
||||||
SearchResponse sr = node.client().prepareSearch()
|
|
||||||
.addAggregation(
|
|
||||||
AggregationBuilders.terms("by_country").field("country")
|
|
||||||
.subAggregation(AggregationBuilders.dateHistogram("by_year")
|
|
||||||
.field("dateOfBirth")
|
|
||||||
.calendarInterval(DateHistogramInterval.YEAR)
|
|
||||||
.subAggregation(AggregationBuilders.avg("avg_children").field("children"))
|
|
||||||
)
|
|
||||||
)
|
|
||||||
.execute().actionGet();
|
|
||||||
--------------------------------------------------
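
To illustrate how such a nested response can be consumed, here is a minimal sketch that walks
the three levels defined above. It assumes the aggregation names from the previous example
(`by_country`, `by_year`, `avg_children`) and that `sr` is the `SearchResponse` shown there;
the exact package names of the response classes vary across versions.

[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
import org.elasticsearch.search.aggregations.metrics.avg.Avg;

Terms byCountry = sr.getAggregations().get("by_country");
for (Terms.Bucket country : byCountry.getBuckets()) {
    // Second level: one date histogram per country bucket
    Histogram byYear = country.getAggregations().get("by_year");
    for (Histogram.Bucket year : byYear.getBuckets()) {
        // Third level: the average metric computed inside each year bucket
        Avg avgChildren = year.getAggregations().get("avg_children");
        logger.info("country [{}], year [{}], avg children [{}]",
            country.getKeyAsString(), year.getKeyAsString(), avgChildren.getValue());
    }
}
--------------------------------------------------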

=== Metrics aggregations

include::aggregations/metrics.asciidoc[]

=== Bucket aggregations

include::aggregations/bucket.asciidoc[]
@ -1,110 +0,0 @@
[[client]]
== Client

You can use the *Java client* in multiple ways:

* Perform standard <<java-docs-index,index>>, <<java-docs-get,get>>,
  <<java-docs-delete,delete>> and <<java-search,search>> operations on an
  existing cluster
* Perform administrative tasks on a running cluster

Obtaining an Elasticsearch `Client` is simple. The most common way to
get a client is by creating a <<transport-client,`TransportClient`>>
that connects to a cluster.

[IMPORTANT]
==============================

The client must have the same major version (e.g. `2.x`, or `5.x`) as the
nodes in the cluster. Clients may connect to clusters which have a different
minor version (e.g. `2.3.x`) but it is possible that new functionality may not
be supported. Ideally, the client should have the same version as the
cluster.

==============================

[[transport-client]]
=== Transport Client

deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]

The `TransportClient` connects remotely to an Elasticsearch cluster
using the transport module. It does not join the cluster, but simply
gets one or more initial transport addresses and communicates with them
in round robin fashion on each action (though most actions will probably
be "two hop" operations).

[source,java]
--------------------------------------------------
// on startup

TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
        .addTransportAddress(new TransportAddress(InetAddress.getByName("host1"), 9300))
        .addTransportAddress(new TransportAddress(InetAddress.getByName("host2"), 9300));

// on shutdown

client.close();
--------------------------------------------------

Note that you have to set the cluster name if you use one different from
"elasticsearch":

[source,java]
--------------------------------------------------
Settings settings = Settings.builder()
        .put("cluster.name", "myClusterName").build();
TransportClient client = new PreBuiltTransportClient(settings);
// Add transport addresses and do something with the client...
--------------------------------------------------

The Transport client comes with a cluster sniffing feature which
allows it to dynamically add new hosts and remove old ones.
When sniffing is enabled, the transport client will connect to the nodes in its
internal node list, which is built via calls to `addTransportAddress`.
After this, the client will call the internal cluster state API on those nodes
to discover available data nodes. The internal node list of the client will
be replaced with those data nodes only. This list is refreshed every five seconds by default.
Note that the IP addresses the sniffer connects to are the ones declared as the `publish`
address in those nodes' Elasticsearch config.

Keep in mind that the list might not include the original node it connected to
if that node is not a data node. If, for instance, you initially connect to a
master node, after sniffing, no further requests will go to that master node,
but rather to any data nodes instead. The reason the transport client excludes non-data
nodes is to avoid sending search traffic to master-only nodes.

In order to enable sniffing, set `client.transport.sniff` to `true`:

[source,java]
--------------------------------------------------
Settings settings = Settings.builder()
        .put("client.transport.sniff", true).build();
TransportClient client = new PreBuiltTransportClient(settings);
--------------------------------------------------

Other transport client level settings include:

[cols="<,<",options="header",]
|=======================================================================
|Parameter |Description
|`client.transport.ignore_cluster_name` |Set to `true` to ignore cluster
name validation of connected nodes. (since 0.19.4)

|`client.transport.ping_timeout` |The time to wait for a ping response
from a node. Defaults to `5s`.

|`client.transport.nodes_sampler_interval` |How often to sample / ping
the nodes listed and connected. Defaults to `5s`.
|=======================================================================
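
These settings can be combined in a single `Settings` builder before constructing the client.
A minimal sketch (the timeout and interval values here are illustrative, not recommendations):

[source,java]
--------------------------------------------------
Settings settings = Settings.builder()
        .put("cluster.name", "myClusterName")
        .put("client.transport.sniff", true)
        .put("client.transport.ping_timeout", "10s")            // wait longer for ping responses
        .put("client.transport.nodes_sampler_interval", "10s")  // sample the node list less often
        .build();
TransportClient client = new PreBuiltTransportClient(settings);
--------------------------------------------------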

[[client-connected-to-client-node]]
=== Connecting a Client to a Coordinating Only Node

You can start a {ref}/modules-node.html#coordinating-only-node[Coordinating Only Node]
locally and then simply create a <<transport-client,`TransportClient`>> in your
application which connects to this Coordinating Only Node.

This way, the coordinating only node will be able to load whatever plugin you
need (think about discovery plugins, for example).
@ -1,36 +0,0 @@
[[java-docs]]
== Document APIs

This section describes the following CRUD APIs:

.Single document APIs
* <<java-docs-index>>
* <<java-docs-get>>
* <<java-docs-delete>>
* <<java-docs-update>>

.Multi-document APIs
* <<java-docs-multi-get>>
* <<java-docs-bulk>>
* <<java-docs-reindex>>
* <<java-docs-update-by-query>>
* <<java-docs-delete-by-query>>

NOTE: All CRUD APIs are single-index APIs. The `index` parameter accepts a single
index name, or an `alias` which points to a single index.

include::docs/index_.asciidoc[]

include::docs/get.asciidoc[]

include::docs/delete.asciidoc[]

include::docs/update.asciidoc[]

include::docs/multi-get.asciidoc[]

include::docs/bulk.asciidoc[]

include::docs/update-by-query.asciidoc[]

include::docs/reindex.asciidoc[]
@ -1,190 +0,0 @@
[[java-docs-bulk]]
=== Bulk API

The bulk API allows one to index and delete several documents in a
single request. Here is a sample usage:

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

BulkRequestBuilder bulkRequest = client.prepareBulk();

// either use client#prepare, or use Requests# to directly build index/delete requests
bulkRequest.add(client.prepareIndex("twitter", "_doc", "1")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "trying out Elasticsearch")
                    .endObject()
                  )
        );

bulkRequest.add(client.prepareIndex("twitter", "_doc", "2")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "another post")
                    .endObject()
                  )
        );

BulkResponse bulkResponse = bulkRequest.get();
if (bulkResponse.hasFailures()) {
    // process failures by iterating through each bulk response item
}
--------------------------------------------------

[[java-docs-bulk-processor]]
=== Using Bulk Processor

The `BulkProcessor` class offers a simple interface to flush bulk operations automatically based on the number or size
of requests, or after a given period.

To use it, first create a `BulkProcessor` instance:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

BulkProcessor bulkProcessor = BulkProcessor.builder(
        client, <1>
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId,
                                   BulkRequest request) { ... } <2>

            @Override
            public void afterBulk(long executionId,
                                  BulkRequest request,
                                  BulkResponse response) { ... } <3>

            @Override
            public void afterBulk(long executionId,
                                  BulkRequest request,
                                  Throwable failure) { ... } <4>
        })
        .setBulkActions(10000) <5>
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) <6>
        .setFlushInterval(TimeValue.timeValueSeconds(5)) <7>
        .setConcurrentRequests(1) <8>
        .setBackoffPolicy(
            BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3)) <9>
        .build();
--------------------------------------------------
<1> Add your Elasticsearch client
<2> This method is called just before the bulk is executed. For example, you can see the number of actions with
`request.numberOfActions()`
<3> This method is called after the bulk is executed. For example, you can check whether there were failing requests
with `response.hasFailures()`
<4> This method is called when the bulk failed and raised a `Throwable`
<5> We want to execute the bulk every 10,000 requests
<6> We want to flush the bulk every 5MB
<7> We want to flush the bulk every 5 seconds regardless of the number of requests
<8> Set the number of concurrent requests. A value of `0` means that only a single request will be allowed to be
executed. A value of `1` means one concurrent request is allowed to be executed while accumulating new bulk requests.
<9> Set a custom backoff policy which will initially wait for 100ms, increase exponentially, and retry up to three
times. A retry is attempted whenever one or more bulk item requests have failed with an `EsRejectedExecutionException`,
which indicates that there were too few compute resources available for processing the request. To disable backoff,
pass `BackoffPolicy.noBackoff()`.

By default, `BulkProcessor`:

* sets bulkActions to `1000`
* sets bulkSize to `5mb`
* does not set flushInterval
* sets concurrentRequests to 1, which means an asynchronous execution of the flush operation.
* sets backoffPolicy to an exponential backoff with 8 retries and a start delay of 50ms. The total wait time is roughly 5.1 seconds.

[[java-docs-bulk-processor-requests]]
==== Add requests

Then you can simply add your requests to the `BulkProcessor`:

[source,java]
--------------------------------------------------
bulkProcessor.add(new IndexRequest("twitter", "_doc", "1").source(/* your doc here */));
bulkProcessor.add(new DeleteRequest("twitter", "_doc", "2"));
--------------------------------------------------

[[java-docs-bulk-processor-close]]
==== Closing the Bulk Processor

When all documents are loaded to the `BulkProcessor` it can be closed by using the `awaitClose` or `close` methods:

[source,java]
--------------------------------------------------
bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
--------------------------------------------------

or

[source,java]
--------------------------------------------------
bulkProcessor.close();
--------------------------------------------------

Both methods flush any remaining documents and disable all other scheduled flushes, if they were scheduled by setting
`flushInterval`. If concurrent requests were enabled, the `awaitClose` method waits for up to the specified timeout for
all bulk requests to complete and then returns `true`; if the specified waiting time elapses before all bulk requests complete,
`false` is returned. The `close` method doesn't wait for any remaining bulk requests to complete and exits immediately.
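
For example, a minimal sketch of a clean shutdown that checks whether everything completed
in time (the timeout value is illustrative; note that `awaitClose` may throw
`InterruptedException`):

[source,java]
--------------------------------------------------
// Returns true only if all bulk requests completed within the timeout
boolean terminated = bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
if (!terminated) {
    // Some requests were still in flight when the timeout elapsed
    logger.warn("Some bulk requests did not complete before shutdown");
}
--------------------------------------------------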

[[java-docs-bulk-processor-tests]]
==== Using Bulk Processor in tests

If you are running tests with Elasticsearch and are using the `BulkProcessor` to populate your dataset,
you should set the number of concurrent requests to `0` so that the flush operation of the bulk is executed
synchronously:

[source,java]
--------------------------------------------------
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() { /* Listener methods */ })
        .setBulkActions(10000)
        .setConcurrentRequests(0)
        .build();

// Add your requests
bulkProcessor.add(/* Your requests */);

// Flush any remaining requests
bulkProcessor.flush();

// Or close the bulkProcessor if you don't need it anymore
bulkProcessor.close();

// Refresh your indices
client.admin().indices().prepareRefresh().get();

// Now you can start searching!
client.prepareSearch().get();
--------------------------------------------------


[[java-docs-bulk-global-parameters]]
==== Global Parameters

Global parameters can be specified on the BulkRequest as well as the BulkProcessor, similar to the REST API. These global
parameters serve as defaults and can be overridden by local parameters specified on each sub request. Some parameters
(`index`, `type`) have to be set before any sub request is added, and you have to specify them during BulkRequest or
BulkProcessor creation. Others (`pipeline`, `routing`) are optional and can be specified at any point before the bulk is sent.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{hlrc-tests}/BulkProcessorIT.java[bulk-processor-mix-parameters]
--------------------------------------------------
<1> global parameters from the BulkRequest will be applied on a sub request
<2> local pipeline parameter on a sub request will override global parameters from the BulkRequest


["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{hlrc-tests}/BulkRequestWithGlobalParametersIT.java[bulk-request-mix-pipeline]
--------------------------------------------------
<1> local pipeline parameter on a sub request will override global pipeline from the BulkRequest
<2> global parameter from the BulkRequest will be applied on a sub request
@ -1,42 +0,0 @@
[[java-docs-delete]]
=== Delete API

The delete API allows one to delete a typed JSON document from a specific
index based on its id. The following example deletes the JSON document
from an index called twitter, under a type called `_doc`, with id `1`:

[source,java]
--------------------------------------------------
DeleteResponse response = client.prepareDelete("twitter", "_doc", "1").get();
--------------------------------------------------

For more information on the delete operation, check out the
{ref}/docs-delete.html[delete API] docs.

[[java-docs-delete-by-query]]
=== Delete By Query API

The delete by query API allows one to delete a given set of documents based on
the result of a query:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[delete-by-query-sync]
--------------------------------------------------
<1> query
<2> index
<3> execute the operation
<4> number of deleted documents

Since it can be a long-running operation, if you wish to run it asynchronously you can call `execute` instead of `get`
and provide a listener:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[delete-by-query-async]
--------------------------------------------------
<1> query
<2> index
<3> listener
<4> number of deleted documents
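
For reference, a rough sketch of what such an asynchronous call can look like. This is an
assumption-laden illustration rather than the tagged snippet above: the builder construction
varies across versions of the reindex module, and the query and index values (`gender`,
`persons`) are hypothetical.

[source,java]
--------------------------------------------------
new DeleteByQueryRequestBuilder(client, DeleteByQueryAction.INSTANCE)
    .filter(QueryBuilders.matchQuery("gender", "male"))     // query
    .source("persons")                                      // index
    .execute(new ActionListener<BulkByScrollResponse>() {   // listener
        @Override
        public void onResponse(BulkByScrollResponse response) {
            long deleted = response.getDeleted();           // number of deleted documents
        }

        @Override
        public void onFailure(Exception e) {
            // handle the failure
        }
    });
--------------------------------------------------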
@ -1,14 +0,0 @@
[[java-docs-get]]
=== Get API

The get API allows one to get a typed JSON document from the index based on
its id. The following example gets a JSON document from an index called
twitter, under a type called `_doc`, with id `1`:

[source,java]
--------------------------------------------------
GetResponse response = client.prepareGet("twitter", "_doc", "1").get();
--------------------------------------------------

For more information on the get operation, check out the REST
{ref}/docs-get.html[get] docs.
@ -1,167 +0,0 @@
[[java-docs-index]]
=== Index API

The index API allows one to index a typed JSON document into a specific
index and make it searchable.


[[java-docs-index-generate]]
==== Generate JSON document

There are several different ways of generating a JSON document:

* Manually (aka do it yourself) using native `byte[]` or as a `String`

* Using a `Map` that will be automatically converted to its JSON
equivalent

* Using a third party library to serialize your beans, such as
https://github.com/FasterXML/jackson[Jackson]

* Using built-in helpers such as `XContentFactory.jsonBuilder()`

Internally, each type is converted to `byte[]` (so a String is converted
to a `byte[]`). Therefore, if the object is in this form already, then
use it. The `jsonBuilder` is a highly optimized JSON generator that
directly constructs a `byte[]`.


[[java-docs-index-generate-diy]]
===== Do It Yourself

Nothing really difficult here, but note that you will have to encode
dates according to the
{ref}/mapping-date-format.html[Date Format].

[source,java]
--------------------------------------------------
String json = "{" +
        "\"user\":\"kimchy\"," +
        "\"postDate\":\"2013-01-30\"," +
        "\"message\":\"trying out Elasticsearch\"" +
    "}";
--------------------------------------------------


[[java-docs-index-generate-using-map]]
===== Using Map

A `Map` is a key/value pair collection. It represents a JSON structure:

[source,java]
--------------------------------------------------
Map<String, Object> json = new HashMap<String, Object>();
json.put("user","kimchy");
json.put("postDate",new Date());
json.put("message","trying out Elasticsearch");
--------------------------------------------------
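
The resulting `Map` can then be passed straight to an index request, which converts it to its
JSON equivalent automatically; a short sketch (index, type, and id follow the examples later
in this section):

[source,java]
--------------------------------------------------
IndexResponse response = client.prepareIndex("twitter", "_doc", "1")
        .setSource(json)   // the Map is converted to its JSON equivalent
        .get();
--------------------------------------------------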

[[java-docs-index-generate-beans]]
===== Serialize your beans

You can use https://github.com/FasterXML/jackson[Jackson] to serialize
your beans to JSON. Please add http://search.maven.org/#search%7Cga%7C1%7Cjackson-databind[Jackson Databind]
to your project. Then you can use `ObjectMapper` to serialize your beans:

[source,java]
--------------------------------------------------
import com.fasterxml.jackson.databind.*;

// instantiate a JSON mapper
ObjectMapper mapper = new ObjectMapper(); // create once, reuse

// generate JSON
byte[] json = mapper.writeValueAsBytes(yourbeaninstance);
--------------------------------------------------


[[java-docs-index-generate-helpers]]
===== Use Elasticsearch helpers

Elasticsearch provides built-in helpers to generate JSON content.

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

XContentBuilder builder = jsonBuilder()
    .startObject()
        .field("user", "kimchy")
        .field("postDate", new Date())
        .field("message", "trying out Elasticsearch")
    .endObject();
--------------------------------------------------

Note that you can also add arrays with the `startArray(String)` and
`endArray()` methods. Also, the `field` method
accepts many object types. You can directly pass numbers, dates, and even
other `XContentBuilder` objects.

If you need to see the generated JSON content, you can use the
`Strings.toString()` method.

[source,java]
--------------------------------------------------
import org.elasticsearch.common.Strings;

String json = Strings.toString(builder);
--------------------------------------------------


[[java-docs-index-doc]]
==== Index document

The following example indexes a JSON document into an index called
twitter, under a type called `_doc`, with id `1`:

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

IndexResponse response = client.prepareIndex("twitter", "_doc", "1")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "trying out Elasticsearch")
                    .endObject()
                  )
        .get();
--------------------------------------------------

Note that you can also index your documents as a JSON String and that you
don't have to give an ID:

[source,java]
--------------------------------------------------
String json = "{" +
        "\"user\":\"kimchy\"," +
        "\"postDate\":\"2013-01-30\"," +
        "\"message\":\"trying out Elasticsearch\"" +
    "}";

IndexResponse response = client.prepareIndex("twitter", "_doc")
        .setSource(json, XContentType.JSON)
        .get();
--------------------------------------------------

The `IndexResponse` object will give you a report:

[source,java]
--------------------------------------------------
// Index name
String _index = response.getIndex();
// Type name
String _type = response.getType();
// Document ID (generated or not)
String _id = response.getId();
// Version (if it's the first time you index this document, you will get: 1)
long _version = response.getVersion();
// Status of the operation (e.g. CREATED when a new document is indexed)
RestStatus status = response.status();
--------------------------------------------------

For more information on the index operation, check out the REST
{ref}/docs-index_.html[index] docs.
@ -1,30 +0,0 @@
[[java-docs-multi-get]]
=== Multi Get API

The multi get API allows one to get a list of documents based on their `index` and `id`:

[source,java]
--------------------------------------------------
MultiGetResponse multiGetItemResponses = client.prepareMultiGet()
    .add("twitter", "_doc", "1")           <1>
    .add("twitter", "_doc", "2", "3", "4") <2>
    .add("another", "_doc", "foo")         <3>
    .get();

for (MultiGetItemResponse itemResponse : multiGetItemResponses) { <4>
    GetResponse response = itemResponse.getResponse();
    if (response.isExists()) {                      <5>
        String json = response.getSourceAsString(); <6>
    }
}
--------------------------------------------------
<1> get by a single id
<2> or by a list of ids for the same index
<3> you can also get from another index
<4> iterate over the result set
<5> you can check if the document exists
<6> access to the `_source` field

For more information on the multi get operation, check out the REST
{ref}/docs-multi-get.html[multi get] docs.
@ -1,11 +0,0 @@
[[java-docs-reindex]]
=== Reindex API

See the {ref}/docs-reindex.html[reindex API].

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[reindex1]
--------------------------------------------------
<1> Optionally a query can be provided to filter what documents should be
re-indexed from the source to the target index.
@ -1,166 +0,0 @@
[[java-docs-update-by-query]]
=== Update By Query API

The simplest usage of `updateByQuery` updates each
document in an index without changing the source. This usage enables
picking up a new property or another online mapping change.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query]
--------------------------------------------------

Calls to the `updateByQuery` API start by getting a snapshot of the index, indexing
any documents found using the `internal` versioning.

NOTE: Version conflicts happen when a document changes between the time of the
snapshot and the time the index request is processed.

When the versions match, `updateByQuery` updates the document
and increments the version number.

All update and query failures cause `updateByQuery` to abort. These failures are
available from the `BulkByScrollResponse#getIndexingFailures` method. Any
successful updates remain and are not rolled back. While the first failure
causes the abort, the response contains all of the failures generated by the
failed bulk request.

To prevent version conflicts from causing `updateByQuery` to abort, set
`abortOnVersionConflict(false)`. The first example does this because it is
trying to pick up an online mapping change, and a version conflict means that
the conflicting document was updated between the start of the `updateByQuery`
and the time when it attempted to update the document. This is fine because
that update will have picked up the online mapping update.
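
A rough sketch of that call, as an illustration only: the exact construction of the
`UpdateByQueryRequestBuilder` varies across versions of the reindex module, and the index
name `source_index` is hypothetical.

[source,java]
--------------------------------------------------
UpdateByQueryRequestBuilder updateByQuery =
        new UpdateByQueryRequestBuilder(client, UpdateByQueryAction.INSTANCE);
updateByQuery.source("source_index")
        .abortOnVersionConflict(false);   // version conflicts no longer abort the request
BulkByScrollResponse response = updateByQuery.get();
--------------------------------------------------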

The `UpdateByQueryRequestBuilder` API supports filtering the updated documents,
limiting the total number of documents to update, and updating documents
with a script:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-filter]
--------------------------------------------------

`UpdateByQueryRequestBuilder` also enables direct access to the query used
to select the documents. You can use this access to change the default scroll size or
otherwise modify the request for matching documents.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-size]
--------------------------------------------------

You can also combine `maxDocs` with sorting to limit the documents updated:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-sort]
--------------------------------------------------

In addition to changing the `_source` field for the document, you can use a
script to change the action, similar to the Update API:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-script]
--------------------------------------------------

As in the <<java-docs-update,Update API>>, you can set the value of `ctx.op` to change the
operation that executes:

`noop`::

Set `ctx.op = "noop"` if your script doesn't make any
changes. The `updateByQuery` operation then omits that document from the updates.
This behavior increments the `noop` counter in the response body.

`delete`::

Set `ctx.op = "delete"` if your script decides that the document must be
deleted. The deletion will be reported in the `deleted` counter in the
response body.

Setting `ctx.op` to any other value generates an error. Setting any
other field in `ctx` generates an error.

This API doesn't allow you to move the documents it touches, just modify their
source. This is intentional! We've made no provisions for removing the document
from its original location.

You can also perform these operations on multiple indices at once, similar to the search API:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-multi-index]
--------------------------------------------------

If you provide a `routing` value then the process copies the routing value to the scroll query,
limiting the process to the shards that match that routing value:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-routing]
--------------------------------------------------

`updateByQuery` can also use the ingest node by
specifying a `pipeline` like this:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-pipeline]
--------------------------------------------------

[discrete]
[[java-docs-update-by-query-task-api]]
=== Works with the Task API

You can fetch the status of all running update-by-query requests with the Task API:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-list-tasks]
--------------------------------------------------

With the `TaskId` shown above you can look up the task directly:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-get-task]
--------------------------------------------------

[discrete]
[[java-docs-update-by-query-cancel-task-api]]
=== Works with the Cancel Task API

Any Update By Query can be canceled using the Task Cancel API:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-cancel-task]
--------------------------------------------------

Use the `list tasks` API to find the value of `taskId`.

Cancelling a request is typically a very fast process, but it can take up to a few seconds.
The task status API continues to list the task until the cancellation is complete.

[discrete]
[[java-docs-update-by-query-rethrottle]]
=== Rethrottling

Use the `_rethrottle` API to change the value of `requests_per_second` on a running update:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-reindex-tests}/ReindexDocumentationIT.java[update-by-query-rethrottle]
--------------------------------------------------

Use the `list tasks` API to find the value of `taskId`.

As with the `updateByQuery` API, the value of `requests_per_second`
can be any positive float value to set the level of the throttle, or `Float.POSITIVE_INFINITY` to disable throttling.
A value of `requests_per_second` that speeds up the process takes
effect immediately. `requests_per_second` values that slow the query take effect
after completing the current batch in order to prevent scroll timeouts.
@ -1,118 +0,0 @@
[[java-docs-update]]
=== Update API


You can either create an `UpdateRequest` and send it to the client:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.index("index");
updateRequest.type("_doc");
updateRequest.id("1");
updateRequest.doc(jsonBuilder()
        .startObject()
            .field("gender", "male")
        .endObject());
client.update(updateRequest).get();
--------------------------------------------------

Or you can use the `prepareUpdate()` method:

[source,java]
--------------------------------------------------
client.prepareUpdate("ttl", "doc", "1")
        .setScript(new Script(
            "ctx._source.gender = \"male\"", <1>
            ScriptType.INLINE, null, null))
        .get();

client.prepareUpdate("ttl", "doc", "1")
        .setDoc(jsonBuilder() <2>
            .startObject()
                .field("gender", "male")
            .endObject())
        .get();
--------------------------------------------------
<1> Your script. It could also be a locally stored script name.
In that case, you'll need to use `ScriptType.FILE`.
<2> Document which will be merged into the existing one.

Note that you can't provide both `script` and `doc`.

[[java-docs-update-api-script]]
==== Update by script

The update API allows one to update a document based on a provided script:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest("ttl", "doc", "1")
        .script(new Script("ctx._source.gender = \"male\""));
client.update(updateRequest).get();
--------------------------------------------------


[[java-docs-update-api-merge-docs]]
==== Update by merging documents

The update API also supports passing a partial document, which will be merged into the existing document (simple
recursive merge, inner merging of objects, replacing core "keys/values" and arrays). For example:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest("index", "type", "1")
        .doc(jsonBuilder()
            .startObject()
                .field("gender", "male")
            .endObject());
client.update(updateRequest).get();
--------------------------------------------------


[[java-docs-update-api-upsert]]
==== Upsert

There is also support for `upsert`. If the document does not exist, the content of the `upsert`
element will be used to index the fresh doc:

[source,java]
--------------------------------------------------
IndexRequest indexRequest = new IndexRequest("index", "type", "1")
        .source(jsonBuilder()
            .startObject()
                .field("name", "Joe Smith")
                .field("gender", "male")
            .endObject());
UpdateRequest updateRequest = new UpdateRequest("index", "type", "1")
        .doc(jsonBuilder()
            .startObject()
                .field("name", "Joe Dalton")
            .endObject())
        .upsert(indexRequest); <1>
client.update(updateRequest).get();
--------------------------------------------------
<1> If the document does not exist, the one in `indexRequest` will be added

After this operation, if the document `index/_doc/1` already exists, we will have a document like:

[source,js]
--------------------------------------------------
{
    "name"  : "Joe Dalton", <1>
    "gender": "male"
}
--------------------------------------------------
// NOTCONSOLE
<1> This field is updated by the update request

If it does not exist, we will have a new document:

[source,js]
--------------------------------------------------
{
    "name"  : "Joe Smith",
    "gender": "male"
}
--------------------------------------------------
// NOTCONSOLE
@ -1,149 +0,0 @@
= Java API (deprecated)

include::{elasticsearch-root}/docs/Versions.asciidoc[]

[[java-api]]
[preface]
== Preface

deprecated[7.0.0, The `TransportClient` is deprecated in favour of the {java-rest}/java-rest-high.html[Java High Level REST Client] and will be removed in Elasticsearch 8.0. The {java-rest}/java-rest-high-level-migration.html[migration guide] describes all the steps needed to migrate.]

This section describes the Java API that Elasticsearch provides. All
Elasticsearch operations are executed using a
<<client,Client>> object. All
operations are completely asynchronous in nature (they either accept a
listener or return a future).

Additionally, operations on a client may be accumulated and executed in
<<java-docs-bulk,Bulk>>.

Note that all the APIs are exposed through the
Java API; in fact, the Java API is used internally by Elasticsearch to execute them.

== Javadoc

The javadoc for the transport client can be found at {transport-client-javadoc}/index.html.

== Maven Repository

Elasticsearch is hosted on
http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven
Central].

For example, you can define the latest version in your `pom.xml` file:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>{version}</version>
</dependency>
--------------------------------------------------
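
If you build with Gradle instead (as in the Lucene snapshot example below), the equivalent
dependency declaration would be something like the following sketch; the configuration name
depends on your Gradle version (`compile` on older versions):

["source","groovy",subs="attributes"]
--------------------------------------------------
dependencies {
    implementation "org.elasticsearch.client:transport:{version}"
}
--------------------------------------------------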

[[java-transport-usage-maven-lucene]]
=== Lucene Snapshot repository

The very first releases of any major version (like a beta) might have been built on top of a Lucene Snapshot version.
In such a case you will be unable to resolve the Lucene dependencies of the client.

For example, if you want to use the `6.0.0-beta1` version which depends on Lucene `7.0.0-snapshot-00142c9`, you must
define the following repository.

For Maven:

["source","xml",subs="attributes"]
--------------------------------------------------
<repository>
    <id>elastic-lucene-snapshots</id>
    <name>Elastic Lucene Snapshots</name>
    <url>https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/00142c9</url>
    <releases><enabled>true</enabled></releases>
    <snapshots><enabled>false</enabled></snapshots>
</repository>
--------------------------------------------------

For Gradle:

["source","groovy",subs="attributes"]
--------------------------------------------------
maven {
    name "lucene-snapshots"
    url 'https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/00142c9'
}
--------------------------------------------------

=== Log4j 2 Logger

You also need to include the Log4j 2 dependencies:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.1</version>
</dependency>
--------------------------------------------------

You must also provide a Log4j 2 configuration file on your classpath.
For example, you can add a `log4j2.properties` file like this in your `src/main/resources` project dir:


["source","properties",subs="attributes"]
--------------------------------------------------
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
--------------------------------------------------

=== Using another Logger

If you want to use a logger other than Log4j 2, you can use the http://www.slf4j.org/[SLF4J] bridge:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-to-slf4j</artifactId>
    <version>2.11.1</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.24</version>
</dependency>
--------------------------------------------------

http://www.slf4j.org/manual.html[This page] lists implementations you can use. Pick your favorite logger
and add it as a dependency. As an example, we will use the `slf4j-simple` logger:

["source","xml",subs="attributes"]
--------------------------------------------------
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.21</version>
</dependency>
--------------------------------------------------

:client-tests: {elasticsearch-root}/server/src/internalClusterTest/java/org/elasticsearch/client/documentation
:hlrc-tests: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client

:client-reindex-tests: {elasticsearch-root}/modules/reindex/src/internalClusterTest/java/org/elasticsearch/client/documentation

include::client.asciidoc[]

include::docs.asciidoc[]

include::search.asciidoc[]

include::aggs.asciidoc[]

include::query-dsl.asciidoc[]

include::admin/index.asciidoc[]
@ -1,40 +0,0 @@
[[java-query-dsl]]
== Query DSL

Elasticsearch provides a full Java query DSL in a similar manner to the
REST {ref}/query-dsl.html[Query DSL]. The factory for query
builders is `QueryBuilders`. Once your query is ready, you can use the
<<java-search,Search API>>.

To use `QueryBuilders` just import them in your class:

[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.QueryBuilders.*;
--------------------------------------------------

Note that you can easily print (aka debug) JSON generated queries using
the `toString()` method on a `QueryBuilder` object.
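
For instance, a quick sketch (the field name and text are illustrative):

[source,java]
--------------------------------------------------
QueryBuilder qb = matchQuery("message", "trying out Elasticsearch");
System.out.println(qb.toString());   // prints the generated query as JSON
--------------------------------------------------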

The `QueryBuilder` can then be used with any API that accepts a query,
such as `count` and `search`.

:query-dsl-test: {elasticsearch-root}/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java

include::query-dsl/match-all-query.asciidoc[]

include::query-dsl/full-text-queries.asciidoc[]

include::query-dsl/term-level-queries.asciidoc[]

include::query-dsl/compound-queries.asciidoc[]

include::query-dsl/joining-queries.asciidoc[]

include::query-dsl/geo-queries.asciidoc[]

include::query-dsl/special-queries.asciidoc[]

include::query-dsl/span-queries.asciidoc[]

:query-dsl-test!:
@ -1,13 +0,0 @@
[[java-query-dsl-bool-query]]
==== Bool Query

See {ref}/query-dsl-bool-query.html[Bool Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[bool]
--------------------------------------------------
<1> must query
<2> must not query
<3> should query
<4> a query that must appear in the matching documents but doesn't contribute to scoring.
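
As a rough sketch of what the tagged snippet above looks like (field names and values are
illustrative):

[source,java]
--------------------------------------------------
QueryBuilder qb = boolQuery()
    .must(termQuery("content", "test1"))      // must query
    .mustNot(termQuery("content", "test2"))   // must not query
    .should(termQuery("content", "test3"))    // should query
    .filter(termQuery("content", "test4"));   // matches, but doesn't contribute to scoring
--------------------------------------------------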
@ -1,12 +0,0 @@
[[java-query-dsl-boosting-query]]
==== Boosting Query

See {ref}/query-dsl-boosting-query.html[Boosting Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[boosting]
--------------------------------------------------
<1> query that will promote documents
<2> query that will demote documents
<3> negative boost
@ -1,11 +0,0 @@
[[java-query-dsl-common-terms-query]]
==== Common Terms Query

See {ref}/query-dsl-common-terms-query.html[Common Terms Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[common_terms]
--------------------------------------------------
<1> field
<2> value
@ -1,45 +0,0 @@
[[java-compound-queries]]
=== Compound queries

Compound queries wrap other compound or leaf queries, either to combine their
results and scores, to change their behaviour, or to switch from query to
filter context.

The queries in this group are:

<<java-query-dsl-constant-score-query,`constant_score` query>>::

A query which wraps another query, but executes it in filter context. All
matching documents are given the same ``constant'' `_score`.

<<java-query-dsl-bool-query,`bool` query>>::

The default query for combining multiple leaf or compound query clauses, as
`must`, `should`, `must_not`, or `filter` clauses. The `must` and `should`
clauses have their scores combined (the more matching clauses, the better),
while the `must_not` and `filter` clauses are executed in filter context.

<<java-query-dsl-dis-max-query,`dis_max` query>>::

A query which accepts multiple queries, and returns any documents which match
any of the query clauses. While the `bool` query combines the scores from all
matching queries, the `dis_max` query uses the score of the single
best-matching query clause.

<<java-query-dsl-function-score-query,`function_score` query>>::

Modify the scores returned by the main query with functions to take into
account factors like popularity, recency, distance, or custom algorithms
implemented with scripting.

<<java-query-dsl-boosting-query,`boosting` query>>::

Return documents which match a `positive` query, but reduce the score of
documents which also match a `negative` query.


include::constant-score-query.asciidoc[]
include::bool-query.asciidoc[]
include::dis-max-query.asciidoc[]
include::function-score-query.asciidoc[]
include::boosting-query.asciidoc[]
@ -1,11 +0,0 @@
[[java-query-dsl-constant-score-query]]
==== Constant Score Query

See {ref}/query-dsl-constant-score-query.html[Constant Score Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[constant_score]
--------------------------------------------------
<1> your query
<2> query score
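
A rough sketch of the tagged snippet above (field and value are illustrative):

[source,java]
--------------------------------------------------
QueryBuilder qb = constantScoreQuery(
        termQuery("name", "kimchy"))   // your query
    .boost(2.0f);                      // query score
--------------------------------------------------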
@ -1,13 +0,0 @@
[[java-query-dsl-dis-max-query]]
==== Dis Max Query

See {ref}/query-dsl-dis-max-query.html[Dis Max Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[dis_max]
--------------------------------------------------
<1> add your queries
<2> add your queries
<3> boost factor
<4> tie breaker
|
|
|
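
A sketch of the equivalent builder chain (terms invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = disMaxQuery()
        .add(termQuery("name", "kimchy"))         // add your queries
        .add(termQuery("name", "elasticsearch"))  // add your queries
        .boost(1.2f)                              // boost factor
        .tieBreaker(0.7f);                        // tie breaker
--------------------------------------------------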
@ -1,10 +0,0 @@
[[java-query-dsl-exists-query]]
==== Exists Query

See {ref}/query-dsl-exists-query.html[Exists Query].

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[exists]
--------------------------------------------------
<1> field
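
A sketch of the builder call (field name invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = existsQuery("name");  // field
--------------------------------------------------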
@ -1,44 +0,0 @@
[[java-full-text-queries]]
=== Full text queries

The high-level full text queries are usually used for running full text
queries on full text fields like the body of an email. They understand how the
field being queried is analyzed and will apply each field's
`analyzer` (or `search_analyzer`) to the query string before executing.

The queries in this group are:

<<java-query-dsl-match-query,`match` query>>::

The standard query for performing full text queries, including fuzzy matching
and phrase or proximity queries.

<<java-query-dsl-multi-match-query,`multi_match` query>>::

The multi-field version of the `match` query.

<<java-query-dsl-common-terms-query,`common_terms` query>>::

A more specialized query which gives more preference to uncommon words.

<<java-query-dsl-query-string-query,`query_string` query>>::

Supports the compact Lucene query string syntax,
allowing you to specify AND|OR|NOT conditions and multi-field search
within a single query string. For expert users only.

<<java-query-dsl-simple-query-string-query,`simple_query_string`>>::

A simpler, more robust version of the `query_string` syntax suitable
for exposing directly to users.

include::match-query.asciidoc[]

include::multi-match-query.asciidoc[]

include::common-terms-query.asciidoc[]

include::query-string-query.asciidoc[]

include::simple-query-string-query.asciidoc[]
@ -1,19 +0,0 @@
[[java-query-dsl-function-score-query]]
==== Function Score Query

See {ref}/query-dsl-function-score-query.html[Function Score Query].

To use `ScoreFunctionBuilders` just import them in your class:

[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.*;
--------------------------------------------------

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[function_score]
--------------------------------------------------
<1> Add a first function based on a query
<2> And randomize the score based on a given seed
<3> Add another function based on the age field
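
The hidden snippet builds a filter/function array. A hedged sketch of what such a call might look like (field names and functions are illustrative, and exact builder signatures vary between client versions):

[source,java]
--------------------------------------------------
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder;
// Assumes static imports of QueryBuilders.* and ScoreFunctionBuilders.*
FilterFunctionBuilder[] functions = {
        new FilterFunctionBuilder(
                matchQuery("name", "kimchy"),  // function applied only where this query matches
                randomFunction()),             // randomized score; a seed can be set on the builder
        new FilterFunctionBuilder(
                exponentialDecayFunction("age", 0L, 1L))  // decay function on the age field
};
QueryBuilder qb = functionScoreQuery(functions);
--------------------------------------------------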
@ -1,11 +0,0 @@
[[java-query-dsl-fuzzy-query]]
==== Fuzzy Query

See {ref}/query-dsl-fuzzy-query.html[Fuzzy Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[fuzzy]
--------------------------------------------------
<1> field
<2> text
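
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = fuzzyQuery(
        "name",     // field
        "kimzhy");  // text
--------------------------------------------------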
@ -1,12 +0,0 @@
[[java-query-dsl-geo-bounding-box-query]]
==== Geo Bounding Box Query

See {ref}/query-dsl-geo-bounding-box-query.html[Geo Bounding Box Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[geo_bounding_box]
--------------------------------------------------
<1> field
<2> bounding box top left point
<3> bounding box bottom right point
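
A hedged sketch of the builder call (field name and coordinates invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = geoBoundingBoxQuery("pin.location")  // field
        .setCorners(40.73, -74.1,    // top left point (lat, lon)
                    40.717, -73.99); // bottom right point (lat, lon)
--------------------------------------------------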
@ -1,12 +0,0 @@
[[java-query-dsl-geo-distance-query]]
==== Geo Distance Query

See {ref}/query-dsl-geo-distance-query.html[Geo Distance Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[geo_distance]
--------------------------------------------------
<1> field
<2> center point
<3> distance from center point
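
A hedged sketch of the builder call (field name and coordinates invented):

[source,java]
--------------------------------------------------
import org.elasticsearch.common.unit.DistanceUnit;
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = geoDistanceQuery("pin.location")  // field
        .point(40, -70)                             // center point (lat, lon)
        .distance(200, DistanceUnit.KILOMETERS);    // distance from center point
--------------------------------------------------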
@ -1,11 +0,0 @@
[[java-query-dsl-geo-polygon-query]]
==== Geo Polygon Query

See {ref}/query-dsl-geo-polygon-query.html[Geo Polygon Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[geo_polygon]
--------------------------------------------------
<1> add the points of the polygon a document should fall within
<2> initialise the query with field and points
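
A hedged sketch of the builder call (field name and coordinates invented):

[source,java]
--------------------------------------------------
import java.util.ArrayList;
import java.util.List;
import org.elasticsearch.common.geo.GeoPoint;
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
List<GeoPoint> points = new ArrayList<>();   // polygon a document should fall within
points.add(new GeoPoint(40, -70));
points.add(new GeoPoint(30, -80));
points.add(new GeoPoint(20, -90));
QueryBuilder qb = geoPolygonQuery("pin.location", points);  // field and points
--------------------------------------------------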
@ -1,34 +0,0 @@
[[java-geo-queries]]
=== Geo queries

Elasticsearch supports two types of geo data:
`geo_point` fields which support lat/lon pairs, and
`geo_shape` fields, which support points, lines, circles, polygons, multi-polygons, etc.

The queries in this group are:

<<java-query-dsl-geo-shape-query,`geo_shape`>> query::

Finds documents with geo-shapes which either intersect, are contained by, or
do not intersect with the specified geo-shape.

<<java-query-dsl-geo-bounding-box-query,`geo_bounding_box`>> query::

Finds documents with geo-points that fall into the specified rectangle.

<<java-query-dsl-geo-distance-query,`geo_distance`>> query::

Finds documents with geo-points within the specified distance of a central
point.

<<java-query-dsl-geo-polygon-query,`geo_polygon`>> query::

Finds documents with geo-points within the specified polygon.

include::geo-shape-query.asciidoc[]

include::geo-bounding-box-query.asciidoc[]

include::geo-distance-query.asciidoc[]

include::geo-polygon-query.asciidoc[]
@ -1,56 +0,0 @@
[[java-query-dsl-geo-shape-query]]
==== GeoShape Query

See {ref}/query-dsl-geo-shape-query.html[Geo Shape Query]

Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are
optional dependencies. Consequently you must add `Spatial4J` and `JTS`
to your classpath in order to use this type:

[source,xml]
-----------------------------------------------
<dependency>
    <groupId>org.locationtech.spatial4j</groupId>
    <artifactId>spatial4j</artifactId>
    <version>0.7</version> <1>
</dependency>

<dependency>
    <groupId>org.locationtech.jts</groupId>
    <artifactId>jts-core</artifactId>
    <version>1.15.0</version> <2>
    <exclusions>
        <exclusion>
            <groupId>xerces</groupId>
            <artifactId>xercesImpl</artifactId>
        </exclusion>
    </exclusions>
</dependency>
-----------------------------------------------
<1> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.spatial4j%22%20AND%20a%3A%22spatial4j%22[Maven Central]
<2> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.jts%22%20AND%20a%3A%22jts-core%22[Maven Central]

[source,java]
--------------------------------------------------
// Import ShapeRelation and ShapeBuilder
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.common.geo.builders.ShapeBuilder;
--------------------------------------------------

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[geo_shape]
--------------------------------------------------
<1> field
<2> shape
<3> relation can be `ShapeRelation.CONTAINS`, `ShapeRelation.WITHIN`, `ShapeRelation.INTERSECTS` or `ShapeRelation.DISJOINT`

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[indexed_geo_shape]
--------------------------------------------------
<1> field
<2> The ID of the document containing the pre-indexed shape.
<3> relation
<4> Name of the index where the pre-indexed shape is. Defaults to 'shapes'.
<5> The field specified as path containing the pre-indexed shape. Defaults to 'shape'.
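
A hedged sketch of building a shape and querying with it. The builder class names here reflect one 6.x layout and may differ between client versions; coordinates are invented, and `geoShapeQuery` may declare `IOException` in some versions:

[source,java]
--------------------------------------------------
import java.util.ArrayList;
import java.util.List;
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.common.geo.builders.MultiPointBuilder;
import org.locationtech.jts.geom.Coordinate;
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
List<Coordinate> points = new ArrayList<>();
points.add(new Coordinate(0, 0));
points.add(new Coordinate(0, 10));
QueryBuilder qb = geoShapeQuery(
            "pin.location",                // field
            new MultiPointBuilder(points)) // shape
        .relation(ShapeRelation.WITHIN);   // relation
--------------------------------------------------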
@ -1,23 +0,0 @@
[[java-query-dsl-has-child-query]]
==== Has Child Query

See {ref}/query-dsl-has-child-query.html[Has Child Query]

When using the `has_child` query it is important to use the `PreBuiltTransportClient` instead of the regular client:

[source,java]
--------------------------------------------------
Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
TransportClient client = new PreBuiltTransportClient(settings);
client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300)));
--------------------------------------------------

Otherwise the parent-join module doesn't get loaded and the `has_child` query can't be used from the transport client.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[has_child]
--------------------------------------------------
<1> child type to query against
<2> query
<3> score mode can be `ScoreMode.Avg`, `ScoreMode.Max`, `ScoreMode.Min`, `ScoreMode.None` or `ScoreMode.Total`
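
A hedged sketch of the builder call from the parent-join module (type and term values invented):

[source,java]
--------------------------------------------------
import org.apache.lucene.search.join.ScoreMode;
import static org.elasticsearch.index.query.QueryBuilders.termQuery;
import static org.elasticsearch.join.query.JoinQueryBuilders.hasChildQuery;

QueryBuilder qb = hasChildQuery(
        "blog_tag",                     // child type to query against
        termQuery("tag", "something"),  // query on the child documents
        ScoreMode.None);                // score mode
--------------------------------------------------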
@ -1,23 +0,0 @@
[[java-query-dsl-has-parent-query]]
==== Has Parent Query

See {ref}/query-dsl-has-parent-query.html[Has Parent Query]

When using the `has_parent` query it is important to use the `PreBuiltTransportClient` instead of the regular client:

[source,java]
--------------------------------------------------
Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
TransportClient client = new PreBuiltTransportClient(settings);
client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300)));
--------------------------------------------------

Otherwise the parent-join module doesn't get loaded and the `has_parent` query can't be used from the transport client.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[has_parent]
--------------------------------------------------
<1> parent type to query against
<2> query
<3> whether the score from the parent hit should propagate to the child hit
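
A hedged sketch of the equivalent parent-join builder call (type and term values invented):

[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.QueryBuilders.termQuery;
import static org.elasticsearch.join.query.JoinQueryBuilders.hasParentQuery;

QueryBuilder qb = hasParentQuery(
        "blog",                         // parent type to query against
        termQuery("tag", "something"),  // query on the parent documents
        false);                         // whether to propagate the parent score
--------------------------------------------------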
@ -1,10 +0,0 @@
[[java-query-dsl-ids-query]]
==== Ids Query

See {ref}/query-dsl-ids-query.html[Ids Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[ids]
--------------------------------------------------
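
A sketch of the builder call (IDs invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = idsQuery().addIds("1", "4", "100");
--------------------------------------------------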
@ -1,28 +0,0 @@
[[java-joining-queries]]
=== Joining queries

Performing full SQL-style joins in a distributed system like Elasticsearch is
prohibitively expensive. Instead, Elasticsearch offers two forms of join
which are designed to scale horizontally.

<<java-query-dsl-nested-query,`nested` query>>::

Documents may contain fields of type `nested`. These
fields are used to index arrays of objects, where each object can be queried
(with the `nested` query) as an independent document.

<<java-query-dsl-has-child-query,`has_child`>> and <<java-query-dsl-has-parent-query,`has_parent`>> queries::

A parent-child relationship can exist between two
document types within a single index. The `has_child` query returns parent
documents whose child documents match the specified query, while the
`has_parent` query returns child documents whose parent document matches the
specified query.

include::nested-query.asciidoc[]

include::has-child-query.asciidoc[]

include::has-parent-query.asciidoc[]
@ -1,9 +0,0 @@
[[java-query-dsl-match-all-query]]
=== Match All Query

See {ref}/query-dsl-match-all-query.html[Match All Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[match_all]
--------------------------------------------------
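
The builder call behind the snippet is a one-liner:

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = matchAllQuery();
--------------------------------------------------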
@ -1,11 +0,0 @@
[[java-query-dsl-match-query]]
==== Match Query

See {ref}/query-dsl-match-query.html[Match Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[match]
--------------------------------------------------
<1> field
<2> text
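
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = matchQuery(
        "name",                   // field
        "kimchy elasticsearch");  // text
--------------------------------------------------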
@ -1,13 +0,0 @@
[[java-query-dsl-mlt-query]]
==== More Like This Query

See {ref}/query-dsl-mlt-query.html[More Like This Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[more_like_this]
--------------------------------------------------
<1> fields
<2> text
<3> ignore threshold
<4> maximum number of terms in generated queries
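
A hedged sketch of the builder call (field names and text invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = moreLikeThisQuery(
            new String[] { "name.first", "name.last" },  // fields
            new String[] { "text like this one" },       // text
            null)                                        // no "like" documents
        .minTermFreq(1)                                  // ignore threshold
        .maxQueryTerms(12);                              // max number of terms in generated queries
--------------------------------------------------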
@ -1,11 +0,0 @@
[[java-query-dsl-multi-match-query]]
==== Multi Match Query

See {ref}/query-dsl-multi-match-query.html[Multi Match Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[multi_match]
--------------------------------------------------
<1> text
<2> fields
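
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = multiMatchQuery(
        "kimchy elasticsearch",  // text
        "user", "message");      // fields
--------------------------------------------------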
@ -1,12 +0,0 @@
[[java-query-dsl-nested-query]]
==== Nested Query

See {ref}/query-dsl-nested-query.html[Nested Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[nested]
--------------------------------------------------
<1> path to nested document
<2> your query. Any fields referenced inside the query must use the complete (fully qualified) path.
<3> score mode can be `ScoreMode.Max`, `ScoreMode.Min`, `ScoreMode.Total`, `ScoreMode.Avg` or `ScoreMode.None`
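
A hedged sketch of the builder call (path and field names invented; note the fully qualified paths inside the inner query):

[source,java]
--------------------------------------------------
import org.apache.lucene.search.join.ScoreMode;
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = nestedQuery(
        "obj1",                                     // path to nested document
        boolQuery()                                 // your query; fields use the full path
                .must(matchQuery("obj1.name", "blue"))
                .must(rangeQuery("obj1.count").gt(5)),
        ScoreMode.Avg);                             // score mode
--------------------------------------------------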
@ -1,61 +0,0 @@
[[java-query-percolate-query]]
==== Percolate Query

See:
* {ref}/query-dsl-percolate-query.html[Percolate Query]

When using the `percolate` query it is important to use the `PreBuiltTransportClient` instead of the regular client, so that the percolator module is loaded:

[source,java]
--------------------------------------------------
Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
TransportClient client = new PreBuiltTransportClient(settings);
client.addTransportAddress(new TransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300)));
--------------------------------------------------

Before the `percolate` query can be used, a `percolator` mapping should be added and
a document containing a percolator query should be indexed:

[source,java]
--------------------------------------------------
// Create an index with a percolator field with the name 'query':
client.admin().indices().prepareCreate("myIndexName")
        .addMapping("_doc", "query", "type=percolator", "content", "type=text")
        .get();

// This is the query we're registering in the percolator
QueryBuilder qb = termQuery("content", "amazing");

// Index the query = register it in the percolator
client.prepareIndex("myIndexName", "_doc", "myDesignatedQueryName")
        .setSource(jsonBuilder()
            .startObject()
            .field("query", qb) // Register the query
            .endObject())
        .setRefreshPolicy(RefreshPolicy.IMMEDIATE) // Needed when the query should be available immediately
        .get();
--------------------------------------------------

This indexes the above term query under the name
*myDesignatedQueryName*.

In order to check a document against the registered queries, use this
code:

[source,java]
--------------------------------------------------
// Build a document to check against the percolator
XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject();
docBuilder.field("content", "This is amazing!");
docBuilder.endObject(); // End of the JSON root object

PercolateQueryBuilder percolateQuery = new PercolateQueryBuilder("query", "_doc", BytesReference.bytes(docBuilder));

// Percolate, by executing the percolator query in the query DSL:
SearchResponse response = client.prepareSearch("myIndexName")
        .setQuery(percolateQuery)
        .get();

// Iterate over the results: each hit is a registered query that matches the document
for (SearchHit hit : response.getHits()) {
    // handle the matching query, e.g. via hit.getId()
}
--------------------------------------------------
@ -1,11 +0,0 @@
[[java-query-dsl-prefix-query]]
==== Prefix Query

See {ref}/query-dsl-prefix-query.html[Prefix Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[prefix]
--------------------------------------------------
<1> field
<2> prefix
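
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = prefixQuery(
        "brand",   // field
        "heine");  // prefix
--------------------------------------------------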
@ -1,9 +0,0 @@
[[java-query-dsl-query-string-query]]
==== Query String Query

See {ref}/query-dsl-query-string-query.html[Query String Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[query_string]
--------------------------------------------------
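
A sketch of the builder call (query string invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = queryStringQuery("+kimchy -elasticsearch");
--------------------------------------------------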
@ -1,22 +0,0 @@
[[java-query-dsl-range-query]]
==== Range Query

See {ref}/query-dsl-range-query.html[Range Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[range]
--------------------------------------------------
<1> field
<2> from
<3> to
<4> whether to include the lower value: `from` is treated as `gt` when `false`, or `gte` when `true`
<5> whether to include the upper value: `to` is treated as `lt` when `false`, or `lte` when `true`

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[range_simplified]
--------------------------------------------------
<1> field
<2> set `from` to 10 and `includeLower` to `true`
<3> set `to` to 20 and `includeUpper` to `false`
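
A sketch of both forms (field names and bounds invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = rangeQuery("price")  // field
        .from(5)                       // from
        .to(10)                        // to
        .includeLower(true)            // from is treated as gte
        .includeUpper(false);          // to is treated as lt

QueryBuilder simplified = rangeQuery("age")
        .gte("10")   // from 10, lower bound included
        .lt("20");   // to 20, upper bound excluded
--------------------------------------------------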
@ -1,11 +0,0 @@
[[java-query-dsl-regexp-query]]
==== Regexp Query

See {ref}/query-dsl-regexp-query.html[Regexp Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[regexp]
--------------------------------------------------
<1> field
<2> regexp
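
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = regexpQuery(
        "name.first",  // field
        "s.*y");       // regexp
--------------------------------------------------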
@ -1,29 +0,0 @@
[[java-query-dsl-script-query]]
==== Script Query

See {ref}/query-dsl-script-query.html[Script Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[script_inline]
--------------------------------------------------
<1> inlined script

If you have a script named `myscript.painless` stored on each data node, with the following content:

[source,painless]
--------------------------------------------------
doc['num1'].value > params.param1
--------------------------------------------------

You can then use it with:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[script_file]
--------------------------------------------------
<1> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED`
<2> Scripting engine
<3> Script name
<4> Parameters as a `Map<String, Object>`
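
For the inline case, a hedged sketch of the builder call (script body invented):

[source,java]
--------------------------------------------------
import org.elasticsearch.script.Script;
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = scriptQuery(
        new Script("doc['num1'].value > 1"));  // inlined painless script
--------------------------------------------------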
@ -1,9 +0,0 @@
[[java-query-dsl-simple-query-string-query]]
==== Simple Query String Query

See {ref}/query-dsl-simple-query-string-query.html[Simple Query String Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[simple_query_string]
--------------------------------------------------
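
A sketch of the builder call (query string invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = simpleQueryStringQuery("+kimchy -elasticsearch");
--------------------------------------------------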
@ -1,11 +0,0 @@
[[java-query-dsl-span-containing-query]]
==== Span Containing Query

See {ref}/query-dsl-span-containing-query.html[Span Containing Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[span_containing]
--------------------------------------------------
<1> `big` part
<2> `little` part
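
A hedged sketch of the builder call (field and term values invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = spanContainingQuery(
        spanNearQuery(spanTermQuery("field1", "bar"), 5)  // big part
                .addClause(spanTermQuery("field1", "baz"))
                .inOrder(true),
        spanTermQuery("field1", "foo"));                  // little part
--------------------------------------------------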
@ -1,11 +0,0 @@
[[java-query-dsl-span-first-query]]
==== Span First Query

See {ref}/query-dsl-span-first-query.html[Span First Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[span_first]
--------------------------------------------------
<1> query
<2> max end position
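
A sketch of the builder call (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = spanFirstQuery(
        spanTermQuery("user", "kimchy"),  // query
        3);                               // max end position
--------------------------------------------------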
@ -1,11 +0,0 @@
[[java-query-dsl-span-multi-term-query]]
==== Span Multi Term Query

See {ref}/query-dsl-span-multi-term-query.html[Span Multi Term Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[span_multi]
--------------------------------------------------
<1> Can be any builder extending the `MultiTermQueryBuilder` class. For example: `FuzzyQueryBuilder`,
`PrefixQueryBuilder`, `RangeQueryBuilder`, `RegexpQueryBuilder` or `WildcardQueryBuilder`.
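
A sketch of wrapping a multi-term query as a span query (names invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = spanMultiTermQueryBuilder(
        prefixQuery("user", "ki"));  // any MultiTermQueryBuilder, e.g. a prefix query
--------------------------------------------------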
@ -1,12 +0,0 @@
[[java-query-dsl-span-near-query]]
==== Span Near Query

See {ref}/query-dsl-span-near-query.html[Span Near Query]

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[span_near]
--------------------------------------------------
<1> span term queries
<2> slop factor: the maximum number of intervening unmatched positions
<3> whether matches are required to be in-order
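
A sketch of the builder chain (field and term values invented):

[source,java]
--------------------------------------------------
// Assumes: import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = spanNearQuery(
            spanTermQuery("field", "value1"), 12)     // initial span term query and slop factor
        .addClause(spanTermQuery("field", "value2"))  // more span term queries
        .addClause(spanTermQuery("field", "value3"))
        .inOrder(false);                              // whether matches must be in order
--------------------------------------------------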