parent 20423a43e7
commit 0442b737be

@@ -2,12 +2,14 @@
== Clients

[[community-perl]]
=== Perl

* http://github.com/clintongormley/ElasticSearch.pm[ElasticSearch.pm]:
  Perl client.

[[community-python]]
=== Python

* http://github.com/aparo/pyes[pyes]:

@@ -29,6 +31,7 @@
Python Map-Reduce engine targeting Elasticsearch indices.

[[community-ruby]]
=== Ruby

* http://github.com/karmi/tire[Tire]:

@@ -44,6 +47,7 @@
Ruby client + Rails integration.

[[community-php]]
=== PHP

* http://github.com/ruflin/Elastica[Elastica]:

@@ -55,12 +59,14 @@
PHP client, one-to-one mapping with query DSL, fluid interface.

[[community-java]]
=== Java

* https://github.com/searchbox-io/Jest[Jest]:
  Java REST client.

[[community-javascript]]
=== Javascript

* https://github.com/fullscale/elastic.js[Elastic.js]:

@@ -75,6 +81,7 @@
* https://github.com/printercu/elastics[elastics]: Simple tiny client that just works.

[[community-dotnet]]
=== .Net

* https://github.com/Yegoroff/PlainElastic.Net[PlainElastic.Net]:

@@ -87,6 +94,7 @@
.NET client.

[[community-scala]]
=== Scala

* https://github.com/sksamuel/elastic4s[elastic4s]:

@@ -99,12 +107,14 @@
Scala client.

[[community-clojure]]
=== Clojure

* http://github.com/clojurewerkz/elastisch[Elastisch]:
  Clojure client.

[[community-go]]
=== Go

* https://github.com/mattbaird/elastigo[elastigo]:

@@ -114,6 +124,7 @@
Go lib.

[[community-erlang]]
=== Erlang

* http://github.com/tsloughter/erlastic_search[erlastic_search]:

@@ -128,12 +139,14 @@
environment.

[[community-eventmachine]]
=== EventMachine

* http://github.com/vangberg/em-elasticsearch[em-elasticsearch]:
  elasticsearch library for EventMachine.

[[community-command-line]]
=== Command Line

* https://github.com/elasticsearch/es2unix[es2unix]:

@@ -143,17 +156,20 @@
command line shell for elasticsearch.

[[community-ocaml]]
=== OCaml

* https://github.com/tovbinm/ocaml-elasticsearch[ocaml-elasticsearch]:
  OCaml client for Elasticsearch.

[[community-smalltalk]]
=== Smalltalk

* http://ss3.gemstone.com/ss/Elasticsearch.html[Elasticsearch] -
  Smalltalk client for Elasticsearch.

[[community-cold-fusion]]
=== Cold Fusion

* https://github.com/jasonfill/ColdFusion-ElasticSearch-Client[ColdFusion-ElasticSearch-Client]

@@ -6,6 +6,7 @@ obtained, all of the Elasticsearch APIs can be executed on it. Each Groovy
API is exposed using three different mechanisms.

[[closure]]
=== Closure Request

The first type is to simply provide the request as a Closure, which

@@ -57,6 +58,7 @@ indexR.failure = {Throwable t ->
--------------------------------------------------

[[request]]
=== Request

This option allows passing the actual instance of the request (instead

@@ -81,6 +83,7 @@ println "Indexed $indexR.response.id into $indexR.response.index/$indexR.respons
--------------------------------------------------

[[java-like]]
=== Java Like

The last option is to provide an actual instance of the API request, and

@@ -7,6 +7,7 @@ get a client is by starting an embedded `Node` which acts as a node
within the cluster.

[[node-client]]
=== Node Client

A Node based client is the simplest form to get a `GClient` to start

@@ -17,6 +17,7 @@ manner. The execution options for each API follow a similar manner and
covered in <<anatomy>>.

[[maven]]
=== Maven Repository

The Groovy API is hosted on

@@ -41,6 +41,7 @@ The format of the search `Closure` follows the same JSON syntax as the
{ref}/search-search.html[Search API] request.

[[more-examples]]
=== More examples

Term query where multiple values are provided (see

@@ -29,6 +29,7 @@ major versions.
______________________________________________________________________________________________________________________________________________________________

[[node-client]]
=== Node Client

Instantiating a node based client is the simplest way to get a `Client`

@@ -120,6 +121,7 @@ node.close();
--------------------------------------------------

[[transport-client]]
=== Transport Client

The `TransportClient` connects remotely to an elasticsearch cluster

@@ -17,6 +17,7 @@ For more information on the delete operation, check out the
{ref}/docs-delete.html[delete API] docs.

[[operation-threading]]
=== Operation Threading

The delete API allows setting the threading model the operation will be

@@ -30,6 +30,7 @@ import org.elasticsearch.search.facet.FacetBuilders.*;
=== Facets

[[terms]]
==== Terms Facet

Here is how you can use

@@ -75,6 +76,7 @@ for (TermsFacet.Entry entry : f) {
--------------------------------------------------
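The `TermsFacet` fragment above corresponds to a JSON search body using the classic (pre-aggregations) facets DSL. A minimal sketch, building that payload with Python's standard library; the field name `tag` and facet name `tag_facet` are illustrative, not taken from the original:

```python
import json

# Hypothetical search body with a terms facet (classic facets DSL);
# "tag" and "tag_facet" are illustrative names.
body = {
    "query": {"match_all": {}},
    "facets": {
        "tag_facet": {
            "terms": {"field": "tag", "size": 10}
        }
    },
}

payload = json.dumps(body, sort_keys=True)
```

The Java builders in the surrounding hunks generate bodies of this shape under the hood.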

[[range]]
==== Range Facet

Here is how you can use

@@ -123,6 +125,7 @@ for (RangeFacet.Entry entry : f) {
--------------------------------------------------

[[histogram]]
==== Histogram Facet

Here is how you can use

@@ -164,6 +167,7 @@ for (HistogramFacet.Entry entry : f) {
--------------------------------------------------

[[date-histogram]]
==== Date Histogram Facet

Here is how you can use

@@ -206,6 +210,7 @@ for (DateHistogramFacet.Entry entry : f) {
--------------------------------------------------

[[filter]]
==== Filter Facet (not facet filter)

Here is how you can use

@@ -248,6 +253,7 @@ f.getCount(); // Number of docs that matched
--------------------------------------------------

[[query]]
==== Query Facet

Here is how you can use

@@ -287,6 +293,7 @@ See <<query-dsl-queries,Queries>> to
learn how to build queries using Java.

[[statistical]]
==== Statistical

Here is how you can use

@@ -330,6 +337,7 @@ f.getVariance(); // Variance
--------------------------------------------------

[[terms-stats]]
==== Terms Stats Facet

Here is how you can use

@@ -378,6 +386,7 @@ for (TermsStatsFacet.Entry entry : f) {
--------------------------------------------------

[[geo-distance]]
==== Geo Distance Facet

Here is how you can use

@@ -429,6 +438,7 @@ for (GeoDistanceFacet.Entry entry : f) {
--------------------------------------------------

[[facet-filter]]
=== Facet filters (not Filter Facet)

By default, facets are applied on the query resultset whatever filters

@@ -469,6 +479,7 @@ See documentation on how to build
<<query-dsl-filters,Filters>>.

[[scope]]
=== Scope

By default, facets are computed within the query resultset. But, you can

@@ -5,6 +5,7 @@ The index API allows one to index a typed JSON document into a specific
index and make it searchable.

[[generate]]
=== Generate JSON document

There are different ways of generating a JSON document:

@@ -41,6 +42,7 @@ String json = "{" +
--------------------------------------------------

[[using-map]]
==== Using Map

`Map` is a key/value pair collection. It represents very well a JSON

@@ -55,6 +57,7 @@ json.put("message","trying out Elastic Search");
--------------------------------------------------
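The Java `Map` example above has a direct analogue in any language with a dictionary type: build the document as a plain mapping, then serialize it to JSON. A minimal sketch using Python's standard library; the field values mirror the snippet:

```python
import json

# Build the document as a plain dict (analogous to the Java Map example).
doc = {
    "user": "kimchy",                        # illustrative field
    "message": "trying out Elastic Search",  # value from the snippet above
}

# Serialize to the JSON string that would be sent as the request body.
json_doc = json.dumps(doc)
```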

[[beans]]
==== Serialize your beans

Elasticsearch already uses Jackson but shades it under

@@ -88,6 +91,7 @@ String json = mapper.writeValueAsString(yourbeaninstance);
--------------------------------------------------

[[helpers]]
==== Use Elasticsearch helpers

Elasticsearch provides built-in helpers to generate JSON content.

@@ -118,6 +122,7 @@ String json = builder.string();
--------------------------------------------------

[[index-doc]]
=== Index document

The following example indexes a JSON document into an index called

@@ -20,6 +20,7 @@ Note that you can easily print (aka debug) JSON generated queries using
`toString()` method on `FilterBuilder` object.

[[and-filter]]
=== And Filter

See {ref}/query-dsl-and-filter.html[And Filter]

@@ -37,6 +38,7 @@ Note that you can cache the result using
`AndFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[bool-filter]]
=== Bool Filter

See {ref}/query-dsl-bool-filter.html[Bool Filter]

@@ -55,6 +57,7 @@ Note that you can cache the result using
`BoolFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[exists-filter]]
=== Exists Filter

See {ref}/query-dsl-exists-filter.html[Exists Filter].

@@ -66,6 +69,7 @@ FilterBuilders.existsFilter("user");
--------------------------------------------------
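The `existsFilter("user")` builder above produces the `exists` filter JSON. A sketch of how that filter is typically attached to a query via the classic `filtered` query wrapper (the wrapper is an assumption about usage, not part of the snippet):

```python
import json

# A filtered query applying the exists filter from the snippet above.
body = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {"exists": {"field": "user"}},
        }
    }
}

payload = json.dumps(body)
```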

[[ids-filter]]
=== Ids Filter

See {ref}/query-dsl-ids-filter.html[IDs Filter]

@@ -80,6 +84,7 @@ FilterBuilders.idsFilter().addIds("1", "4", "100");
--------------------------------------------------

[[limit-filter]]
=== Limit Filter

See {ref}/query-dsl-limit-filter.html[Limit Filter]

@@ -91,6 +96,7 @@ FilterBuilders.limitFilter(100);
--------------------------------------------------

[[type-filter]]
=== Type Filter

See {ref}/query-dsl-type-filter.html[Type Filter]

@@ -102,6 +108,7 @@ FilterBuilders.typeFilter("my_type");
--------------------------------------------------

[[geo-bbox-filter]]
=== Geo Bounding Box Filter

See {ref}/query-dsl-geo-bounding-box-filter.html[Geo

@@ -119,6 +126,7 @@ Note that you can cache the result using
<<query-dsl-filters-caching>>.

[[geo-distance-filter]]
=== GeoDistance Filter

See {ref}/query-dsl-geo-distance-filter.html[Geo

@@ -138,6 +146,7 @@ Note that you can cache the result using
<<query-dsl-filters-caching>>.

[[geo-distance-range-filter]]
=== Geo Distance Range Filter

See {ref}/query-dsl-geo-distance-range-filter.html[Geo

@@ -160,6 +169,7 @@ Note that you can cache the result using
<<query-dsl-filters-caching>>.

[[geo-poly-filter]]
=== Geo Polygon Filter

See {ref}/query-dsl-geo-polygon-filter.html[Geo Polygon

@@ -178,6 +188,7 @@ Note that you can cache the result using
<<query-dsl-filters-caching>>.

[[geo-shape-filter]]
=== Geo Shape Filter

See {ref}/query-dsl-geo-shape-filter.html[Geo Shape

@@ -237,6 +248,7 @@ filter = FilterBuilders.geoShapeFilter("location", "New Zealand", "countries")
--------------------------------------------------

[[has-child-parent-filter]]
=== Has Child / Has Parent Filters

See:

@@ -255,6 +267,7 @@ FilterBuilders.hasParentFilter("blog",
--------------------------------------------------

[[match-all-filter]]
=== Match All Filter

See {ref}/query-dsl-match-all-filter.html[Match All Filter]

@@ -265,6 +278,7 @@ FilterBuilders.matchAllFilter();
--------------------------------------------------

[[missing-filter]]
=== Missing Filter

See {ref}/query-dsl-missing-filter.html[Missing Filter]

@@ -278,6 +292,7 @@ FilterBuilders.missingFilter("user")
--------------------------------------------------

[[not-filter]]
=== Not Filter

See {ref}/query-dsl-not-filter.html[Not Filter]

@@ -290,6 +305,7 @@ FilterBuilders.notFilter(
--------------------------------------------------

[[numeric-range-filter]]
=== Numeric Range Filter

See {ref}/query-dsl-numeric-range-filter.html[Numeric

@@ -309,6 +325,7 @@ Note that you can cache the result using
<<query-dsl-filters-caching>>.

[[or-filter]]
=== Or Filter

See {ref}/query-dsl-or-filter.html[Or Filter]

@@ -326,6 +343,7 @@ Note that you can cache the result using
`OrFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[prefix-filter]]
=== Prefix Filter

See {ref}/query-dsl-prefix-filter.html[Prefix Filter]

@@ -340,6 +358,7 @@ Note that you can cache the result using
`PrefixFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[query-filter]]
=== Query Filter

See {ref}/query-dsl-query-filter.html[Query Filter]

@@ -356,6 +375,7 @@ Note that you can cache the result using
`QueryFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[range-filter]]
=== Range Filter

See {ref}/query-dsl-range-filter.html[Range Filter]

@@ -379,6 +399,7 @@ Note that you can ask not to cache the result using
`RangeFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[script-filter]]
=== Script Filter

See {ref}/query-dsl-script-filter.html[Script Filter]

@@ -395,6 +416,7 @@ Note that you can cache the result using
`ScriptFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[term-filter]]
=== Term Filter

See {ref}/query-dsl-term-filter.html[Term Filter]

@@ -409,6 +431,7 @@ Note that you can ask not to cache the result using
`TermFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[terms-filter]]
=== Terms Filter

See {ref}/query-dsl-terms-filter.html[Terms Filter]

@@ -425,6 +448,7 @@ Note that you can ask not to cache the result using
`TermsFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.

[[nested-filter]]
=== Nested Filter

See {ref}/query-dsl-nested-filter.html[Nested Filter]

@@ -444,6 +468,7 @@ Note that you can ask not to cache the result using

[[query-dsl-filters-caching]]

[[caching]]
=== Caching

By default, some filters are cached or not cached. You can have a fine

@@ -22,6 +22,7 @@ The `QueryBuilder` can then be used with any API that accepts a query,
such as `count` and `search`.

[[match]]
=== Match Query

See {ref}/query-dsl-match-query.html[Match Query]

@@ -33,6 +34,7 @@ QueryBuilder qb = QueryBuilders.matchQuery("name", "kimchy elasticsearch");
--------------------------------------------------

[[multimatch]]
=== MultiMatch Query

See {ref}/query-dsl-multi-match-query.html[MultiMatch

@@ -47,6 +49,7 @@ QueryBuilder qb = QueryBuilders.multiMatchQuery(
--------------------------------------------------

[[bool]]
=== Boolean Query

See {ref}/query-dsl-bool-query.html[Boolean Query]

@@ -63,6 +66,7 @@ QueryBuilder qb = QueryBuilders
--------------------------------------------------
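The `boolQuery()` builder referenced above combines `must`, `must_not`, and `should` clauses. A sketch of the JSON it generates; all field names and values here are illustrative:

```python
import json

# JSON shape of a bool query with must/must_not/should clauses;
# field names ("user", "age", "tag") are illustrative.
body = {
    "bool": {
        "must": [{"term": {"user": "kimchy"}}],
        "must_not": [{"range": {"age": {"from": 10, "to": 20}}}],
        "should": [
            {"term": {"tag": "wow"}},
            {"term": {"tag": "elasticsearch"}},
        ],
    }
}

payload = json.dumps(body)
```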

[[boosting]]
=== Boosting Query

See {ref}/query-dsl-boosting-query.html[Boosting Query]

@@ -77,6 +81,7 @@ QueryBuilders.boostingQuery()
--------------------------------------------------

[[ids]]
=== IDs Query

See {ref}/query-dsl-ids-query.html[IDs Query]

@@ -88,6 +93,7 @@ QueryBuilders.idsQuery().ids("1", "2");
--------------------------------------------------

[[custom-score]]
=== Custom Score Query

See {ref}/query-dsl-custom-score-query.html[Custom Score

@@ -106,6 +112,7 @@ QueryBuilders.customScoreQuery(QueryBuilders.matchAllQuery())
--------------------------------------------------

[[custom-boost-factor]]
=== Custom Boost Factor Query

See

@@ -119,6 +126,7 @@ QueryBuilders.customBoostFactorQuery(QueryBuilders.matchAllQuery()) // Your quer
--------------------------------------------------

[[constant-score]]
=== Constant Score Query

See {ref}/query-dsl-constant-score-query.html[Constant

@@ -136,6 +144,7 @@ QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("name","kimchy"))
--------------------------------------------------

[[dismax]]
=== Disjunction Max Query

See {ref}/query-dsl-dis-max-query.html[Disjunction Max

@@ -151,6 +160,7 @@ QueryBuilders.disMaxQuery()
--------------------------------------------------

[[field]]
=== Field Query

See {ref}/query-dsl-field-query.html[Field Query]

@@ -165,6 +175,7 @@ QueryBuilders.queryString("+kimchy -dadoonet").field("name");
--------------------------------------------------

[[flt]]
=== Fuzzy Like This (Field) Query (flt and flt_field)

See:

@@ -186,6 +197,7 @@ QueryBuilders.fuzzyLikeThisFieldQuery("name.first") // Only on singl
--------------------------------------------------

[[fuzzy]]
=== Fuzzy Query

See {ref}/query-dsl-fuzzy-query.html[Fuzzy Query]

@@ -197,6 +209,7 @@ QueryBuilder qb = QueryBuilders.fuzzyQuery("name", "kimzhy");
--------------------------------------------------

[[has-child-parent]]
=== Has Child / Has Parent

See:

@@ -215,6 +228,7 @@ QueryBuilders.hasParentQuery("blog",
--------------------------------------------------

[[match-all]]
=== MatchAll Query

See {ref}/query-dsl-match-all-query.html[Match All

@@ -226,7 +240,8 @@ QueryBuilder qb = QueryBuilders.matchAllQuery();
--------------------------------------------------

=== Fuzzy Like This (Field) Query (flt and flt_field)
[[mlt]]
=== More Like This (Field) Query (mlt and mlt_field)

See:
* {ref}/query-dsl-mlt-query.html[More Like This Query]

@@ -249,6 +264,7 @@ QueryBuilders.moreLikeThisFieldQuery("name.first") // Only on singl
--------------------------------------------------

[[prefix]]
=== Prefix Query

See {ref}/query-dsl-prefix-query.html[Prefix Query]

@@ -259,6 +275,7 @@ QueryBuilders.prefixQuery("brand", "heine");
--------------------------------------------------

[[query-string]]
=== QueryString Query

See {ref}/query-dsl-query-string-query.html[QueryString Query]

@@ -269,6 +286,7 @@ QueryBuilder qb = QueryBuilders.queryString("+kimchy -elasticsearch");
--------------------------------------------------

[[range]]
=== Range Query

See {ref}/query-dsl-range-query.html[Range Query]

@@ -326,6 +344,7 @@ QueryBuilders.spanTermQuery("user","kimchy");
--------------------------------------------------

[[term]]
=== Term Query

See {ref}/query-dsl-term-query.html[Term Query]

@@ -336,6 +355,7 @@ QueryBuilder qb = QueryBuilders.termQuery("name", "kimchy");
--------------------------------------------------
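The `termQuery("name", "kimchy")` builder above maps one-to-one onto the JSON query DSL. A minimal sketch of the equivalent body:

```python
import json

# JSON equivalent of QueryBuilders.termQuery("name", "kimchy").
query = {"term": {"name": "kimchy"}}

payload = json.dumps(query)
```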

[[terms]]
=== Terms Query

See {ref}/query-dsl-terms-query.html[Terms Query]

@@ -348,6 +368,7 @@ QueryBuilders.termsQuery("tags", // field
--------------------------------------------------

[[top-children]]
=== Top Children Query

See {ref}/query-dsl-top-children-query.html[Top Children Query]

@@ -364,6 +385,7 @@ QueryBuilders.topChildrenQuery(
--------------------------------------------------

[[wildcard]]
=== Wildcard Query

See {ref}/query-dsl-wildcard-query.html[Wildcard Query]

@@ -375,6 +397,7 @@ QueryBuilders.wildcardQuery("user", "k?mc*");
--------------------------------------------------
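In the wildcard pattern above, `?` matches a single character and `*` matches any run of characters. A sketch of the equivalent JSON body, with a local illustration of the pattern semantics using `fnmatch` (this only demonstrates the glob rules, it is not how Elasticsearch evaluates the query):

```python
import json
from fnmatch import fnmatch  # used only to illustrate the glob semantics

# JSON equivalent of QueryBuilders.wildcardQuery("user", "k?mc*").
query = {"wildcard": {"user": "k?mc*"}}

# "kimchy" matches: k, one char (?), m, c, then any suffix (*).
matches = fnmatch("kimchy", "k?mc*")

payload = json.dumps(query)
```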

[[nested]]
=== Nested Query

See {ref}/query-dsl-nested-query.html[Nested Query]

@@ -391,6 +414,7 @@ QueryBuilders.nestedQuery("obj1", // Path
--------------------------------------------------

[[custom-filters-score]]
=== Custom Filters Score Query

See

@@ -407,6 +431,7 @@ QueryBuilders.customFiltersScoreQuery(
--------------------------------------------------

[[indices]]
=== Indices Query

See {ref}/query-dsl-indices-query.html[Indices Query]

@@ -430,6 +455,7 @@ QueryBuilders.indicesQuery(
--------------------------------------------------

[[geo-shape]]
=== GeoShape Query

See {ref}/query-dsl-geo-shape-query.html[GeoShape Query]

@@ -42,6 +42,7 @@ For more information on the search operation, check out the REST
{ref}/search.html[search] docs.

[[scrolling]]
=== Using scrolls in Java

Read the {ref}/search-request-scroll.html[scroll documentation]

@@ -90,6 +91,7 @@ thread for each local shard.
The default mode is `THREAD_PER_SHARD`.

[[msearch]]
=== MultiSearch API

See {ref}/search-multi-search.html[MultiSearch API Query]

@@ -116,6 +118,7 @@ for (MultiSearchResponse.Item item : sr.responses()) {
--------------------------------------------------
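Over REST, the multi-search request body referenced above is newline-delimited JSON: a header line (index and search options) followed by a search-body line, repeated for each search. A sketch building such a body; the index name and queries are illustrative:

```python
import json

# Each search contributes two NDJSON lines: a header and a body.
# The index name "twitter" and the queries are illustrative.
searches = [
    ({"index": "twitter"}, {"query": {"match_all": {}}}),
    ({"index": "twitter"}, {"query": {"term": {"user": "kimchy"}}}),
]

lines = []
for header, search_body in searches:
    lines.append(json.dumps(header))
    lines.append(json.dumps(search_body))

# The body must end with a trailing newline.
ndjson_body = "\n".join(lines) + "\n"
```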

[[facets]]
=== Using Facets

The following code shows how to add two facets within your search:

@@ -55,6 +55,7 @@ index :
--------------------------------------------------

[float]
[[backwards-compatibility]]
=== Backwards compatibility

All analyzers, tokenizers, and token filters can be configured with a

@@ -14,6 +14,7 @@ character filters, tokenizers and token filters to create
<<analysis-custom-analyzer,custom analyzers>>.
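A custom analyzer is declared in index settings by naming a tokenizer and a chain of token filters. A sketch of that settings shape; the analyzer name `my_analyzer` and the chosen filters are illustrative, not from the original:

```python
import json

# Index settings registering a hypothetical custom analyzer built from
# the standard tokenizer plus lowercase and stop token filters.
settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "stop"],
                }
            }
        }
    }
}

payload = json.dumps(settings)
```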

[float]
[[default-analyzers]]
=== Default Analyzers

An analyzer is registered under a logical name. It can then be

@@ -28,6 +29,7 @@ used just when indexing, and the `default_search` can be used to
configure a default analyzer that will be used just when searching.

[float]
[[aliasing-analyzers]]
=== Aliasing Analyzers

Analyzers can be aliased to have several registered lookup names

@@ -8,6 +8,7 @@ https://github.com/elasticsearch/elasticsearch-analysis-icu[elasticsearch-analys
The plugin includes the following analysis components:

[float]
[[icu-normalization]]
=== ICU Normalization

Normalizes characters as explained

@@ -34,6 +35,7 @@ Here is a sample settings:
--------------------------------------------------

[float]
[[icu-folding]]
=== ICU Folding

Folding of unicode characters based on `UTR#30`. It registers itself

@@ -58,6 +60,7 @@ normally be left out. Sample setting:
--------------------------------------------------

[float]
[[filtering]]
==== Filtering

The folding can be filtered by a set of unicode characters with the

@@ -94,6 +97,7 @@ filter below.
--------------------------------------------------

[float]
[[icu-collation]]
=== ICU Collation

Uses the collation token filter. Allows either specifying the rules for

@@ -48,6 +48,7 @@ $ curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeo
--------------------------------------------------

[float]
[[request-params]]
=== Request Parameters

The cluster health API accepts the following request parameters:

@@ -38,6 +38,7 @@ $ curl -XPOST 'http://localhost:9200/_cluster/nodes/_all/_shutdown'
--------------------------------------------------

[float]
[[delay]]
=== Delay

By default, the shutdown will be executed after a 1 second delay (`1s`).

@@ -81,6 +81,7 @@ curl -XGET 'http://localhost:9200/_nodes/10.0.0.1/stats/process'
The `all` flag can be set to return all the stats.

[float]
[[field-data]]
=== Field data statistics

You can get information about field data memory usage on node

@@ -48,6 +48,7 @@ curl -XGET localhost:9200/_cluster/settings
There is a specific list of settings that can be updated, those include:

[float]
[[settings]]
=== Cluster settings

[float]

@@ -141,6 +142,7 @@ There is a specific list of settings that can be updated, those include:
See <<modules-threadpool>>

[float]
[[index-settings]]
=== Index settings

[float]

@@ -189,6 +191,7 @@ There is a specific list of settings that can be updated, those include:
See <<index-modules-store>>

[float]
[[logger]]
=== Logger

Logger values can also be updated by setting the `logger.` prefix. More

@@ -85,6 +85,7 @@ If using the HTTP API, make sure that the client does not send HTTP
chunks, as this will slow things down.

[float]
[[versioning]]
=== Versioning

Each bulk item can include the version value using the

@@ -94,6 +95,7 @@ support the `version_type`/`_version_type` when using `external`
versioning.

[float]
[[routing]]
=== Routing

Each bulk item can include the routing value using the

@@ -101,12 +103,14 @@ Each bulk item can include the routing value using the
index / delete operation based on the `_routing` mapping.
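Per-item values such as routing are carried on the bulk action metadata lines. A sketch of a bulk body with routing; the index, type, ids, and routing values are illustrative:

```python
import json

# A bulk body: each action is a metadata line, and index actions are
# followed by a document line. All names/values here are illustrative.
actions = [
    ({"index": {"_index": "test", "_type": "type1", "_id": "1",
                "_routing": "user1"}},
     {"field1": "value1"}),
    ({"delete": {"_index": "test", "_type": "type1", "_id": "2",
                 "_routing": "user2"}},
     None),  # delete actions carry no document line
]

lines = []
for meta, doc in actions:
    lines.append(json.dumps(meta))
    if doc is not None:
        lines.append(json.dumps(doc))

# The bulk body must end with a trailing newline.
bulk_body = "\n".join(lines) + "\n"
```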

[float]
[[percolator]]
=== Percolator

Each bulk index action can include a percolate value using the
`_percolate`/`percolate` field.

[float]
[[parent]]
=== Parent

Each bulk item can include the parent value using the `_parent`/`parent`

@@ -114,6 +118,7 @@ field. It automatically follows the behavior of the index / delete
operation based on the `_parent` / `_routing` mapping.

[float]
[[timestamp]]
=== Timestamp

Each bulk item can include the timestamp value using the

@@ -121,6 +126,7 @@ Each bulk item can include the timestamp value using the
the index operation based on the `_timestamp` mapping.

[float]
[[ttl]]
=== TTL

Each bulk item can include the ttl value using the `_ttl`/`ttl` field.

@@ -128,6 +134,7 @@ It automatically follows the behavior of the index operation based on
the `_ttl` mapping.

[float]
[[consistency]]
=== Write Consistency

When making bulk calls, you can require a minimum number of active

@@ -143,6 +150,7 @@ will need to be a single shard active (in this case, `one` and `quorum`
is the same).

[float]
[[refresh]]
=== Refresh

The `refresh` parameter can be set to `true` in order to refresh the

@@ -152,6 +160,7 @@ to expire. Setting it to `true` can trigger additional load, and may
slow down indexing.

[float]
[[update]]
=== Update

When using the `update` action, `_retry_on_conflict` can be used as a field in

@@ -42,6 +42,7 @@ recommended to delete "large chunks of the data in an index", many
times, it's better to simply reindex into a new index.

[float]
[[multiple-indices]]
=== Multiple Indices and Types

The delete by query API can be applied to multiple types within an

@@ -76,6 +77,7 @@ $ curl -XDELETE 'http://localhost:9200/_all/_query?q=tag:wow'
--------------------------------------------------
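The `?q=tag:wow` form above can also be expressed as a query DSL request body. A sketch of building that body; whether the query must be wrapped in a top-level `query` key varies by version, so this assumes the body is the query itself:

```python
import json

# Query DSL equivalent of q=tag:wow (assumed body-is-the-query form).
body = {"term": {"tag": "wow"}}

payload = json.dumps(body)
```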

[float]
[[parameters]]
=== Request Parameters

When executing a delete by query using the query parameter `q`, the

@@ -95,6 +97,7 @@ query.
|=======================================================================

[float]
[[request-body]]
=== Request Body

The delete by query can use the <<query-dsl,Query

@@ -103,12 +106,14 @@ executed and delete all documents. The body content can also be passed
as a REST parameter named `source`.

[float]
[[distributed]]
=== Distributed

The delete by query API is broadcast across all primary shards, and from
there, replicated across all shard replicas.

[float]
[[routing]]
=== Routing

The routing value (a comma separated list of the routing values) can be

@@ -116,6 +121,7 @@ specified to control which shards the delete by query request will be
executed on.

[float]
[[replication-type]]
=== Replication Type

The replication of the operation can be done in an asynchronous manner

@@ -124,6 +130,7 @@ the primary shard). The `replication` parameter can be set to `async`
(defaults to `sync`) in order to enable it.

[float]
[[consistency]]
=== Write Consistency

Control if the operation will be allowed to execute based on the number

@@ -25,6 +25,7 @@ The result of the above delete operation is:
--------------------------------------------------

[float]
[[versioning]]
=== Versioning

Each document indexed is versioned. When deleting a document, the

@@ -33,6 +34,7 @@ trying to delete is actually being deleted and it has not changed in the
meantime.

[float]
[[routing]]
=== Routing

When indexing using the ability to control the routing, in order to

@@ -54,6 +56,7 @@ no routing value is specified, the delete will be broadcasted
automatically to all shards.

[float]
[[parent]]
=== Parent

The `parent` parameter can be set, which will basically be the same as

@@ -66,6 +69,7 @@ index with the automatically generated (and indexed)
field _parent, which is in the format parent_type#parent_id.

[float]
[[index-creation]]
=== Automatic index creation

The delete operation automatically creates an index if it has not been

@@ -76,6 +80,7 @@ before (check out the <<indices-put-mapping,put mapping>>
API for manually creating type mapping).

[float]
[[distributed]]
=== Distributed

The delete operation gets hashed into a specific shard id. It then gets

@@ -83,6 +88,7 @@ redirected into the primary shard within that id group, and replicated
(if needed) to shard replicas within that id group.

[float]
[[replication]]
=== Replication Type

The replication of the operation can be done in an asynchronous manner

@@ -91,6 +97,7 @@ the primary shard). The `replication` parameter can be set to `async`
(defaults to `sync`) in order to enable it.

[float]
[[consistency]]
=== Write Consistency

Control if the operation will be allowed to execute based on the number

@@ -106,6 +113,7 @@ will need to be a single shard active (in this case, `one` and `quorum`
is the same).

[float]
[[refresh]]
=== Refresh

The `refresh` parameter can be set to `true` in order to refresh the

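The routing, replication, and refresh parameters covered above can be combined on one delete call; a sketch (document id and routing value are illustrative, and a running node is assumed):

```shell
# Delete document 1, routed by the user name it was indexed with,
# replicating asynchronously and refreshing the affected shard.
curl -XDELETE 'http://localhost:9200/twitter/tweet/1?routing=kimchy&replication=async&refresh=true'
```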
@@ -39,6 +39,7 @@ curl -XHEAD 'http://localhost:9200/twitter/tweet/1'
--------------------------------------------------

[float]
[[realtime]]
=== Realtime

By default, the get API is realtime, and is not affected by the refresh

@@ -58,12 +59,14 @@ will be loaded from source when using realtime GET, even if the fields
are stored.

[float]
[[type]]
=== Optional Type

The get API allows for `_type` to be optional. Set it to `_all` in order
to fetch the first document matching the id across all types.

[float]
[[fields]]
=== Fields

The get operation allows specifying a set of fields that will be

@@ -80,6 +83,7 @@ from the `_source` (parsed and extracted). It also supports sub objects
extraction from _source, like `obj1.obj2`.

[float]
[[_source]]
=== Getting the _source directly

Use the `/{index}/{type}/{id}/_source` endpoint to get

@@ -100,6 +104,7 @@ curl -XHEAD 'http://localhost:9200/twitter/tweet/1/_source'
--------------------------------------------------

[float]
[[routing]]
=== Routing

When indexing using the ability to control the routing, in order to get

@@ -115,6 +120,7 @@ user. Note, issuing a get without the correct routing, will cause the
document not to be fetched.

[float]
[[preference]]
=== Preference

Controls a `preference` of which shard replicas to execute the get

@@ -139,6 +145,7 @@ Custom (string) value::
user name.

[float]
[[refresh]]
=== Refresh

The `refresh` parameter can be set to `true` in order to refresh the

@@ -148,6 +155,7 @@ this does not cause a heavy load on the system (and slows down
indexing).

[float]
[[distributed]]
=== Distributed

The get operation gets hashed into a specific shard id. It then gets

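A sketch combining the `fields`, `preference`, and optional-type behaviors described above (index, field names, and id are illustrative; a running node is assumed):

```shell
# Fetch selected stored fields, preferring a local shard copy.
curl -XGET 'http://localhost:9200/twitter/tweet/1?fields=title,user&preference=_local'

# Use _all as the type to match the first document with this id across types.
curl -XGET 'http://localhost:9200/twitter/_all/1'
```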
@@ -28,6 +28,7 @@ The result of the above index operation is:
--------------------------------------------------

[float]
[[index-creation]]
=== Automatic Index Creation

The index operation automatically creates an index if it has not been

@@ -69,6 +70,7 @@ for example, set `action.auto_create_index` to `+aaa*,-bbb*,+ccc*,-*` (+
meaning allowed, and - meaning disallowed).

[float]
[[versioning]]
=== Versioning

Each indexed document is given a version number. The associated

@@ -116,6 +118,7 @@ latest version will be used if the index operations are out of order for
whatever reason.

[float]
[[operation-type]]
=== Operation Type

The index operation also accepts an `op_type` that can be used to force

@@ -176,6 +179,7 @@ The result of the above index operation is:
--------------------------------------------------

[float]
[[routing]]
=== Routing

By default, shard placement — or `routing` — is controlled by using a

@@ -203,6 +207,7 @@ and set to be `required`, the index operation will fail if no routing
value is provided or extracted.

[float]
[[parent-children]]
=== Parents & Children

A child document can be indexed by specifying its parent when indexing.

@@ -220,6 +225,7 @@ to be the same as it's parent, unless the routing value is explicitly
specified using the `routing` parameter.

[float]
[[timestamp]]
=== Timestamp

A document can be indexed with a `timestamp` associated with it. The

@@ -241,6 +247,7 @@ processed by the indexing chain. More information can be found on the
page>>.

[float]
[[ttl]]
=== TTL

A document can be indexed with a `ttl` (time to live) associated with

@@ -280,6 +287,7 @@ More information can be found on the
<<mapping-ttl-field,_ttl mapping page>>.

[float]
[[percolate]]
=== Percolate

<<search-percolate,Percolation>> can be performed

@@ -313,6 +321,7 @@ cuts down on parsing overhead, as the parse tree for the document is
simply re-used for percolation.

[float]
[[distributed]]
=== Distributed

The index operation is directed to the primary shard based on its route

@@ -321,6 +330,7 @@ containing this shard. After the primary shard completes the operation,
if needed, the update is distributed to applicable replicas.

[float]
[[consistency]]
=== Write Consistency

To prevent writes from taking place on the "wrong" side of a network

@@ -333,6 +343,7 @@ parameter can be used.
Valid write consistency values are `one`, `quorum`, and `all`.

[float]
[[replication]]
=== Asynchronous Replication

By default, the index operation only returns after all shards within the

@@ -343,6 +354,7 @@ When asynchronous replication is used, the index operation will return
as soon as the operation succeeds on the primary shard.

[float]
[[refresh]]
=== Refresh

To refresh the index immediately after the operation occurs, so that the

@@ -353,6 +365,7 @@ poor performance, both from an indexing and a search standpoint. Note,
getting a document using the get API is completely realtime.

[float]
[[timeout]]
=== Timeout

The primary shard assigned to perform the index operation might not be

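The `op_type` and versioning behaviors described above can be sketched as follows (index, id, and document body are illustrative; a running node is assumed):

```shell
# Fail with a conflict if document 1 already exists (put-if-absent semantics).
curl -XPUT 'http://localhost:9200/twitter/tweet/1?op_type=create' -d '{"user" : "kimchy"}'

# Only succeed if the current version of the document is still 2.
curl -XPUT 'http://localhost:9200/twitter/tweet/1?version=2' -d '{"user" : "kimchy"}'
```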
@@ -71,6 +71,7 @@ curl 'localhost:9200/test/type/_mget' -d '{
--------------------------------------------------

[float]
[[fields]]
=== Fields

Specific fields can be specified to be retrieved per document to get.

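A sketch of per-document field selection in a multi get request (field names are illustrative; a running node and a `test` index are assumed):

```shell
# Retrieve different stored fields for each requested document.
curl 'localhost:9200/test/type/_mget' -d '{
    "docs" : [
        { "_id" : "1", "fields" : ["field1", "field2"] },
        { "_id" : "2", "fields" : ["field3"] }
    ]
}'
```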
@@ -9,6 +9,7 @@ all the relevant modules settings can be provided when creating an index
(and it is actually the recommended way to configure an index).

[float]
[[settings]]
== Index Settings

There are specific index level settings that are not associated with any

@@ -2,6 +2,7 @@
== Index Shard Allocation

[float]
[[filtering]]
=== Shard Allocation Filtering

Allow to control allocation of indices on nodes based on include/exclude

@@ -95,6 +96,7 @@ It can be dynamically set on a live index using the update index
settings API.

[float]
[[disk]]
=== Disk-based Shard Allocation

added[0.90.4]

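A sketch of the include-based filtering described above, applied dynamically via the update index settings API (the `tag` attribute and its values are illustrative; a running node is assumed):

```shell
# Keep shards of the index on nodes whose "tag" attribute is value1 or value2.
curl -XPUT 'localhost:9200/test/_settings' -d '{
    "index.routing.allocation.include.tag" : "value1,value2"
}'
```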
@@ -5,6 +5,7 @@ There are different caching inner modules associated with an index. They
include `filter` and others.

[float]
[[filter]]
=== Filter Cache

The filter cache is responsible for caching the results of filters (used

@@ -12,6 +13,7 @@ in the query). The default implementation of a filter cache (and the one
recommended to use in almost all cases) is the `node` filter cache type.

[float]
[[node-filter]]
==== Node Filter Cache

The `node` filter cache may be configured to use either a percentage of

@@ -30,6 +32,7 @@ configured in the node configuration).
`30%`, or an exact value, like `512mb`.

[float]
[[index-filter]]
==== Index Filter Cache

A filter cache that exists on the index level (on each node). Generally,

@@ -11,6 +11,7 @@ using the builtin postings formats will suite your needs as is described
in the <<mapping-core-types,mapping section>>

[float]
[[postings]]
=== Configuring a custom postings format

Custom postings format can be defined in the index settings in the

@@ -54,6 +55,7 @@ Then we defining your mapping your can use the `my_format` name in the
=== Available postings formats

[float]
[[direct-postings]]
==== Direct postings format

Wraps the default postings format for on-disk storage, but then at read

@@ -78,6 +80,7 @@ This postings format offers the following parameters:
Type name: `direct`

[float]
[[memory-postings]]
==== Memory postings format

A postings format that stores terms & postings (docs, positions,

@@ -102,6 +105,7 @@ following options:
Type name: `memory`

[float]
[[bloom-postings]]
==== Bloom filter posting format

The bloom filter postings format wraps a delegate postings format and on

@@ -127,6 +131,7 @@ following options:
Type name: `bloom`

[float]
[[pulsing-postings]]
==== Pulsing postings format

The pulsing implementation in-lines the posting lists for very low

@@ -151,6 +156,7 @@ following parameters:
Type name: `pulsing`

[float]
[[default-postings]]
==== Default postings format

The default postings format has the following options:

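A sketch of defining a custom postings format in the index settings, as described above (the format name `my_format` is illustrative, and the exact settings layout is an assumption based on the surrounding text; a running node is assumed):

```shell
# Register a custom postings format named my_format at index creation.
curl -XPUT 'localhost:9200/test' -d '{
    "settings" : {
        "index" : {
            "codec" : {
                "postings_format" : {
                    "my_format" : { "type" : "pulsing" }
                }
            }
        }
    }
}'
```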
@@ -25,6 +25,7 @@ example, can be set to `5m` for a 5 minute expiry.
|=======================================================================

[float]
[[filtering]]
=== Filtering fielddata

It is possible to control which field values are loaded into memory,

@@ -122,6 +123,7 @@ The `frequency` and `regex` filters can be combined:
--------------------------------------------------

[float]
[[monitoring]]
=== Monitoring field data

You can monitor memory usage for field data using

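A sketch of the frequency-based fielddata filtering described above, set in a field mapping (index, type, field name, and thresholds are illustrative; a running node is assumed):

```shell
# Only load terms appearing in roughly 0.1%..10% of documents per segment.
curl -XPUT 'localhost:9200/test/tag/_mapping' -d '{
    "tag" : {
        "properties" : {
            "tag" : {
                "type" : "string",
                "fielddata" : {
                    "filter" : {
                        "frequency" : { "min" : 0.001, "max" : 0.1 }
                    }
                }
            }
        }
    }
}'
```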
@@ -16,6 +16,7 @@ environments, they can be throttled using store level throttling. Read
the store module documentation on how to set it.

[float]
[[policy]]
=== Policy

The index merge policy module allows one to control which segments of a

@@ -23,6 +24,7 @@ shard index are to be merged. There are several types of policies with
the default set to `tiered`.

[float]
[[tiered]]
==== tiered

Merges segments of approximately equal size, subject to an allowed

@@ -95,6 +97,7 @@ possibly either increase the `max_merged_segment` or issue an optimize
call for the index (try and aim to issue it on a low traffic time).

[float]
[[log-byte-size]]
==== log_byte_size

A merge policy that merges segments into levels of exponentially

@@ -136,6 +139,7 @@ Defaults to unbounded.
|=======================================================================

[float]
[[log-doc]]
==== log_doc

A merge policy that tries to merge segments into levels of exponentially

@@ -171,6 +175,7 @@ Defaults to unbounded.
|=======================================================================

[float]
[[scheduling]]
=== Scheduling

The merge schedule controls the execution of merge operations once they

@@ -10,6 +10,7 @@ builtin similarities are most likely sufficient as is described in the
<<mapping-core-types,mapping section>>

[float]
[[configuration]]
=== Configuring a similarity

Most existing or custom Similarities have configuration options which

@@ -47,6 +48,7 @@ Here we configure the DFRSimilarity so it can be referenced as
=== Available similarities

[float]
[[default]]
==== Default similarity

The default similarity that is based on the TF/IDF model. This

@@ -60,6 +62,7 @@ similarity has the following option:
Type name: `default`

[float]
[[bm25]]
==== BM25 similarity

Another TF/IDF based similarity that has built-in tf normalization and

@@ -83,6 +86,7 @@ This similarity has the following options:
Type name: `BM25`

[float]
[[drf]]
==== DFR similarity

Similarity that implements the

@@ -104,6 +108,7 @@ All options but the first option need a normalization value.
Type name: `DFR`

[float]
[[ib]]
==== IB similarity

http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/search/similarities/IBSimilarity.html[Information

@@ -117,6 +122,7 @@ based model] . This similarity has the following options:
Type name: `IB`

[float]
[[default]]
==== Default and Base Similarities

By default, Elasticsearch will use whatever similarity is configured as

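A sketch of configuring a similarity and referencing it per field, following the pattern described above (the similarity name, field, and parameter values are illustrative; a running node is assumed):

```shell
# Register a BM25-based similarity and reference it from a field mapping.
curl -XPUT 'localhost:9200/test' -d '{
    "settings" : {
        "similarity" : {
            "my_bm25" : { "type" : "BM25", "k1" : 1.2, "b" : 0.75 }
        }
    },
    "mappings" : {
        "doc" : {
            "properties" : {
                "title" : { "type" : "string", "similarity" : "my_bm25" }
            }
        }
    }
}'
```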
@@ -2,6 +2,7 @@
== Index Slow Log

[float]
[[search-slow-log]]
=== Search Slow Log

Shard level slow search log allows to log slow search (query and fetch

@@ -55,6 +56,7 @@ index_search_slow_log_file:
--------------------------------------------------

[float]
[[index-slow-log]]
=== Index Slow log

The indexing slow log, similar in functionality to the search slow

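A sketch of slow log thresholds as they might appear in `elasticsearch.yml` (the exact setting names and values here are assumptions based on the surrounding text, not verbatim from it):

```yaml
# Log queries slower than 10s at warn, fetches slower than 500ms at debug,
# and indexing operations slower than 5s at info.
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.fetch.debug: 500ms
index.indexing.slowlog.threshold.index.info: 5s
```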
@@ -20,6 +20,7 @@ own consequences) for storing the index in memory.

[float]
[[throttling]]
=== Store Level Throttling

The way Lucene, the IR library elasticsearch uses under the covers,

@@ -52,6 +53,7 @@ using the index update settings API dynamically.
The following sections list all the different storage types supported.

[float]
[[file-system]]
=== File System

File system based storage is the default storage used. There are

@@ -89,6 +91,7 @@ process equal to the size of the file being mapped. Before using this
class, be sure you have plenty of virtual address space.

[float]
[[memory]]
=== Memory

The `memory` type stores the index in main memory with the following

@@ -8,6 +8,7 @@ index settings, aliases, mappings, index templates
and warmers.

[float]
[[index-management]]
== Index management:

* <<indices-create-index>>

@@ -16,6 +17,7 @@ and warmers.
* <<indices-open-close>>

[float]
[[mapping-management]]
== Mapping management:

* <<indices-put-mapping>>

@@ -23,10 +25,12 @@ and warmers.
* <<indices-types-exists>>

[float]
[[alias-management]]
== Alias management:
* <<indices-aliases>>

[float]
[[index-settings]]
== Index settings:
* <<indices-update-settings>>
* <<indices-get-settings>>

@@ -35,12 +39,14 @@ and warmers.
* <<indices-warmers>>

[float]
[[monitoring]]
== Monitoring:
* <<indices-status>>
* <<indices-stats>>
* <<indices-segments>>

[float]
[[status-management]]
== Status management:
* <<indices-clearcache>>
* <<indices-refresh>>

@@ -66,6 +66,7 @@ curl -XPOST 'http://localhost:9200/_aliases' -d '
It is an error to index to an alias which points to more than one index.

[float]
[[filtered]]
=== Filtered Aliases

Aliases with filters provide an easy way to create different "views" of

@@ -90,6 +91,7 @@ curl -XPOST 'http://localhost:9200/_aliases' -d '
--------------------------------------------------

[float]
[[routing]]
==== Routing

It is possible to associate routing values with aliases. This feature

@@ -150,6 +152,7 @@ curl -XGET 'http://localhost:9200/alias2/_search?q=user:kimchy&routing=2,3'
--------------------------------------------------

[float]
[[adding]]
=== Add a single index alias

There is also an api to add a single index alias, with options:

@@ -187,6 +190,7 @@ curl -XPUT 'localhost:9200/users/_alias/user_12' -d '{
--------------------------------------------------

[float]
[[deleting]]
=== Delete a single index alias

The API to delete a single index alias has options:

@@ -204,6 +208,7 @@ curl -XDELETE 'localhost:9200/users/_alias/user_12'
--------------------------------------------------

[float]
[[retrieving]]
=== Retrieving existing aliases

The get index alias api allows to filter by

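A sketch tying the filtered-alias and routing features above together in one single-alias call (index, alias name, and field values are illustrative; a running node is assumed):

```shell
# Create an alias limited to one user's documents, with a fixed routing value.
curl -XPUT 'localhost:9200/users/_alias/user_12' -d '{
    "filter" : { "term" : { "user_id" : 12 } },
    "routing" : "12"
}'
```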
@@ -50,6 +50,7 @@ Also, the text can be provided as part of the request body, and not as a
parameter.

[float]
[[format]]
=== Format

By default, the format the tokens are returned in is json and its

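A sketch of requesting an alternate output format from the analyze API, as discussed above (the analyzer choice and sample text are illustrative; a running node is assumed):

```shell
# Analyze text with the standard analyzer, asking for the text format.
curl -XGET 'localhost:9200/_analyze?analyzer=standard&format=text' -d 'this is a test'
```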
@@ -50,6 +50,7 @@ _Note you do not have to explicitly specify `index` section inside
`settings` section._

[float]
[[mappings]]
=== Mappings

The create index API allows to provide a set of one or more mappings:

@@ -72,6 +73,7 @@ curl -XPOST localhost:9200/test -d '{
--------------------------------------------------

[float]
[[settings]]
=== Index Settings

For more information regarding all the different index level settings

@@ -13,6 +13,7 @@ $ curl -XPOST 'http://localhost:9200/twitter/_optimize'
--------------------------------------------------

[float]
[[parameters]]
=== Request Parameters

The optimize API accepts the following request parameters:

@@ -36,6 +37,7 @@ to `true`. Note, a merge can potentially be a very heavy operation, so
it might make sense to run it set to `false`.

[float]
[[multi-index]]
=== Multi Index

The optimize API can be applied to more than one index with a single

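A sketch of the multi index form described above (index names are illustrative; a running node is assumed):

```shell
# Optimize two indices down to a single segment each, or all indices at once.
curl -XPOST 'http://localhost:9200/kimchy,elasticsearch/_optimize?max_num_segments=1'
curl -XPOST 'http://localhost:9200/_optimize'
```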
@@ -26,6 +26,7 @@ More information on how to define type mappings can be found in the
<<mapping,mapping>> section.

[float]
[[merging-conflicts]]
=== Merging & Conflicts

When a mapping already exists under the given type, the two

@@ -40,6 +41,7 @@ conflict. New mapping definitions can be added to object types, and core
type mapping can be upgraded to `multi_field` type.

[float]
[[multi-index]]
=== Multi Index

The put mapping API can be applied to more than one index with a single

@@ -28,6 +28,7 @@ The settings and mappings will be applied to any index name that matches
the `te*` template.

[float]
[[delete]]
=== Deleting a Template

Index templates are identified by a name (in the above case

@@ -39,6 +40,7 @@ curl -XDELETE localhost:9200/_template/template_1
--------------------------------------------------

[float]
[[getting]]
=== GETting templates

Index templates are identified by a name (in the above case

@@ -70,6 +72,7 @@ curl -XGET localhost:9200/_template/

[float]
[[multiple-templates]]
=== Multiple Template Matching

Multiple index templates can potentially match an index, in this case,

@@ -118,6 +121,7 @@ object/property based mappings can easily be added/overridden on higher
order templates, with lower order templates providing the basis.

[float]
[[config]]
=== Config

Index templates can also be placed within the config location

@@ -149,6 +149,7 @@ settings API:
See <<indices-warmers>>. Defaults to `true`.

[float]
[[bulk]]
=== Bulk Indexing Usage

For example, the update settings API can be used to dynamically change

@@ -186,6 +187,7 @@ curl -XPOST 'http://localhost:9200/test/_optimize?max_num_segments=5'
--------------------------------------------------

[float]
[[analysis]]
=== Updating Index Analysis

It is also possible to define new <<analysis,analyzers>> for the index.

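The bulk indexing usage described above can be sketched as a disable/restore pair around the load (index name and interval are illustrative; a running node is assumed):

```shell
# Disable refresh before a large bulk load, then restore it afterwards.
curl -XPUT 'localhost:9200/test/_settings' -d '{
    "index" : { "refresh_interval" : "-1" }
}'
# ... run the bulk indexing here ...
curl -XPUT 'localhost:9200/test/_settings' -d '{
    "index" : { "refresh_interval" : "1s" }
}'
```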
@@ -18,6 +18,7 @@ registered warmers to make indexing faster and less expensive and then
enable it.

[float]
[[creation]]
=== Index Creation / Templates

Warmers can be registered when an index gets created, for example:

@@ -65,6 +66,7 @@ curl -XPUT localhost:9200/_template/template_1 -d '
--------------------------------------------------

[float]
[[adding]]
=== Put Warmer

Allows to put a warmup search request on a specific index (or indices),

@@ -111,6 +113,7 @@ curl -XPUT localhost:9200/test/type1/_warmer/warmer_1 -d '{
--------------------------------------------------

[float]
[[removing]]
=== Delete Warmer

Removing a warmer can be done against an index (or alias / indices)

@@ -130,6 +133,7 @@ curl -XDELETE localhost:9200/test/_warmer/
--------------------------------------------------

[float]
[[retrieving]]
=== GETting Warmer

Getting a warmer for specific index (or alias, or several indices) based

@@ -17,6 +17,7 @@ no performance overhead) and have sensible defaults. Only when the
defaults need to be overridden must a mapping definition be provided.

[float]
[[mapping-types]]
=== Mapping Types

Mapping types are a way to divide the documents in an index into logical

@@ -37,6 +38,7 @@ name usually ends up being a good indication to its "typeness" (e.g.
apply to the cross index case.

[float]
[[mapping-api]]
=== Mapping API

To create a mapping, you will need the <<indices-put-mapping,Put Mapping

@@ -44,6 +46,7 @@ API>>, or you can add multiple mappings when you <<indices-create-index,create a
index>>.

[float]
[[mapping-settings]]
=== Global Settings

The `index.mapping.ignore_malformed` global setting can be set on the

@@ -18,6 +18,7 @@ act as the one that converts back from milliseconds to a string
representation.

[float]
[[date-math]]
=== Date Math

The `date` type supports using date math expression when using it in a

@@ -37,6 +38,7 @@ inclusive, the rounding will properly be rounded to the ceiling instead
of flooring it.

[float]
[[built-in]]
=== Built In Formats

The following tables list all the default ISO formats supported:

@@ -191,6 +193,7 @@ year, and two digit day of month.
|=======================================================================

[float]
[[custom]]
=== Custom Format

Allows for a completely customizable date format explained

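A sketch of date math in a range query, as described above (index and field names are illustrative; a running node is assumed):

```shell
# Match documents from yesterday, using /d to round to day boundaries.
curl -XGET 'localhost:9200/twitter/_search' -d '{
    "query" : {
        "range" : {
            "postDate" : { "gte" : "now-1d/d", "lt" : "now/d" }
        }
    }
}'
```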
@@ -64,6 +64,7 @@ The `_all` fields allows for `store`, `term_vector` and `analyzer` (with
specific `index_analyzer` and `search_analyzer`) to be set.

[float]
[[highlighting]]
==== Highlighting

For any field to allow

@@ -22,6 +22,7 @@ example:
--------------------------------------------------

[float]
[[include-exclude]]
==== Includes / Excludes

Allow to specify paths in the source that would be included / excluded

@@ -39,6 +39,7 @@ Explicit mapping for the above JSON tweet can be:
--------------------------------------------------

[float]
[[string]]
==== String

The text based string type is the most basic type, and contains one or

@@ -149,6 +150,7 @@ the real string content that should eventually be indexed. The `_boost`
(or `boost`) key specifies the per field document boost (here 2.0).

[float]
[[number]]
==== Number

A number based type supporting `float`, `double`, `byte`, `short`,

@@ -211,6 +213,7 @@ defaults to `true` or to the parent `object` type setting.
|=======================================================================

[float]
[[date]]
==== Date

The date type is a special type which maps to JSON string type. It

@@ -275,6 +278,7 @@ defaults to `true` or to the parent `object` type setting.
|=======================================================================

[float]
[[boolean]]
==== Boolean

The boolean type maps to the JSON boolean type. It ends up storing

@@ -327,6 +331,7 @@ defaults to `true` or to the parent `object` type setting.
|=======================================================================

[float]
[[binary]]
==== Binary

The binary type is a base64 representation of binary data that can be

@@ -357,6 +362,7 @@ Defaults to the property/field name.
|=======================================================================

[float]
[[fielddata-filters]]
==== Fielddata filters

It is possible to control which field values are loaded into memory,

@@ -393,6 +399,7 @@ effect the next time the fielddata for a segment is loaded. Use the
to reload the fielddata using the new filters.

[float]
[[postings]]
==== Postings format

Posting formats define how fields are written into the index and how

@@ -455,6 +462,7 @@ custom postings format. See
information.

[float]
[[similarity]]
==== Similarity

Elasticsearch allows you to configure a similarity (scoring algorithm) per field.

@@ -2,6 +2,7 @@
== Cluster

[float]
[[shards-allocation]]
=== Shards Allocation

Shards allocation is the process of allocating shards to nodes. This can

@@ -61,6 +62,7 @@ The following settings may be used:
shard from a peer shard. Defaults to `3`.

[float]
[[allocation-awareness]]
=== Shard Allocation Awareness

Cluster allocation awareness allows to configure shard and replicas

@@ -106,6 +108,7 @@ cluster.routing.allocation.awareness.attributes: rack_id,zone
nodes that don't have values set for those attributes.

[float]
[[forced-awareness]]
=== Forced Awareness

Sometimes, we know in advance the number of values an awareness

@@ -143,6 +146,7 @@ have the same attribute values as the executing node.
The settings can be updated using the <<cluster-update-settings,cluster update settings API>> on a live cluster.

[float]
[[allocation-filtering]]
=== Shard Allocation Filtering

Allow to control allocation of indices on nodes based on include/exclude

@@ -12,6 +12,7 @@ communication between nodes is done using the
It is separated into several sub modules, which are explained below:

[float]
[[ping]]
==== Ping

This is the process where a node uses the discovery mechanisms to find

@@ -19,6 +20,7 @@ other nodes. There is support for both multicast and unicast based
discovery (can be used in conjunction as well).

[float]
[[multicast]]
===== Multicast

Multicast ping discovery of other nodes is done by sending one or more

@@ -42,6 +44,7 @@ will bind to all available network interfaces.
Multicast can be disabled by setting `multicast.enabled` to `false`.

[float]
[[unicast]]
===== Unicast

The unicast discovery allows to perform the discovery when multicast is

@@ -62,6 +65,7 @@ The unicast discovery uses the
perform the discovery.

[float]
[[master-election]]
==== Master Election

As part of the initial ping process a master of the cluster is either

@@ -81,6 +85,7 @@ within the cluster. Its recommended to set it to a higher value than 1
when running more than 2 nodes in the cluster.

[float]
[[fault-detection]]
==== Fault Detection

There are two fault detection processes running. The first is by the

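A sketch of a unicast-only discovery setup with master election safeguards, as might appear in `elasticsearch.yml` (host names and the node count behind the minimum-master value are illustrative assumptions):

```yaml
# Disable multicast and list known nodes explicitly; require 2 master-eligible
# nodes to agree before electing a master (for a 3-node cluster).
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300"]
discovery.zen.minimum_master_nodes: 2
```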
@@ -22,6 +22,7 @@ The default gateway used is the
<<modules-gateway-local,local>> gateway.

[float]
[[recover-after]]
=== Recovery After Nodes / Time

In many cases, the actual cluster meta data should only be recovered

@ -5,6 +5,7 @@ The indices module allow to control settings that are globally managed

for all indices.

[float]
[[buffer]]
=== Indexing Buffer

The indexing buffer setting allows to control how much memory will be

@ -23,6 +24,7 @@ lower limit for the memory allocated per shard for its own indexing

buffer. It defaults to `4mb`.

[float]
[[ttl]]
=== TTL interval

You can dynamically set the `indices.ttl.interval` allows to set how

@ -35,6 +37,7 @@ The deletion orders are processed by bulk. You can set

See also <<mapping-ttl-field>>.

[float]
[[recovery]]
=== Recovery

The following settings can be set to manage recovery policy:

@ -59,6 +62,7 @@ The following settings can be set to manage recovery policy:

defaults to `20mb`.

[float]
[[throttling]]
=== Store level throttling

The following settings can be set to control store throttling:

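The indexing-buffer, TTL, recovery, and store-throttling sections anchored above all describe node-level settings; a combined sketch of their common knobs (the values shown mirror the documented defaults where known, and are illustrative otherwise):

[source,yaml]
--------------------------------------------------
# Share 10% of the heap across the indexing buffers of active shards
indices.memory.index_buffer_size: 10%

# How often the purger thread runs to delete expired (TTL) documents
indices.ttl.interval: 60s

# Cap peer-recovery traffic per node
indices.recovery.max_bytes_per_sec: 20mb

# Throttle store I/O caused by merges at the node level
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 20mb
--------------------------------------------------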
@ -62,6 +62,7 @@ as valid network host settings:

|==================================================================

[float]
[[tcp-settings]]
=== TCP Settings

Any component that uses TCP (like the HTTP, Transport and Memcached)

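The "TCP Settings" section anchored above groups the shared `network.tcp.*` options; a sketch, assuming the buffer sizes shown are illustrative values rather than defaults:

[source,yaml]
--------------------------------------------------
network.tcp.no_delay: true
network.tcp.keep_alive: true
network.tcp.send_buffer_size: 8kb      # illustrative, not a default
network.tcp.receive_buffer_size: 8kb   # illustrative, not a default
--------------------------------------------------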
@ -10,6 +10,7 @@ analyzers (in a more built in fashion), native scripts, custom discovery

and more.

[float]
[[installing]]
==== Installing plugins

Installing plugins can either be done manually by placing them under the

@ -44,6 +45,7 @@ bin/plugin --url file://path/to/plugin --install plugin-name

You can run `bin/plugin -h`.

[float]
[[site-plugins]]
==== Site Plugins

Plugins can have "sites" in them, any plugin that exists under the

@ -123,9 +125,11 @@ plugin --remove head --silent

[float]
[[known-plugins]]
=== Known Plugins

[float]
[[analysis]]
==== Analysis Plugins

.Supported by Elasticsearch

@ -147,6 +151,7 @@ plugin --remove head --silent

* https://github.com/medcl/elasticsearch-analysis-string2int[String2Integer Analysis Plugin] (by Medcl)

[float]
[[river]]
==== River Plugins

.Supported by Elasticsearch

@ -178,6 +183,7 @@ plugin --remove head --silent

* https://github.com/plombard/SubversionRiver[Subversion River Plugin] (by Pascal Lombard)

[float]
[[transport]]
==== Transport Plugins

.Supported by Elasticsearch

@ -190,6 +196,7 @@ plugin --remove head --silent

* https://github.com/sonian/elasticsearch-jetty[Jetty HTTP transport plugin] (by Sonian Inc.)

[float]
[[scripting]]
==== Scripting Plugins

.Supported by Elasticsearch

@ -199,6 +206,7 @@ plugin --remove head --silent

* https://github.com/elasticsearch/elasticsearch-lang-python[Python language Plugin]

[float]
[[site]]
==== Site Plugins

.Supported by the community

@ -211,6 +219,7 @@ plugin --remove head --silent

* https://github.com/polyfractal/elasticsearch-segmentspy[SegmentSpy Plugin] (by Zachary Tong)

[float]
[[misc]]
==== Misc Plugins

.Supported by Elasticsearch

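The plugin hunks above quote the `bin/plugin` tool; for reference, a typical install/remove round-trip with the `head` site plugin (matching the `plugin --remove head` context shown in the hunk headers):

[source,sh]
--------------------------------------------------
# Install a site plugin from GitHub (user/repo form), then remove it
bin/plugin -install mobz/elasticsearch-head
bin/plugin --remove head
--------------------------------------------------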
@ -51,6 +51,7 @@ NOTE: you can update threadpool settings live using

[float]
[[types]]
=== Thread pool types

The following are the types of thread pools that can be used and their

@ -96,6 +97,7 @@ threadpool:

[float]
[[processors]]
=== Processors setting

The number of processors is automatically detected, and the thread pool
settings are automatically set based on it. Sometimes, the number of processors

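The thread-pool hunks above reference the `threadpool:` block and the processors setting; a sketch of both (pool sizes are illustrative):

[source,yaml]
--------------------------------------------------
threadpool:
  search:
    type: fixed
    size: 20
    queue_size: 100

# Override the auto-detected processor count used to size the pools
processors: 4
--------------------------------------------------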
@ -7,6 +7,7 @@ As a general rule, filters should be used instead of queries:

* for queries on exact values

[float]
[[caching]]
=== Filters and Caching

Filters can be a great candidate for caching. Caching the result of a

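The "Filters and Caching" section anchored above describes caching filter results; a sketch of a filtered query with an explicitly cached term filter (field name and value are hypothetical):

[source,js]
--------------------------------------------------
{
  "query": {
    "filtered": {
      "query":  { "match_all": {} },
      "filter": {
        "term": { "status": "published", "_cache": true }
      }
    }
  }
}
--------------------------------------------------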
@ -4,6 +4,7 @@

[partintro]
--

["float",id="search-multi-index"]
[[multiple-indices]]
== Multiple Indices

All search APIs support execution across multiple indices, using simple

@ -16,6 +17,7 @@ All multi indices API support the `ignore_indices` option. Setting it to

execution. By default, when its not set, the request will fail.

[float]
[[routing]]
== Routing

When executing a search, it will be broadcasted to all the index/indices

@ -61,6 +63,7 @@ separated string. This will result in hitting the relevant shards where

the routing values match to.

[float]
[[stats-groups]]
== Stats Groups

A search can be associated with stats groups, which maintains a

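The multi-index and routing sections anchored above combine naturally in one request; a sketch (index names, routing value, and query are hypothetical):

[source,sh]
--------------------------------------------------
# Search two indices at once, restricting execution to the shards
# that the routing value `user1` hashes to
curl -XGET 'localhost:9200/index1,index2/_search?routing=user1&q=tag:something'
--------------------------------------------------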
@ -37,6 +37,7 @@ Script fields can also be automatically detected and used as fields, so

things like `_source.obj1.obj2` can be used, though not recommended, as
`obj1.obj2` will work as well.

[[partial]]
==== Partial

When loading data from `_source`, partial fields can be used to use

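The "Partial" section anchored above loads selected parts of `_source` via wildcard patterns; a sketch using the `obj1.obj2` paths mentioned in the hunk context:

[source,js]
--------------------------------------------------
{
  "query": { "match_all": {} },
  "partial_fields": {
    "partial1": {
      "include": "obj1.obj2.*",
      "exclude": "obj1.obj2.secret_*"
    }
  }
}
--------------------------------------------------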
@ -54,6 +54,7 @@ The field name supports wildcard notation, for example,

using `comment_*` which will cause all fields that match the expression
to be highlighted.

[[tags]]
==== Highlighting Tags

By default, the highlighting will wrap highlighted text in `<em>` and

@ -167,6 +168,7 @@ is required. Note that `fragment_size` is ignored in this case.

When using `fast-vector-highlighter` one can use `fragment_offset`
parameter to control the margin to start highlighting from.

[[settings]]
==== Global Settings

Highlighting settings can be set on a global level and then overridden

@ -190,6 +192,7 @@ at the field level.

}
--------------------------------------------------

[[field-match]]
==== Require Field Match

`require_field_match` can be set to `true` which will cause a field to

@ -197,6 +200,7 @@ be highlighted only if a query matched that field. `false` means that

terms are highlighted on all requested fields regardless if the query
matches specifically on them.

[[boundary-characters]]
==== Boundary Characters

When highlighting a field that is mapped with term vectors,

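The highlighting sections anchored above (tags, `require_field_match`) fit in one request body; a sketch with hypothetical field and query values:

[source,js]
--------------------------------------------------
{
  "query": { "match": { "content": "kimchy" } },
  "highlight": {
    "pre_tags":  ["<b>"],
    "post_tags": ["</b>"],
    "require_field_match": true,
    "fields": {
      "content": {}
    }
  }
}
--------------------------------------------------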
@ -34,6 +34,7 @@ to execute on a *per search request* basis. The type can be configured

by setting the *search_type* parameter in the query string. The types
are:

[[query-and-fetch]]
==== Query And Fetch

Parameter value: *query_and_fetch*.

@ -44,6 +45,7 @@ shard returns `size` results. Since each shard already returns `size`

hits, this type actually returns `size` times `number of shards` results
back to the caller.

[[query-then-fetch]]
==== Query Then Fetch

Parameter value: *query_then_fetch*.

@ -59,6 +61,7 @@ groups).

NOTE: This is the default setting, if you do not specify a `search_type`
in your request.

[[dfs-query-and-fetch]]
==== Dfs, Query And Fetch

Parameter value: *dfs_query_and_fetch*.

@ -67,6 +70,7 @@ Same as "Query And Fetch", except for an initial scatter phase which

goes and computes the distributed term frequencies for more accurate
scoring.

[[dfs-query-then-fetch]]
==== Dfs, Query Then Fetch

Parameter value: *dfs_query_then_fetch*.

@ -75,6 +79,7 @@ Same as "Query Then Fetch", except for an initial scatter phase which

goes and computes the distributed term frequencies for more accurate
scoring.

[[count]]
==== Count

Parameter value: *count*.

@ -84,6 +89,7 @@ request without any docs (represented in `total_hits`), and possibly,

including facets as well. In general, this is preferable to the `count`
API as it provides more options.

[[scan]]
==== Scan

Parameter value: *scan*.

@ -128,6 +134,7 @@ returned. The total_hits will be maintained between scroll requests.

Note, scan search type does not support sorting (either on score or a
field) or faceting.

[[clear-scroll]]
===== Clear scroll api

added[0.90.4]

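The search-type sections anchored above are selected via the `search_type` query-string parameter; a sketch of the `count` and `scan` variants (index and query are hypothetical):

[source,sh]
--------------------------------------------------
# Only count matching documents (and optionally compute facets)
curl -XGET 'localhost:9200/twitter/_search?search_type=count' -d '{
  "query": { "term": { "user": "kimchy" } }
}'

# Open a scan; subsequent _search/scroll calls use the returned scroll id
curl -XGET 'localhost:9200/twitter/_search?search_type=scan&scroll=10m&size=50' -d '{
  "query": { "match_all": {} }
}'
--------------------------------------------------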
@ -107,6 +107,7 @@ term suggester's score is based on the edit distance.

--------------------------------------------------

[float]
[[global-suggest]]
=== Global suggest text

To avoid repetition of the suggest text, it is possible to define a

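The "Global suggest text" section anchored above shares one `text` across several suggestions; a sketch of the request shape (field names and text are hypothetical):

[source,js]
--------------------------------------------------
{
  "suggest": {
    "text": "devloping distibutd saerch engies",
    "my-suggest-1": {
      "term": { "field": "body" }
    },
    "my-suggest-2": {
      "term": { "field": "title" }
    }
  }
}
--------------------------------------------------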
@ -27,6 +27,7 @@ documents. The `completion` suggester circumvents this by storing the

FST as part of your index during index time. This allows for really fast
loads and executions.

[[mapping]]
==== Mapping

In order to use this feature, you have to specify a special mapping for

@ -86,6 +87,7 @@ Mapping supports the following parameters:

by the default value since prefix completions hardly grow beyond prefixes longer
than a handful of characters.

[[indexing]]
==== Indexing

[source,js]

@ -134,6 +136,7 @@ not be able to use several inputs, an output, payloads or weights.

}
--------------------------------------------------

[[querying]]
==== Querying

Suggesting works as usual, except that you have to specify the suggest

@ -175,6 +178,7 @@ indexed suggestion, if configured, otherwise the matched part of the

`input` field.

[[fuzzy]]
==== Fuzzy queries

The completion suggester also supports fuzzy queries - this means,

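The "Mapping" section anchored above requires a `completion`-typed field; a sketch of such a mapping (type and field names are hypothetical):

[source,js]
--------------------------------------------------
{
  "song": {
    "properties": {
      "name": { "type": "string" },
      "suggest": {
        "type": "completion",
        "index_analyzer": "simple",
        "search_analyzer": "simple",
        "payloads": true
      }
    }
  }
}
--------------------------------------------------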
@ -24,9 +24,11 @@ It is recommended to set the min and max memory to the same value, and

enable <<setup-configuration-memory,`mlockall`>>.

[float]
[[system]]
=== System Configuration

[float]
[[file-descriptors]]
==== File Descriptors

Make sure to increase the number of open files descriptors on the

@ -38,6 +40,7 @@ In order to test how many open files the process can open, start it with

files the process can open on startup.

["float",id="setup-configuration-memory"]
[[memory]]
==== Memory Settings

There is an option to use

@ -57,6 +60,7 @@ session to exit if it fails to allocate the memory (because not enough

memory is available on the machine).

[float]
[[settings]]
=== Elasticsearch Settings

*elasticsearch* configuration files can be found under `ES_HOME/config`

@ -82,6 +86,7 @@ for configuring the ElasticSearch logging.

[float]
[[paths]]
==== Paths

In production use, you will almost certainly want to change paths for

@ -95,6 +100,7 @@ path:

--------------------------------------------------

[float]
[[cluster-name]]
==== Cluster name

Also, don't forget to give your production cluster a name, which is used

@ -107,6 +113,7 @@ cluster:

--------------------------------------------------

[float]
[[node-name]]
==== Node name

You may also want to change the default node name for each node to

@ -128,6 +135,7 @@ simply rename the `elasticsearch.yml` file to `elasticsearch.json` and

add:

[float]
[[styles]]
==== Configuration styles

[source,js]

@ -173,6 +181,7 @@ $ elasticsearch -f -Des.config=/path/to/config/file

--------------------------------------------------

[float]
[[index-settings]]
=== Index Settings

Indices created within the cluster can provide their own settings. For

@ -215,6 +224,7 @@ All of the index level configuration can be found within each

<<index-modules,index module>>.

[float]
[[logging]]
=== Logging

ElasticSearch uses an internal logging abstraction and comes, out of the

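The setup sections anchored above (memory locking, cluster name, node name, paths) all live in `elasticsearch.yml`; a minimal production-style sketch with hypothetical names and paths:

[source,yaml]
--------------------------------------------------
cluster.name: my-production-cluster
node.name: "node-1"

# Lock the process address space into RAM (pair with ES_HEAP_SIZE)
bootstrap.mlockall: true

path.data: /var/data/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------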
@ -14,6 +14,7 @@ cluster. For example, one can run a river called `my_river` with type

`dummy`, and another river called `my_other_river` with type `dummy`.

[[how-it-works]]
== How it Works

A river instance (and its name) is a type within the `_river` index. All

@ -44,6 +45,7 @@ curl -XDELETE 'localhost:9200/_river/my_river/'

--------------------------------------------------

[[allocation]]
== Cluster Allocation

Rivers are singletons within the cluster. They get allocated

@ -57,6 +59,7 @@ river names or types controlling the rivers allowed to run on it. For

example: `my_river1,my_river2`, or `dummy,twitter`.

[[status]]
== Status

Each river (regardless of the implementation) exposes a high level

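The "How it Works" and "Status" sections anchored above operate on the `_river` index; a sketch matching the `my_river` example quoted in the hunk context:

[source,sh]
--------------------------------------------------
# Register a river by indexing its _meta document, then check its status
curl -XPUT 'localhost:9200/_river/my_river/_meta' -d '{ "type": "dummy" }'
curl -XGET 'localhost:9200/_river/my_river/_status'
--------------------------------------------------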