[DOCS] Add anchors for Asciidoctor migration (#41648)

This commit is contained in:
James Rodewig 2019-04-30 10:19:09 -04:00
parent c26b8eb4de
commit 53702efddd
64 changed files with 143 additions and 11 deletions

View File

@ -186,6 +186,7 @@ Please note that Elasticsearch will ignore the choice of execution hint if it is
==== Limitations
[[div-sampler-breadth-first-nested-agg]]
===== Cannot be nested under `breadth_first` aggregations
Being a quality-based filter, the `diversified_sampler` aggregation needs access to the relevance score produced for each document.
It therefore cannot be nested under a `terms` aggregation that has its `collect_mode` switched from the default `depth_first` mode to `breadth_first`, as this discards scores.
@ -194,6 +195,7 @@ In this situation an error will be thrown.
===== Limited de-dup logic.
The de-duplication logic applies only at a shard level, so it will not apply across shards.
[[spec-syntax-geo-date-fields]]
===== No specialized syntax for geo/date fields
Currently, the syntax for defining the diversifying values is a choice of `field` or
`script` - there is no added syntactic sugar for expressing geo or date units such as "7d" (7

View File

@ -118,6 +118,7 @@ request. The response for this example would be:
// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards/]
// TESTRESPONSE[s/"hits": \.\.\./"hits": $body.hits/]
[[other-bucket]]
==== `Other` Bucket
The `other_bucket` parameter can be set to add a bucket to the response which will contain all documents that do

View File

@ -155,6 +155,7 @@ The default value is 100.
==== Limitations
[[sampler-breadth-first-nested-agg]]
===== Cannot be nested under `breadth_first` aggregations
Being a quality-based filter, the `sampler` aggregation needs access to the relevance score produced for each document.
It therefore cannot be nested under a `terms` aggregation that has its `collect_mode` switched from the default `depth_first` mode to `breadth_first`, as this discards scores.

View File

@ -436,6 +436,7 @@ Available parameters in the script are
`_subset_size`:: Number of documents in the subset.
`_superset_size`:: Number of documents in the superset.
[[sig-terms-shard-size]]
===== Size & Shard Size
The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By

View File

@ -92,7 +92,7 @@ It only occurs 5 times in our index as a whole (see the `bg_count`) and yet 4 of
were lucky enough to appear in our 100 document sample of "bird flu" results. That suggests
a significant word and one which the user can potentially add to their search.
[[filter-duplicate-text-noisy-data]]
==== Dealing with noisy data using `filter_duplicate_text`
Free-text fields often contain a mix of original content and mechanical copies of text (cut-and-paste biographies, email reply chains,
retweets, boilerplate headers/footers, page navigation menus, sidebar news links, copyright notices, standard disclaimers, addresses).
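For illustration, assuming a hypothetical `news` index with a free-text `content` field, de-duplication can be switched on directly in the aggregation:
[source,js]
--------------------------------------------------
GET news/_search
{
  "query": { "match": { "content": "elasticsearch" } },
  "aggs": {
    "sample": {
      "sampler": { "shard_size": 100 },
      "aggs": {
        "keywords": {
          "significant_text": {
            "field": "content",
            "filter_duplicate_text": true
          }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE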
@ -353,7 +353,7 @@ However, the `size` and `shard size` settings covered in the next section provid
This aggregation supports the same scoring heuristics (JLH, mutual_information, gnd, chi_square, etc.) as the <<search-aggregations-bucket-significantterms-aggregation,significant terms>> aggregation
[[sig-text-shard-size]]
===== Size & Shard Size
The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By

View File

@ -12,7 +12,9 @@ As a formula, a weighted average is the `∑(value * weight) / ∑(weight)`
A regular average can be thought of as a weighted average where every value has an implicit weight of `1`.
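For illustration, assuming a hypothetical `exams` index with numeric `grade` and `weight` fields, a minimal `weighted_avg` request could look like this:
[source,js]
--------------------------------------------------
POST /exams/_search
{
  "size": 0,
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight" }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE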
[[weighted-avg-params]]
.`weighted_avg` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`value` | The configuration for the field or script that provides the values |Required |
@ -23,7 +25,9 @@ A regular average can be thought of as a weighted average where every value has
The `value` and `weight` objects have per-field specific configuration:
[[value-params]]
.`value` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`field` | The field that values should be extracted from |Required |
@ -31,7 +35,9 @@ The `value` and `weight` objects have per-field specific configuration:
|`script` | A script which provides the values for the document. This is mutually exclusive with `field` |Optional
|===
[[weight-params]]
.`weight` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`field` | The field that weights should be extracted from |Required |

View File

@ -4,6 +4,7 @@
A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation.
The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.
[[avg-bucket-agg-syntax]]
==== Syntax
An `avg_bucket` aggregation looks like this in isolation:
@ -18,7 +19,9 @@ An `avg_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
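As a sketch of how it is typically embedded, assuming hypothetical `date` and `price` fields, an `avg_bucket` aggregation references a metric inside a sibling multi-bucket aggregation via `buckets_path`:
[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } }
      }
    },
    "avg_monthly_sales": {
      "avg_bucket": { "buckets_path": "sales_per_month>sales" }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE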
[[avg-bucket-params]]
.`avg_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the average for (see <<buckets-path-syntax>> for more

View File

@ -4,6 +4,7 @@
A parent pipeline aggregation that executes a script which can perform per-bucket computations on specified metrics
in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value.
[[bucket-script-agg-syntax]]
==== Syntax
A `bucket_script` aggregation looks like this in isolation:
@ -24,8 +25,9 @@ A `bucket_script` aggregation looks like this in isolation:
<1> Here, `my_var1` is the name of the variable for this bucket's path to use in the script, and `the_sum` is the path to
the metrics to use for that variable.
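Put together, and assuming hypothetical `timestamp` and `price` fields, a `bucket_script` that squares a sibling metric could look like this:
[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "histo": {
      "date_histogram": { "field": "timestamp", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "sum_squared": {
          "bucket_script": {
            "buckets_path": { "my_var1": "the_sum" },
            "script": "params.my_var1 * params.my_var1"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE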
[[bucket-script-params]]
.`bucket_script` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`script` |The script to run for this aggregation. The script can be inline, file or indexed. (see <<modules-scripting>>

View File

@ -29,8 +29,9 @@ A `bucket_selector` aggregation looks like this in isolation:
<1> Here, `my_var1` is the name of the variable for this bucket's path to use in the script, and `the_sum` is the path to
the metrics to use for that variable.
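Analogously, and again assuming hypothetical `timestamp` and `price` fields, a `bucket_selector` can drop buckets whose metric fails a condition:
[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "histo": {
      "date_histogram": { "field": "timestamp", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "only_big_buckets": {
          "bucket_selector": {
            "buckets_path": { "my_var1": "the_sum" },
            "script": "params.my_var1 > 100"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE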
[[bucket-selector-params]]
.`bucket_selector` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`script` |The script to run for this aggregation. The script can be inline, file or indexed. (see <<modules-scripting>>

View File

@ -33,7 +33,9 @@ A `bucket_sort` aggregation looks like this in isolation:
<1> Here, `sort_field_1` is the bucket path to the variable to be used as the primary sort and its order
is ascending.
[[bucket-sort-params]]
.`bucket_sort` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`sort` |The list of fields to sort on. See <<search-request-sort,`sort`>> for more details. |Optional |

View File

@ -19,7 +19,9 @@ A `cumulative_sum` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[cumulative-sum-params]]
.`cumulative_sum` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the cumulative sum for (see <<buckets-path-syntax>> for more

View File

@ -17,7 +17,9 @@ A `derivative` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[derivative-params]]
.`derivative` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the derivative for (see <<buckets-path-syntax>> for more

View File

@ -20,7 +20,9 @@ A `extended_stats_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[extended-stats-bucket-params]]
.`extended_stats_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to calculate stats for (see <<buckets-path-syntax>> for more

View File

@ -19,7 +19,9 @@ A `max_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[max-bucket-params]]
.`max_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the maximum for (see <<buckets-path-syntax>> for more

View File

@ -19,7 +19,9 @@ A `min_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[min-bucket-params]]
.`min_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the minimum for (see <<buckets-path-syntax>> for more

View File

@ -24,7 +24,9 @@ A `moving_fn` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[moving-avg-params]]
.`moving_avg` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |Path to the metric of interest (see <<buckets-path-syntax, `buckets_path` Syntax>> for more details) |Required |
@ -188,7 +190,9 @@ The functions are available from the `MovingFunctions` namespace. E.g. `MovingF
This function accepts a collection of doubles and returns the maximum value in that window. `null` and `NaN` values are ignored; the maximum
is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result.
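As a sketch, assuming hypothetical `timestamp` and `price` fields, the function is referenced from a `moving_fn` script like this:
[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "timestamp", "interval": "1d" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_moving_max": {
          "moving_fn": {
            "buckets_path": "the_sum",
            "window": 10,
            "script": "MovingFunctions.max(values)"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE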
[[max-params]]
.`max(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to find the maximum
@ -229,7 +233,9 @@ POST /_search
This function accepts a collection of doubles and returns the minimum value in that window. `null` and `NaN` values are ignored; the minimum
is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result.
[[min-params]]
.`min(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to find the minimum
@ -270,7 +276,9 @@ POST /_search
This function accepts a collection of doubles and returns the sum of the values in that window. `null` and `NaN` values are ignored;
the sum is only calculated over the real values. If the window is empty, or all values are `null`/`NaN`, `0.0` is returned as the result.
[[sum-params]]
.`sum(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to find the sum of
@ -312,7 +320,9 @@ This function accepts a collection of doubles and average, then returns the stan
`null` and `NaN` values are ignored; the sum is only calculated over the real values. If the window is empty, or all values are
`null`/`NaN`, `0.0` is returned as the result.
[[stddev-params]]
.`stdDev(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to find the standard deviation of
@ -363,7 +373,9 @@ the values from a `simple` moving average tend to "lag" behind the real data.
`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is the count of non-`null`, non-`NaN`
values.
[[unweightedavg-params]]
.`unweightedAvg(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to average
@ -407,7 +419,9 @@ the "lag" behind the data's mean, since older points have less influence.
If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the result.
[[linearweightedavg-params]]
.`linearWeightedAvg(double[] values)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to average
@ -456,7 +470,9 @@ moving average. This tends to make the moving average track the data more close
`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is the count of non-`null`, non-`NaN`
values.
[[ewma-params]]
.`ewma(double[] values, double alpha)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to average
@ -511,7 +527,9 @@ Values are produced by multiplying the level and trend components.
`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is the count of non-`null`, non-`NaN`
values.
[[holt-params]]
.`holt(double[] values, double alpha)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to smooth
@ -572,7 +590,9 @@ for future enhancements.
`null`/`NaN`, `NaN` is returned as the result. This means that the count used in the average calculation is the count of non-`null`, non-`NaN`
values.
[[holtwinters-params]]
.`holtWinters(double[] values, double alpha)` Parameters
[options="header"]
|===
|Parameter Name |Description
|`values` |The window of values to smooth

View File

@ -18,7 +18,9 @@ A `percentiles_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[percentiles-bucket-params]]
.`percentiles_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the percentiles for (see <<buckets-path-syntax>> for more

View File

@ -46,7 +46,9 @@ A `serial_diff` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[serial-diff-params]]
.`serial_diff` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |Path to the metric of interest (see <<buckets-path-syntax, `buckets_path` Syntax>> for more details) |Required |

View File

@ -18,7 +18,9 @@ A `stats_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[stats-bucket-params]]
.`stats_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to calculate stats for (see <<buckets-path-syntax>> for more

View File

@ -18,7 +18,9 @@ A `sum_bucket` aggregation looks like this in isolation:
--------------------------------------------------
// NOTCONSOLE
[[sum-bucket-params]]
.`sum_bucket` Parameters
[options="header"]
|===
|Parameter Name |Description |Required |Default Value
|`buckets_path` |The path to the buckets we wish to find the sum for (see <<buckets-path-syntax>> for more

View File

@ -93,6 +93,7 @@ set to `false` no mapping would get added as when `expand=false` the target mapp
stop word.
[float]
[[synonym-graph-tokenizer-ignore_case-deprecated]]
==== `tokenizer` and `ignore_case` are deprecated
The `tokenizer` parameter controls the tokenizers that will be used to

View File

@ -83,6 +83,7 @@ stop word.
[float]
[[synonym-tokenizer-ignore_case-deprecated]]
==== `tokenizer` and `ignore_case` are deprecated
The `tokenizer` parameter controls the tokenizers that will be used to

View File

@ -63,6 +63,7 @@ general, if you have a running system you don't wish to disturb then
`refresh=wait_for` is a smaller modification.
[float]
[[refresh_wait_for-force-refresh]]
=== `refresh=wait_for` Can Force a Refresh
If a `refresh=wait_for` request comes in when there are already

View File

@ -243,6 +243,7 @@ POST test/_update/1
// TEST[continued]
[float]
[[scripted_upsert]]
==== `scripted_upsert`
If you would like your script to run regardless of whether the document exists
@ -272,6 +273,7 @@ POST sessions/_update/dh3sgudg8gsrgl
// TEST[continued]
[float]
[[doc_as_upsert]]
==== `doc_as_upsert`
Instead of sending a partial `doc` plus an `upsert` doc, setting

View File

@ -505,6 +505,7 @@ This REST access pattern is so pervasive throughout all the API commands that if
Elasticsearch provides data manipulation and search capabilities in near real time. By default, you can expect a one second delay (refresh interval) from the time you index/update/delete your data until the time that it appears in your search results. This is an important distinction from other platforms like SQL wherein data is immediately available after a transaction is completed.
[float]
[[indexing-replacing-documents]]
=== Indexing/Replacing Documents
We've previously seen how we can index a single document. Let's recall that command again:
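A representative form of that command (index name, ID, and document body are illustrative):
[source,js]
--------------------------------------------------
PUT /customer/_doc/1?pretty
{
  "name": "John Doe"
}
--------------------------------------------------
// NOTCONSOLE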

View File

@ -91,6 +91,7 @@ PUT index
// CONSOLE
[float]
[[default-dynamic-string-mapping]]
=== Don't use default dynamic string mappings
The default <<dynamic-mapping,dynamic string mappings>> will index string fields
@ -133,11 +134,13 @@ Larger shards are going to be more efficient at storing data. To increase the si
Keep in mind that large shard sizes come with drawbacks, such as long full recovery times.
[float]
[[disable-source]]
=== Disable `_source`
The <<mapping-source-field,`_source`>> field stores the original JSON body of the document. If you don't need access to it, you can disable it. However, APIs that need access to `_source`, such as update and reindex, won't work.
[float]
[[best-compression]]
=== Use `best_compression`
The `_source` and stored fields can easily take a non-negligible amount of disk

View File

@ -17,6 +17,7 @@ it is advisable to avoid going beyond a couple tens of megabytes per request
even if larger requests seem to perform better.
[float]
[[multiple-workers-threads]]
=== Use multiple workers/threads to send data to Elasticsearch
A single thread sending bulk requests is unlikely to be able to max out the

View File

@ -161,6 +161,7 @@ GET index/_search
// TEST[continued]
[float]
[[map-ids-as-keyword]]
=== Consider mapping identifiers as `keyword`
The fact that some data is numeric does not mean it should always be mapped as a
@ -354,6 +355,7 @@ conjunctions faster at the cost of slightly slower indexing. Read more about it
in the <<index-modules-index-sorting-conjunctions,index sorting documentation>>.
[float]
[[preference-cache-optimization]]
=== Use `preference` to optimize cache utilization
There are multiple caches that can help with search performance, such as the

View File

@ -68,8 +68,9 @@ If the request does not encounter errors, you receive the following result:
The operating modes of ILM:
[[ilm-operating-modes]]
.ILM Operating Modes
[options="header"]
|===
|Name |Description
|RUNNING |Normal operation where all policies are executed as normal

View File

@ -28,7 +28,9 @@ new index.
The rollover action takes the following parameters:
[[rollover-action-params]]
.`rollover` Action Parameters
[options="header"]
|===
|Name |Description
|max_size |The maximum estimated size the primary shard of the index is allowed

View File

@ -277,6 +277,7 @@ Other index settings are available in index modules:
Control over the transaction log and background flush operations.
[float]
[[x-pack-index-settings]]
=== [xpack]#{xpack} index settings#
<<ilm-settings,{ilm-cap}>>::

View File

@ -26,7 +26,9 @@ will now have the rollover alias pointing to it as the write index with `is_writ
The available conditions are:
[[index-rollover-conditions]]
.`conditions` parameters
[options="header"]
|===
| Name | Description
| max_age | The maximum age of the index

View File

@ -50,6 +50,7 @@ Splitting works as follows:
had just been re-opened.
[float]
[[incremental-resharding]]
=== Why doesn't Elasticsearch support incremental resharding?
Going from `N` shards to `N+1` shards, also known as incremental resharding, is indeed a

View File

@ -78,12 +78,14 @@ include::common-options.asciidoc[]
}
--------------------------------------------------
// NOTCONSOLE
[[dissect-key-modifiers]]
==== Dissect key modifiers
Key modifiers can change the default behavior for dissection. Key modifiers may be found on the left or right
of the `%{keyname}`, always inside the `%{` and `}`. For example, `%{+keyname ->}` has the append and right padding
modifiers.
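As a sketch of where such a pattern lives, a key modifier is simply embedded in the `pattern` of a dissect processor (the field name and sample value here are hypothetical):
[source,js]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "dissect": {
          "field": "message",
          "pattern": "%{ts->} %{level}"
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "1998-08-10T17:15:42,466          WARN" } }
  ]
}
--------------------------------------------------
// NOTCONSOLE
Here the right padding modifier lets the run of spaces after the timestamp be skipped, so `level` is dissected as `WARN`.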
[[dissect-key-modifiers-table]]
.Dissect Key Modifiers
[options="header"]
|======
@ -132,6 +134,7 @@ Right padding modifier with empty key example
* level = WARN
|======
[[append-modifier]]
===== Append modifier (`+`)
[[dissect-modifier-append-key]]
Dissect supports appending two or more results together for the output.
@ -146,6 +149,7 @@ Append modifier example
* name = john jacob jingleheimer schmidt
|======
[[append-order-modifier]]
===== Append with order modifier (`+` and `/n`)
[[dissect-modifier-append-key-with-order]]
Dissect supports appending two or more results together for the output.
@ -160,6 +164,7 @@ Append with order modifier example
* name = schmidt,john,jingleheimer,jacob
|======
[[named-skip-key]]
===== Named skip key (`?`)
[[dissect-modifier-named-skip-key]]
Dissect supports ignoring matches in the final result. This can be done with an empty key `%{}`, but for readability
@ -174,6 +179,7 @@ Named skip key modifier example
* @timestamp = 30/Apr/1998:22:00:52 +0000
|======
[[reference-keys]]
===== Reference keys (`*` and `&`)
[[dissect-modifier-reference-keys]]
Dissect supports using parsed values as the key/value pairings for the structured content. Imagine a system that

View File

@ -286,6 +286,7 @@ PUT my_index
--------------------------------------------------
// CONSOLE
[[text-only-mappings-strings]]
===== `text`-only mappings for strings
In contrast to the previous example, if the only thing that you care about

View File

@ -11,6 +11,7 @@ Now the `_field_names` field only indexes the names of fields that have
or `norm` enabled the <<query-dsl-exists-query,`exists`>> query will still
be available but will not use the `_field_names` field.
[[disable-field-names]]
==== Disabling `_field_names`
Disabling `_field_names` is often not necessary because it no longer

View File

@ -6,6 +6,7 @@ at index time. The `_source` field itself is not indexed (and thus is not
searchable), but it is stored so that it can be returned when executing
_fetch_ requests, like <<docs-get,get>> or <<search-search,search>>.
[[disable-source-field]]
==== Disabling the `_source` field
Though very handy to have around, the source field does incur storage overhead

View File

@ -19,6 +19,7 @@ reading the entire inverted index for each segment from disk, inverting the
term ↔︎ document relationship, and storing the result in memory, in the JVM
heap.
[[fielddata-disabled-text-fields]]
==== Fielddata is disabled on `text` fields by default
Fielddata can consume a *lot* of heap space, especially when loading high
@ -75,6 +76,7 @@ PUT my_index
<1> Use the `my_field` field for searches.
<2> Use the `my_field.keyword` field for aggregations, sorting, or in scripts.
[[enable-fielddata-text-fields]]
==== Enabling fielddata on `text` fields
You can enable fielddata on an existing `text` field using the

View File

@ -91,6 +91,7 @@ become meaningless. Elasticsearch makes it easy to check how many documents
have malformed fields by using `exists` or `term` queries on the special
<<mapping-ignored-field,`_ignored`>> field.
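For instance, documents that had at least one field ignored at index time can be found with an `exists` query on that field:
[source,js]
--------------------------------------------------
GET /_search
{
  "query": {
    "exists": { "field": "_ignored" }
  }
}
--------------------------------------------------
// NOTCONSOLE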
[[json-object-limits]]
==== Limits for JSON Objects
You can't use `ignore_malformed` with the following datatypes:

View File

@ -216,6 +216,7 @@ GET twitter/_search
<1> The explicit `type` field takes the place of the implicit `_type` field.
[float]
[[parent-child-mapping-types]]
==== Parent/Child without mapping types
Previously, a parent-child relationship was represented by making one mapping

View File

@ -323,6 +323,7 @@ POST /example/_doc
// CONSOLE
[float]
[[linestring]]
===== http://geojson.org/geojson-spec.html#id3[LineString]
A `linestring` is defined by an array of two or more positions. By
@ -357,6 +358,7 @@ The above `linestring` would draw a straight line starting at the White
House to the US Capitol Building.
[float]
[[polygon]]
===== http://www.geojson.org/geojson-spec.html#id4[Polygon]
A polygon is defined by a list of lists of points. The first and last
@ -473,6 +475,7 @@ POST /example/_doc
// CONSOLE
[float]
[[multipoint]]
===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint]
The following is an example of a list of geojson points:
@ -503,6 +506,7 @@ POST /example/_doc
// CONSOLE
[float]
[[multilinestring]]
===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString]
The following is an example of a list of geojson linestrings:
@ -535,6 +539,7 @@ POST /example/_doc
// CONSOLE
[float]
[[multipolygon]]
===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon]
The following is an example of a list of geojson polygons (second polygon contains a hole):
@ -567,6 +572,7 @@ POST /example/_doc
// CONSOLE
[float]
[[geometry_collection]]
===== http://geojson.org/geojson-spec.html#geometrycollection[Geometry Collection]
The following is an example of a collection of geojson geometry objects:

View File

@ -69,6 +69,7 @@ The following parameters are accepted by `ip` fields:
the <<mapping-source-field,`_source`>> field. Accepts `true` or `false`
(default).
[[query-ip-fields]]
==== Querying `ip` fields
The most common way to query ip addresses is to use the

View File

@ -65,6 +65,7 @@ GET my_index/_search
// CONSOLE
// TEST[continued]
[[nested-fields-array-objects]]
==== Using `nested` fields for arrays of objects
If you need to index arrays of objects and to maintain the independence of
@ -200,7 +201,7 @@ document is indexed as a separate document. To safeguard against ill-defined map
the number of nested fields that can be defined per index has been limited to 50. See
<<mapping-limit-settings>>.
[[limit-nested-json-objects-number]]
==== Limiting the number of `nested` json objects
Indexing a document with an array of 100 objects within a nested field will actually
create 101 documents, as each nested object will be indexed as a separate document.
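To make the arithmetic concrete, a sketch with a hypothetical `comments` field: indexing the document below produces three Lucene documents, one per comment plus the root document.
[source,js]
--------------------------------------------------
PUT my_index
{
  "mappings": {
    "properties": {
      "comments": { "type": "nested" }
    }
  }
}

PUT my_index/_doc/1
{
  "title": "Some post",
  "comments": [
    { "author": "kim", "text": "nice" },
    { "author": "lee", "text": "thanks" }
  ]
}
--------------------------------------------------
// NOTCONSOLE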

View File

@ -720,6 +720,7 @@ fail.
==== Limitations
[float]
[[parent-child]]
===== Parent/child
Because the `percolate` query processes one document at a time, it doesn't support queries and filters that run

View File

@ -112,6 +112,7 @@ The following example shows the difference in years between the `date` fields da
`doc['date1'].date.year - doc['date0'].date.year`
[float]
[[geo-point-field-api]]
=== `geo_point` field API
[cols="<,<",options="header",]
|=======================================================================

View File

@ -694,6 +694,7 @@ GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
// TEST[continued]
[float]
[[monitor-snapshot-restore-progress]]
=== Monitoring snapshot/restore progress
There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both

View File

@ -69,6 +69,7 @@ thread_pool:
The following are the types of thread pools and their respective parameters:
[float]
[[fixed]]
==== `fixed`
The `fixed` thread pool holds a fixed number of threads to handle the
@ -92,6 +93,7 @@ thread_pool:
--------------------------------------------------
[float]
[[fixed-auto-queue-size]]
==== `fixed_auto_queue_size`
experimental[]
@ -138,6 +140,7 @@ thread_pool:
--------------------------------------------------
[float]
[[scaling]]
==== `scaling`
The `scaling` thread pool holds a dynamic number of threads. This

View File

@ -58,6 +58,7 @@ POST _search
--------------------------------------------------
// CONSOLE
[[score-bool-filter]]
==== Scoring with `bool.filter`
Queries specified under the `filter` element have no effect on scoring --

View File

@ -44,6 +44,7 @@ These documents would *not* match the above query:
<3> The `user` field is missing completely.
[float]
[[null-value-mapping]]
==== `null_value` mapping
If the field mapping includes the <<null-value,`null_value`>> setting
@ -86,6 +87,7 @@ no values in the `user` field and thus would not match the `exists` filter:
--------------------------------------------------
// NOTCONSOLE
[[missing-query]]
==== `missing` query
There isn't a `missing` query. Instead, use the `exists` query inside a

View File

@ -62,6 +62,7 @@ GET /_search
// CONSOLE
[float]
[[min-max-children]]
==== Min/Max Children
The `has_child` query allows you to specify that a minimum and/or maximum

View File

@ -18,6 +18,7 @@ GET /_search
--------------------------------------------------
// CONSOLE
[[ids-query-top-level-parameters]]
==== Top-level parameters for `ids`
[cols="v,v",options="header"]

View File

@ -21,6 +21,7 @@ GET /_search
<2> The fields to be queried.
[float]
[[field-boost]]
==== `fields` and per-field boosting
Fields can be specified with wildcards, e.g.:
@ -391,6 +392,7 @@ Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`,
`lenient`, `zero_terms_query` and `cutoff_frequency`, as explained in
<<query-dsl-match-query, match query>>.
[[cross-field-analysis]]
===== `cross_field` and analysis
The `cross_field` type can only work in term-centric mode on fields that have
@ -499,6 +501,7 @@ which will be executed as:
blended("will", fields: [first, first.edge, last.edge, last])
blended("smith", fields: [first, first.edge, last.edge, last])
[[tie-breaker]]
===== `tie_breaker`
By default, each per-term `blended` query will use the best score returned by

View File

@ -220,7 +220,7 @@ field, whose only drawback is that scores will change if the document is
updated since update operations also update the value of the `_seq_no` field.
[[decay-functions]]
[[decay-functions-numeric-fields]]
===== Decay functions for numeric fields
You can read more about decay functions
{ref}/query-dsl-function-score-query.html#function-decay[here].
@ -310,10 +310,12 @@ Script Score Query will be a substitute for it.
Here we describe how Function Score Query's functions can be
equivalently implemented in Script Score Query:
[[script-score]]
===== `script_score`
Whatever you used in `script_score` of the Function Score query can be
copied into the Script Score query. No changes are needed here.
[[weight]]
===== `weight`
The `weight` function can be implemented in the Script Score query through
the following script:
@ -329,12 +331,13 @@ the following script:
--------------------------------------------------
// NOTCONSOLE
[[random-score]]
===== `random_score`
Use the `randomScore` function
as described in <<random-score-function, random score function>>.
[[field-value-factor]]
===== `field_value_factor`
The `field_value_factor` function can easily be implemented through a script:
@ -384,7 +387,7 @@ through a script:
| `reciprocal` | `1.0 / doc['f'].value`
|=======================================================================
[[decay-functions]]
===== `decay functions`
The Script Score query has equivalent <<decay-functions, decay functions>>
that can be used in a script.

View File

@ -32,6 +32,7 @@ To help simplify the problem, we have limited search to just one rollup index at
may be able to open this up to multiple rollup jobs.
[float]
[[aggregate-stored-only]]
=== Can only aggregate what's been stored
This is a perhaps obvious limitation, but rollups can only aggregate on data that has been stored in the rollups. If you don't configure the

View File

@ -243,6 +243,7 @@ sufficient to see that a particular component of a query is slow, and not necess
the `advance` phase of that query is the cause, for example.
=======================================
[[query-section]]
==== `query` Section
The `query` section contains detailed timing of the query tree executed by Lucene on a particular shard.
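The timing tree is produced by running an ordinary search with profiling enabled, for example (the query itself is illustrative):
[source,js]
--------------------------------------------------
GET /_search
{
  "profile": true,
  "query": {
    "match": { "message": "some text" }
  }
}
--------------------------------------------------
// NOTCONSOLE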
@ -399,6 +400,7 @@ The meaning of the stats are as follows:
means the `nextDoc()` method was called on two different documents. This can be used to help judge
how selective queries are, by comparing counts between different query components.
[[collectors-section]]
==== `collectors` Section
The Collectors portion of the response shows high-level execution details. Lucene works by defining a "Collector"
@ -485,7 +487,7 @@ For reference, the various collector reasons are:
match_all query (which you will see added to the Query section) to collect your entire dataset
[[rewrite-section]]
==== `rewrite` Section
All queries in Lucene undergo a "rewriting" process. A query (and its sub-queries) may be rewritten one or
@ -694,6 +696,7 @@ Hopefully this will be fixed in future iterations, but it is a tricky problem to
[[search-profile-aggregations]]
=== Profiling Aggregations
[[agg-section]]
==== `aggregations` Section

View File

@ -136,6 +136,7 @@ The `metric` section determines which of the available evaluation metrics is goi
Currently, the following metrics are supported:
[float]
[[k-precision]]
==== Precision at K (P@k)
This metric measures the number of relevant results in the top k search results. It's a form of the well-known https://en.wikipedia.org/wiki/Information_retrieval#Precision[Precision] metric that only looks at the top k documents. It is the fraction of relevant documents in those first k

View File

@ -39,7 +39,6 @@ endif::verifies[]
Supported cipher suites can be found in Oracle's http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html[
Java Cryptography Architecture documentation]. Defaults to ``.
===== {component} TLS/SSL Key and Trusted Certificate Settings
The following settings are used to specify a private key, certificate, and the

View File

@ -173,6 +173,7 @@ include::xpack-indices.asciidoc[]
endif::include-xpack[]
[[deb-sysv-init-vs-systemd]]
==== SysV `init` vs `systemd`
include::init-systemd.asciidoc[]

View File

@ -287,6 +287,7 @@ comfortable with them adding the `--batch` flag to the plugin install command.
See {plugins}/_other_command_line_parameters.html[Plugin Management documentation]
for more details.
[[override-image-default]]
===== D. Override the image's default https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options[CMD]
Options can be passed as command-line options to the {es} process by

View File

@ -160,6 +160,7 @@ include::xpack-indices.asciidoc[]
endif::include-xpack[]
[[rpm-sysv-init-vs-systemd]]
==== SysV `init` vs `systemd`
include::init-systemd.asciidoc[]

View File

@ -34,9 +34,11 @@ include::install/zip-windows-start.asciidoc[]
include::install/init-systemd.asciidoc[]
[float]
[[start-es-deb-init]]
include::install/deb-init.asciidoc[]
[float]
[[start-es-deb-systemd]]
include::install/systemd.asciidoc[]
[float]
@ -66,7 +68,9 @@ include::install/msi-windows-start.asciidoc[]
include::install/init-systemd.asciidoc[]
[float]
[[start-es-rpm-init]]
include::install/rpm-init.asciidoc[]
[float]
[[start-es-rpm-systemd]]
include::install/systemd.asciidoc[]

View File

@ -156,6 +156,7 @@ Opens up a {es-sql} connection to `server` on port `3456`, setting the JDBC conn
One can use JDBC through the official `java.sql` and `javax.sql` packages:
[[java-sql]]
==== `java.sql`
The former through `java.sql.Driver` and `DriverManager`:
@ -168,6 +169,7 @@ HTTP traffic. The port is by default 9200.
<2> Properties for connecting to Elasticsearch. An empty `Properties`
instance is fine for unsecured Elasticsearch.
[[javax-sql]]
==== `javax.sql`
Accessible through the `javax.sql.DataSource` API:

View File

@ -4,6 +4,7 @@
== SQL Limitations
[float]
[[sys-columns-describe-table-nested-fields]]
=== Nested fields in `SYS COLUMNS` and `DESCRIBE TABLE`
{es} has a special type of relationship fields called `nested` fields. In {es-sql} they can be used by referencing their inner
@ -51,6 +52,7 @@ This is because of the way nested queries work in {es}: the root nested field wi
pagination taking place on the **root nested document and not on its inner hits**.
[float]
[[normalized-keyword-fields]]
=== Normalized `keyword` fields
`keyword` fields in {es} can be normalized by defining a `normalizer`. Such fields are not supported in {es-sql}.
@ -108,6 +110,7 @@ But, if the sub-select would include a `GROUP BY` or `HAVING` or the enclosing `
FROM (SELECT ...) WHERE [simple_condition]`, this is currently **unsupported**.
[float]
[[first-last-agg-functions-having-clause]]
=== Using <<sql-functions-aggs-first, `FIRST`>>/<<sql-functions-aggs-last,`LAST`>> aggregation functions in `HAVING` clause
Using `FIRST` and `LAST` in the `HAVING` clause is not supported. The same applies to
@ -115,6 +118,7 @@ Using `FIRST` and `LAST` in the `HAVING` clause is not supported. The same appli
is of type <<keyword, `keyword`>> as they are internally translated to `FIRST` and `LAST`.
[float]
[[group-by-time]]
=== Using TIME data type in GROUP BY or <<sql-functions-grouping-histogram>>
Using the `TIME` data type as a grouping key is currently not supported. For example:

View File

@ -7,6 +7,7 @@
In such a scenario, {es-sql} supports both security at the transport layer (by encrypting the communication between the consumer and the server) and authentication (for the access layer).
[float]
[[ssl-tls-config]]
==== SSL/TLS configuration
In the case of an encrypted transport, SSL/TLS support needs to be enabled in {es-sql} to properly establish communication with {es}. This is done by setting the `ssl` property to `true` or by using the `https` prefix in the URL. +