mirror of https://github.com/honeymoose/OpenSearch.git (synced 2025-02-17 02:14:54 +00:00)
commit 60876a0e32
parent db8788e5a2
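Every hunk in this diff makes the same substitution: a hard-coded `https://en.wikipedia.org/wiki/...` link is replaced by a reference to a `{wikipedia}` AsciiDoc attribute. For the rendered links to stay identical, the docs build must define that attribute once in a shared file; a minimal sketch of such a definition follows (the file path in the comment is an assumption for illustration, not part of this commit):

[source,asciidoc]
----
// Hypothetical shared attribute file, e.g. docs/Versions.asciidoc
// (the exact location is an assumption, not shown in this diff)
:wikipedia: https://en.wikipedia.org/wiki

// With the attribute defined, a link written as
//   {wikipedia}/Unix_time[Unix epoch]
// renders exactly like the old hard-coded form
//   https://en.wikipedia.org/wiki/Unix_time[Unix epoch]
----

Centralizing the base URL means a future change, for example pointing at a different Wikipedia mirror, touches one attribute definition instead of every page.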
@@ -37,10 +37,10 @@ include-tagged::{doc-tests-file}[{api}-evaluation-outlierdetection]
 <2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) label for an example. Must be either true or false.
 <3> Name of the field in the index. Its value denotes the probability (as per some ML algorithm) of the example being classified as positive.
 <4> The remaining parameters are the metrics to be calculated based on the two fields described above
-<5> https://en.wikipedia.org/wiki/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
-<6> https://en.wikipedia.org/wiki/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7
-<7> https://en.wikipedia.org/wiki/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
-<8> https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated and the curve points returned
+<5> {wikipedia}/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
+<6> {wikipedia}/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7
+<7> {wikipedia}/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
+<8> {wikipedia}/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated and the curve points returned

 ===== Classification

@@ -67,10 +67,10 @@ include-tagged::{doc-tests-file}[{api}-evaluation-regression]
 <2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) value for an example.
 <3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) value for the example.
 <4> The remaining parameters are the metrics to be calculated based on the two fields described above
-<5> https://en.wikipedia.org/wiki/Mean_squared_error[Mean squared error]
+<5> {wikipedia}/Mean_squared_error[Mean squared error]
 <6> Mean squared logarithmic error
-<7> https://en.wikipedia.org/wiki/Huber_loss#Pseudo-Huber_loss_function[Pseudo Huber loss]
-<8> https://en.wikipedia.org/wiki/Coefficient_of_determination[R squared]
+<7> {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[Pseudo Huber loss]
+<8> {wikipedia}/Coefficient_of_determination[R squared]

 include::../execution.asciidoc[]

@@ -24,7 +24,7 @@ milliseconds since an epoch of 1970-01-01 00:00:00 Zulu Time
 string:: a datetime representation as a sequence of characters defined by
 a standard format or a custom format; in Painless this is typically a
 <<string-type, String>> of the standard format
-https://en.wikipedia.org/wiki/ISO_8601[ISO 8601]
+{wikipedia}/ISO_8601[ISO 8601]
 complex:: a datetime representation as a complex type
 (<<reference-types, object>>) that abstracts away internal details of how the
 datetime is stored and often provides utilities for modification and
@@ -4,7 +4,7 @@
 ==== Debug.Explain

 Painless doesn't have a
-https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop[REPL]
+{wikipedia}/Read%E2%80%93eval%E2%80%93print_loop[REPL]
 and while it'd be nice for it to have one day, it wouldn't tell you the
 whole story around debugging painless scripts embedded in Elasticsearch because
 the data that the scripts have access to or "context" is so important. For now
@@ -1,11 +1,11 @@
 [[modules-scripting-painless-dispatch]]
 === How painless dispatches functions

-Painless uses receiver, name, and https://en.wikipedia.org/wiki/Arity[arity]
+Painless uses receiver, name, and {wikipedia}/Arity[arity]
 for method dispatch. For example, `s.foo(a, b)` is resolved by first getting
 the class of `s` and then looking up the method `foo` with two parameters. This
 is different from Groovy which uses the
-https://en.wikipedia.org/wiki/Multiple_dispatch[runtime types] of the
+{wikipedia}/Multiple_dispatch[runtime types] of the
 parameters and Java which uses the compile time types of the parameters.

 The consequence of this is that Painless doesn't support overloaded methods like
@@ -1,7 +1,7 @@
 [[search-aggregations-bucket-adjacency-matrix-aggregation]]
 === Adjacency Matrix Aggregation

-A bucket aggregation returning a form of https://en.wikipedia.org/wiki/Adjacency_matrix[adjacency matrix].
+A bucket aggregation returning a form of {wikipedia}/Adjacency_matrix[adjacency matrix].
 The request provides a collection of named filter expressions, similar to the `filters` aggregation
 request.
 Each bucket in the response represents a non-empty cell in the matrix of intersecting filters.
@@ -104,7 +104,7 @@ Response:
 ==== Usage
 On its own this aggregation can provide all of the data required to create an undirected weighted graph.
 However, when used with child aggregations such as a `date_histogram` the results can provide the
-additional levels of data required to perform https://en.wikipedia.org/wiki/Dynamic_network_analysis[dynamic network analysis]
+additional levels of data required to perform {wikipedia}/Dynamic_network_analysis[dynamic network analysis]
 where examining interactions _over time_ becomes important.

 ==== Limitations
@@ -362,7 +362,7 @@ include::datehistogram-aggregation.asciidoc[tag=offset-note]
 The `geotile_grid` value source works on `geo_point` fields and groups points into buckets that represent
 cells in a grid. The resulting grid can be sparse and only contains cells
 that have matching data. Each cell corresponds to a
-https://en.wikipedia.org/wiki/Tiled_web_map[map tile] as used by many online map
+{wikipedia}/Tiled_web_map[map tile] as used by many online map
 sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal
 to the user-specified precision.

@@ -4,7 +4,7 @@
 A multi-bucket aggregation that works on `geo_point` fields and groups points into
 buckets that represent cells in a grid. The resulting grid can be sparse and only
 contains cells that have matching data. Each cell corresponds to a
-https://en.wikipedia.org/wiki/Tiled_web_map[map tile] as used by many online map
+{wikipedia}/Tiled_web_map[map tile] as used by many online map
 sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal
 to the user-specified precision.

@@ -295,7 +295,7 @@ a multi-value metrics aggregation, and in case of a single-value metrics aggrega

 The path must be defined in the following form:

-// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
+// {wikipedia}/Extended_Backus%E2%80%93Naur_Form
 [source,ebnf]
 --------------------------------------------------
 AGG_SEPARATOR = '>' ;
@@ -67,7 +67,7 @@ from all the existing ones. At most `shard_size` total buckets are created.

 In the reduce step, the coordinating node sorts the buckets from all shards by their centroids. Then, the two buckets
 with the nearest centroids are repeatedly merged until the target number of buckets is achieved.
-This merging procedure is a form of https://en.wikipedia.org/wiki/Hierarchical_clustering[agglomerative hierarchical clustering].
+This merging procedure is a form of {wikipedia}/Hierarchical_clustering[agglomerative hierarchical clustering].

 TIP: A shard can return fewer than `shard_size` buckets, but it cannot return more.

@@ -7,7 +7,7 @@ A `boxplot` metrics aggregation that computes boxplot of numeric values extracte
 These values can be generated by a provided script or extracted from specific numeric or
 <<histogram,histogram fields>> in the documents.

-The `boxplot` aggregation returns essential information for making a https://en.wikipedia.org/wiki/Box_plot[box plot]: minimum, maximum,
+The `boxplot` aggregation returns essential information for making a {wikipedia}/Box_plot[box plot]: minimum, maximum,
 median, first quartile (25th percentile) and third quartile (75th percentile) values.

 ==== Syntax
@@ -129,7 +129,7 @@ https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[C
 [WARNING]
 ====
 Boxplot, like other percentile aggregations, is also
-https://en.wikipedia.org/wiki/Nondeterministic_algorithm[non-deterministic].
+{wikipedia}/Nondeterministic_algorithm[non-deterministic].
 This means you can get slightly different results using the same data.
 ====

@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-geocentroid-aggregation]]
 === Geo Centroid Aggregation

-A metric aggregation that computes the weighted https://en.wikipedia.org/wiki/Centroid[centroid] from all coordinate values for geo fields.
+A metric aggregation that computes the weighted {wikipedia}/Centroid[centroid] from all coordinate values for geo fields.

 Example:

@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-median-absolute-deviation-aggregation]]
 === Median Absolute Deviation Aggregation

-This `single-value` aggregation approximates the https://en.wikipedia.org/wiki/Median_absolute_deviation[median absolute deviation]
+This `single-value` aggregation approximates the {wikipedia}/Median_absolute_deviation[median absolute deviation]
 of its search results.

 Median absolute deviation is a measure of variability. It is a robust
@@ -254,7 +254,7 @@ it. It would not be the case on more skewed distributions.
 [WARNING]
 ====
 Percentile aggregations are also
-https://en.wikipedia.org/wiki/Nondeterministic_algorithm[non-deterministic].
+{wikipedia}/Nondeterministic_algorithm[non-deterministic].
 This means you can get slightly different results using the same data.
 ====

@@ -15,7 +15,7 @@ The string stats aggregation returns the following results:
 * `min_length` - The length of the shortest term.
 * `max_length` - The length of the longest term.
 * `avg_length` - The average length computed over all terms.
-* `entropy` - The https://en.wikipedia.org/wiki/Entropy_(information_theory)[Shannon Entropy] value computed over all terms collected by
+* `entropy` - The {wikipedia}/Entropy_(information_theory)[Shannon Entropy] value computed over all terms collected by
 the aggregation. Shannon entropy quantifies the amount of information contained in the field. It is a very useful metric for
 measuring a wide range of properties of a data set, such as diversity, similarity, randomness etc.

@@ -8,7 +8,7 @@ tokens, it also records the following:
 * The `positionLength`, the number of positions that a token spans

 Using these, you can create a
-https://en.wikipedia.org/wiki/Directed_acyclic_graph[directed acyclic graph],
+{wikipedia}/Directed_acyclic_graph[directed acyclic graph],
 called a _token graph_, for a stream. In a token graph, each position represents
 a node. Each token represents an edge or arc, pointing to the next position.

@@ -4,7 +4,7 @@
 <titleabbrev>CJK bigram</titleabbrev>
 ++++

-Forms https://en.wikipedia.org/wiki/Bigram[bigrams] out of CJK (Chinese,
+Forms {wikipedia}/Bigram[bigrams] out of CJK (Chinese,
 Japanese, and Korean) tokens.

 This filter is included in {es}'s built-in <<cjk-analyzer,CJK language
@@ -161,7 +161,7 @@ All non-CJK input is passed through unmodified.
 `output_unigrams`
 (Optional, boolean)
 If `true`, emit tokens in both bigram and
-https://en.wikipedia.org/wiki/N-gram[unigram] form. If `false`, a CJK character
+{wikipedia}/N-gram[unigram] form. If `false`, a CJK character
 is output in unigram form when it has no adjacent characters. Defaults to
 `false`.

@@ -4,7 +4,7 @@
 <titleabbrev>Common grams</titleabbrev>
 ++++

-Generates https://en.wikipedia.org/wiki/Bigram[bigrams] for a specified set of
+Generates {wikipedia}/Bigram[bigrams] for a specified set of
 common words.

 For example, you can specify `is` and `the` as common words. This filter then
@@ -4,7 +4,7 @@
 <titleabbrev>Edge n-gram</titleabbrev>
 ++++

-Forms an https://en.wikipedia.org/wiki/N-gram[n-gram] of a specified length from
+Forms an {wikipedia}/N-gram[n-gram] of a specified length from
 the beginning of a token.

 For example, you can use the `edge_ngram` token filter to change `quick` to
@@ -4,7 +4,7 @@
 <titleabbrev>Elision</titleabbrev>
 ++++

-Removes specified https://en.wikipedia.org/wiki/Elision[elisions] from
+Removes specified {wikipedia}/Elision[elisions] from
 the beginning of tokens. For example, you can use this filter to change
 `l'avion` to `avion`.

@@ -4,7 +4,7 @@
 <titleabbrev>MinHash</titleabbrev>
 ++++

-Uses the https://en.wikipedia.org/wiki/MinHash[MinHash] technique to produce a
+Uses the {wikipedia}/MinHash[MinHash] technique to produce a
 signature for a token stream. You can use MinHash signatures to estimate the
 similarity of documents. See <<analysis-minhash-tokenfilter-similarity-search>>.

@@ -95,8 +95,8 @@ locality sensitive hashing (LSH).

 Depending on what constitutes the similarity between documents,
 various LSH functions https://arxiv.org/abs/1408.2927[have been proposed].
-For https://en.wikipedia.org/wiki/Jaccard_index[Jaccard similarity], a popular
-LSH function is https://en.wikipedia.org/wiki/MinHash[MinHash].
+For {wikipedia}/Jaccard_index[Jaccard similarity], a popular
+LSH function is {wikipedia}/MinHash[MinHash].
 A general idea of the way MinHash produces a signature for a document
 is by applying a random permutation over the whole index vocabulary (random
 numbering for the vocabulary), and recording the minimum value for this permutation
@@ -4,7 +4,7 @@
 <titleabbrev>N-gram</titleabbrev>
 ++++

-Forms https://en.wikipedia.org/wiki/N-gram[n-grams] of specified lengths from
+Forms {wikipedia}/N-gram[n-grams] of specified lengths from
 a token.

 For example, you can use the `ngram` token filter to change `fox` to
@@ -4,7 +4,7 @@
 <titleabbrev>Shingle</titleabbrev>
 ++++

-Add shingles, or word https://en.wikipedia.org/wiki/N-gram[n-grams], to a token
+Add shingles, or word {wikipedia}/N-gram[n-grams], to a token
 stream by concatenating adjacent tokens. By default, the `shingle` token filter
 outputs two-word shingles and unigrams.

@@ -4,7 +4,7 @@
 <titleabbrev>Stop</titleabbrev>
 ++++

-Removes https://en.wikipedia.org/wiki/Stop_words[stop words] from a token
+Removes {wikipedia}/Stop_words[stop words] from a token
 stream.

 When not customized, the filter removes the following English stop words by
@@ -6,7 +6,7 @@

 The `edge_ngram` tokenizer first breaks text down into words whenever it
 encounters one of a list of specified characters, then it emits
-https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of
+{wikipedia}/N-gram[N-grams] of each word where the start of
 the N-gram is anchored to the beginning of the word.

 Edge N-Grams are useful for _search-as-you-type_ queries.
@@ -6,7 +6,7 @@

 The `ngram` tokenizer first breaks text down into words whenever it encounters
 one of a list of specified characters, then it emits
-https://en.wikipedia.org/wiki/N-gram[N-grams] of each word of the specified
+{wikipedia}/N-gram[N-grams] of each word of the specified
 length.

 N-grams are like a sliding window that moves across the word - a continuous
@@ -20,7 +20,7 @@ parameter also support _multi-target syntax_.

 In multi-target syntax, you can use a comma-separated list to execute a request across multiple resources, such as
 data streams, indices, or index aliases: `test1,test2,test3`. You can also use
-https://en.wikipedia.org/wiki/Glob_(programming)[glob-like] wildcard (`*`)
+{wikipedia}/Glob_(programming)[glob-like] wildcard (`*`)
 expressions to target any
 resources that match the pattern: `test*` or `*test` or `te*t` or `*test*`.

@@ -25,7 +25,7 @@ track cluster health alongside log files and alerting systems, the API returns
 timestamps in two formats:

 * `HH:MM:SS`, which is human-readable but includes no date information.
-* https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time], which is
+* {wikipedia}/Unix_time[Unix `epoch` time], which is
 machine-sortable and includes date information. This is useful for cluster
 recoveries that take multiple days.

@@ -51,7 +51,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time]

 `ts` (timestamps)::
 (Optional, boolean) If `true`, returns `HH:MM:SS` and
-https://en.wikipedia.org/wiki/Unix_time[Unix `epoch`] timestamps. Defaults to
+{wikipedia}/Unix_time[Unix `epoch`] timestamps. Defaults to
 `true`.

 include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v]
@@ -63,7 +63,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v]
 [[cat-health-api-example-timestamp]]
 ===== Example with a timestamp
 By default, the cat health API returns `HH:MM:SS` and
-https://en.wikipedia.org/wiki/Unix_time[Unix `epoch`] timestamps. For example:
+{wikipedia}/Unix_time[Unix `epoch`] timestamps. For example:

 [source,console]
 --------------------------------------------------
@@ -241,7 +241,7 @@ Sync ID of the shard.

 `unassigned.at`, `ua`::
 Time at which the shard became unassigned in
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC)].

 `unassigned.details`, `ud`::
@@ -249,7 +249,7 @@ Details about why the shard became unassigned.

 `unassigned.for`, `uf`::
 Time at which the shard was requested to be unassigned in
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC)].

 [[reason-unassigned]]
@@ -57,14 +57,14 @@ version.
 * `SUCCESS`: The snapshot process completed with a full success.

 `start_epoch`, `ste`, `startEpoch`::
-(Default) https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time] at which
+(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
 the snapshot process started.

 `start_time`, `sti`, `startTime`::
 (Default) `HH:MM:SS` time at which the snapshot process started.

 `end_epoch`, `ete`, `endEpoch`::
-(Default) https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time] at which
+(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
 the snapshot process ended.

 `end_time`, `eti`, `endTime`::
@@ -182,7 +182,7 @@ Contains statistics for the node.
 `timestamp`::
 (integer)
 Time the node stats were collected for this response. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].

 `name`::
 (string)
@@ -824,7 +824,7 @@ type filters for <<parent-join,join>> fields.
 `max_unsafe_auto_id_timestamp`::
 (integer)
 Time of the most recently retried indexing request. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].

 `file_sizes`::
 (object)
@@ -953,7 +953,7 @@ Contains statistics about the operating system for the node.
 `timestamp`::
 (integer)
 Last time the operating system statistics were refreshed. Recorded in
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+milliseconds since the {wikipedia}/Unix_time[Unix Epoch].

 `cpu`::
 (object)
@@ -1178,7 +1178,7 @@ Contains process statistics for the node.
 `timestamp`::
 (integer)
 Last time the statistics were refreshed. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].

 `open_file_descriptors`::
 (integer)
@@ -1650,7 +1650,7 @@ Contains file store statistics for the node.
 `timestamp`::
 (integer)
 Last time the file stores statistics were refreshed. Recorded in
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+milliseconds since the {wikipedia}/Unix_time[Unix Epoch].

 `total`::
 (object)
@@ -74,7 +74,7 @@ Unique identifier for the cluster.

 `timestamp`::
 (integer)
-https://en.wikipedia.org/wiki/Unix_time[Unix timestamp], in milliseconds, of
+{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of
 the last time the cluster statistics were refreshed.

 `status`::
@@ -447,7 +447,7 @@ assigned to selected nodes.

 `max_unsafe_auto_id_timestamp`::
 (integer)
-https://en.wikipedia.org/wiki/Unix_time[Unix timestamp], in milliseconds, of
+{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of
 the most recently retried indexing request.

 `file_sizes`::
@@ -28,7 +28,7 @@ on each node in the cluster.
 Usernames and roles must be at least 1 and no more than 1024 characters. They
 can contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation,
 and printable symbols in the
-https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
+{wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
 Leading or trailing whitespace is not allowed.

 Passwords must be at least 6 characters long.
@@ -241,7 +241,7 @@ Field used to sort events with the same
 Schema (ECS)].
 +
 By default, matching events in the search response are sorted by timestamp,
-converted to milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix
+converted to milliseconds since the {wikipedia}/Unix_time[Unix
 epoch], in ascending order. If two or more events share the same timestamp, this
 field is used to sort the events in ascending, lexicographic order.

@@ -257,7 +257,7 @@ Defaults to `@timestamp`, as defined in the
 does not contain the `@timestamp` field, this value is required.

 Events in the API response are sorted by this field's value, converted to
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch], in
+milliseconds since the {wikipedia}/Unix_time[Unix epoch], in
 ascending order.

 The timestamp field is typically mapped as a <<date,`date`>> or
@@ -509,7 +509,7 @@ GET /my-index-000001/_eql/search

 The API returns the following response. Matching events in the `hits.events`
 property are sorted by <<eql-search-api-timestamp-field,timestamp>>, converted
-to milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch],
+to milliseconds since the {wikipedia}/Unix_time[Unix epoch],
 in ascending order.

 If two or more events share the same timestamp, the
@@ -70,7 +70,7 @@ GET /my-index-000001/_eql/search

 The API returns the following response. Matching events are included in the
 `hits.events` property. These events are sorted by timestamp, converted to
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch], in
+milliseconds since the {wikipedia}/Unix_time[Unix epoch], in
 ascending order.

 [source,console-result]
@@ -189,7 +189,7 @@ If `true`, matching is case-sensitive. Defaults to `false`.
 === `cidrMatch`

 Returns `true` if an IP address is contained in one or more provided
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing[CIDR] blocks.
+{wikipedia}/Classless_Inter-Domain_Routing[CIDR] blocks.

 [%collapsible]
 ====
@@ -219,8 +219,8 @@ cidrMatch(source.address, null) // returns null
 `<ip_address>`::
 (Required, string or `null`)
 IP address. Supports
-https://en.wikipedia.org/wiki/IPv4[IPv4] and
-https://en.wikipedia.org/wiki/IPv6[IPv6] addresses. If `null`, the function
+{wikipedia}/IPv4[IPv4] and
+{wikipedia}/IPv6[IPv6] addresses. If `null`, the function
 returns `null`.
 +
 If using a field as the argument, this parameter supports only the <<ip,`ip`>>
@@ -19,7 +19,7 @@ experimental::[]

 Returns up to a specified number of events or sequences, starting with the
 earliest matches. Works similarly to the
-https://en.wikipedia.org/wiki/Head_(Unix)[Unix head command].
+{wikipedia}/Head_(Unix)[Unix head command].

 [%collapsible]
 ====
@@ -53,7 +53,7 @@ Maximum number of matching events or sequences to return.

 Returns up to a specified number of events or sequences, starting with the most
 recent matches. Works similarly to the
-https://en.wikipedia.org/wiki/Tail_(Unix)[Unix tail command].
+{wikipedia}/Tail_(Unix)[Unix tail command].

 [%collapsible]
 ====
@@ -118,7 +118,7 @@ expect that if either node fails then {es} can elect the remaining node as the
 master, but it is impossible to tell the difference between the failure of a
 remote node and a mere loss of connectivity between the nodes. If both nodes
 were capable of running independent elections, a loss of connectivity would
-lead to a https://en.wikipedia.org/wiki/Split-brain_(computing)[split-brain
+lead to a {wikipedia}/Split-brain_(computing)[split-brain
 problem] and therefore data loss. {es} avoids this and
 protects your data by electing neither node as master until that node can be
 sure that it has the latest cluster state and that there is no other master in
@@ -291,7 +291,7 @@ zone as the master but it is impossible to tell the difference between the
 failure of a remote zone and a mere loss of connectivity between the zones. If
 both zones were capable of running independent elections then a loss of
 connectivity would lead to a
-https://en.wikipedia.org/wiki/Split-brain_(computing)[split-brain problem] and
+{wikipedia}/Split-brain_(computing)[split-brain problem] and
 therefore data loss. {es} avoids this and protects your data by not electing
 a node from either zone as master until that node can be sure that it has the
 latest cluster state and that there is no other master in the cluster. This may
@@ -171,7 +171,7 @@ When Elasticsearch stores `_source`, it compresses multiple documents at once
 in order to improve the overall compression ratio. For instance it is very
 common that documents share the same field names, and quite common that they
 share some field values, especially on fields that have a low cardinality or
-a https://en.wikipedia.org/wiki/Zipf%27s_law[zipfian] distribution.
+a {wikipedia}/Zipf%27s_law[zipfian] distribution.

 By default documents are compressed together in the order that they are added
 to the index. If you enabled <<index-modules-index-sorting,index sorting>>
@@ -82,7 +82,7 @@ statistics.
 === Incorporating static relevance signals into the score

 Many domains have static signals that are known to be correlated with relevance.
-For instance https://en.wikipedia.org/wiki/PageRank[PageRank] and url length are
+For instance {wikipedia}/PageRank[PageRank] and url length are
 two commonly used features for web search in order to tune the score of web
 pages independently of the query.

@@ -352,11 +352,11 @@ in the <<index-modules-index-sorting-conjunctions,index sorting documentation>>.
 === Use `preference` to optimize cache utilization

 There are multiple caches that can help with search performance, such as the
-https://en.wikipedia.org/wiki/Page_cache[filesystem cache], the
+{wikipedia}/Page_cache[filesystem cache], the
 <<shard-request-cache,request cache>> or the <<query-cache,query cache>>. Yet
 all these caches are maintained at the node level, meaning that if you run the
 same request twice in a row, have 1 <<glossary-replica-shard,replica>> or more
-and use https://en.wikipedia.org/wiki/Round-robin_DNS[round-robin], the default
+and use {wikipedia}/Round-robin_DNS[round-robin], the default
 routing algorithm, then those two requests will go to different shard copies,
 preventing node-level caches from helping.

@@ -82,7 +82,7 @@ indices.

 The +default+ value compresses stored data with LZ4
 compression, but this can be set to +best_compression+
-which uses https://en.wikipedia.org/wiki/DEFLATE[DEFLATE] for a higher
+which uses {wikipedia}/DEFLATE[DEFLATE] for a higher
 compression ratio, at the expense of slower stored fields performance.
 If you are updating the compression type, the new one will be applied
 after segments are merged. Segment merging can be forced using
@@ -146,7 +146,7 @@ Total size, in bytes, of all shards for the data stream's backing indices.
 `maximum_timestamp`::
 (integer)
 The data stream's highest `@timestamp` value, converted to milliseconds since
-the https://en.wikipedia.org/wiki/Unix_time[Unix epoch].
+the {wikipedia}/Unix_time[Unix epoch].
 +
 [NOTE]
 =====
@@ -7,7 +7,7 @@

 Similar to the <<grok-processor,Grok Processor>>, dissect also extracts structured fields out of a single text field
 within a document. However unlike the <<grok-processor,Grok Processor>>, dissect does not use
-https://en.wikipedia.org/wiki/Regular_expression[Regular Expressions]. This allows dissect's syntax to be simple and for
+{wikipedia}/Regular_expression[Regular Expressions]. This allows dissect's syntax to be simple and for
 some cases faster than the <<grok-processor,Grok Processor>>.

 Dissect matches a single text field against a defined pattern.
@@ -16,7 +16,7 @@ The only similarities which can be used out of the box, without any further
 configuration are:

 `BM25`::
-The https://en.wikipedia.org/wiki/Okapi_BM25[Okapi BM25 algorithm]. The
+The {wikipedia}/Okapi_BM25[Okapi BM25 algorithm]. The
 algorithm used by default in {es} and Lucene.

 `classic`::
@@ -5,7 +5,7 @@
 ++++

 The `binary` type accepts a binary value as a
-https://en.wikipedia.org/wiki/Base64[Base64] encoded string. The field is not
+{wikipedia}/Base64[Base64] encoded string. The field is not
 stored by default and is not searchable:

 [source,console]
@@ -103,7 +103,7 @@ format was changed early on to conform to the format used by GeoJSON.

 [NOTE]
 A point can be expressed as a {wikipedia}/Geohash[geohash].
-Geohashes are https://en.wikipedia.org/wiki/Base32[base32] encoded strings of
+Geohashes are {wikipedia}/Base32[base32] encoded strings of
 the bits of the latitude and longitude interleaved. Each character in a geohash
 adds an additional 5 bits to the precision. So the longer the hash, the more
 precise it is. For indexing purposes, geohashes are translated into
@@ -4,8 +4,8 @@
 <titleabbrev>IP</titleabbrev>
 ++++

-An `ip` field can index/store either https://en.wikipedia.org/wiki/IPv4[IPv4] or
-https://en.wikipedia.org/wiki/IPv6[IPv6] addresses.
+An `ip` field can index/store either {wikipedia}/IPv4[IPv4] or
+{wikipedia}/IPv6[IPv6] addresses.

 [source,console]
 --------------------------------------------------
@@ -80,7 +80,7 @@ The following parameters are accepted by `ip` fields:
 ==== Querying `ip` fields

 The most common way to query ip addresses is to use the
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation[CIDR]
+{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR]
 notation: `[ip_address]/[prefix_length]`. For instance:

 [source,console]
@@ -12,8 +12,8 @@ The following range types are supported:
 `long_range`:: A range of signed 64-bit integers with a minimum value of +-2^63^+ and maximum of +2^63^-1+.
 `double_range`:: A range of double-precision 64-bit IEEE 754 floating point values.
 `date_range`:: A range of date values represented as unsigned 64-bit integer milliseconds elapsed since system epoch.
-`ip_range` :: A range of ip values supporting either https://en.wikipedia.org/wiki/IPv4[IPv4] or
-https://en.wikipedia.org/wiki/IPv6[IPv6] (or mixed) addresses.
+`ip_range` :: A range of ip values supporting either {wikipedia}/IPv4[IPv4] or
+{wikipedia}/IPv6[IPv6] (or mixed) addresses.

 Below is an example of configuring a mapping with various range fields followed by an example that indexes several range types.

@@ -178,7 +178,7 @@ This query produces a similar result:
 ==== IP Range

 In addition to the range format above, IP ranges can be provided in
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] notation:
+{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] notation:

 [source,console]
 --------------------------------------------------
@@ -125,7 +125,7 @@ which outputs a prediction of values.

 `mse`:::
 (Optional, object) Average squared difference between the predicted values and the actual (`ground truth`) value.
-For more information, read https://en.wikipedia.org/wiki/Mean_squared_error[this wiki article].
+For more information, read {wikipedia}/Mean_squared_error[this wiki article].

 `msle`:::
 (Optional, object) Average squared difference between the logarithm of the predicted values and the logarithm of the actual
@@ -133,11 +133,11 @@ which outputs a prediction of values.

 `huber`:::
 (Optional, object) Pseudo Huber loss function.
-For more information, read https://en.wikipedia.org/wiki/Huber_loss#Pseudo-Huber_loss_function[this wiki article].
+For more information, read {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[this wiki article].

 `r_squared`:::
 (Optional, object) Proportion of the variance in the dependent variable that is predictable from the independent variables.
-For more information, read https://en.wikipedia.org/wiki/Coefficient_of_determination[this wiki article].
+For more information, read {wikipedia}/Coefficient_of_determination[this wiki article].

|
@ -271,12 +271,12 @@ This `aggregated_output` type works with binary classification (classification
|
|||||||
for values [0, 1]). It multiplies the outputs (in the case of the `ensemble`
|
for values [0, 1]). It multiplies the outputs (in the case of the `ensemble`
|
||||||
model, the inference model values) by the supplied `weights`. The resulting
|
model, the inference model values) by the supplied `weights`. The resulting
|
||||||
vector is summed and passed to a
|
vector is summed and passed to a
|
||||||
https://en.wikipedia.org/wiki/Sigmoid_function[`sigmoid` function]. The result
|
{wikipedia}/Sigmoid_function[`sigmoid` function]. The result
|
||||||
of the `sigmoid` function is considered the probability of class 1 (`P_1`),
|
of the `sigmoid` function is considered the probability of class 1 (`P_1`),
|
||||||
consequently, the probability of class 0 is `1 - P_1`. The class with the
|
consequently, the probability of class 0 is `1 - P_1`. The class with the
|
||||||
highest probability (either 0 or 1) is then returned. For more information about
|
highest probability (either 0 or 1) is then returned. For more information about
|
||||||
logistic regression, see
|
logistic regression, see
|
||||||
https://en.wikipedia.org/wiki/Logistic_regression[this wiki article].
|
{wikipedia}/Logistic_regression[this wiki article].
|
||||||
+
|
+
|
||||||
.Properties of `logistic_regression`
|
.Properties of `logistic_regression`
|
||||||
[%collapsible%open]
|
[%collapsible%open]
|
||||||
|
@@ -599,7 +599,7 @@ Advanced configuration option. The shrinkage applied to the weights. Smaller
 values result in larger forests which have a better generalization error.
 However, the smaller the value the longer the training will take. For more
 information about shrinkage, see
-https://en.wikipedia.org/wiki/Gradient_boosting#Shrinkage[this wiki article]. By
+{wikipedia}/Gradient_boosting#Shrinkage[this wiki article]. By
 default, this value is calculated during hyperparameter optimization.
 end::eta[]

@@ -53,7 +53,7 @@ The max size of allowed headers. Defaults to `8KB`.
 Support for compression when possible (with Accept-Encoding). If HTTPS is enabled, defaults to `false`. Otherwise, defaults to `true`.
 +
 Disabling compression for HTTPS mitigates potential security risks, such as a
-https://en.wikipedia.org/wiki/BREACH[BREACH attack]. To compress HTTPS traffic,
+{wikipedia}/BREACH[BREACH attack]. To compress HTTPS traffic,
 you must explicitly set `http.compression` to `true`.
 // end::http-compression-tag[]

@@ -64,7 +64,7 @@ Defines the compression level to use for HTTP responses. Valid values are in the
 // tag::http-cors-enabled-tag[]
 `http.cors.enabled` {ess-icon}::
 Enable or disable cross-origin resource sharing, which determines whether a browser on another origin can execute requests against {es}. Set to `true` to enable {es} to process pre-flight
-https://en.wikipedia.org/wiki/Cross-origin_resource_sharing[CORS] requests.
+{wikipedia}/Cross-origin_resource_sharing[CORS] requests.
 {es} will respond to those requests with the `Access-Control-Allow-Origin` header if the `Origin` sent in the request is permitted by the `http.cors.allow-origin` list. Set to `false` (the default) to make {es} ignore the `Origin` request header, effectively disabling CORS requests because {es} will never respond with the `Access-Control-Allow-Origin` response header.
 +
 NOTE: If the client does not send a pre-flight request with an `Origin` header or it does not check the response headers from the server to validate the
@@ -124,7 +124,7 @@ The maximum number of warning headers in client HTTP responses. Defaults to `unb
 The maximum total size of warning headers in client HTTP responses. Defaults to `unbounded`.

 `http.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
 setting. Defaults to `network.tcp.no_delay`.

 `http.tcp.keep_alive`::
@@ -34,7 +34,7 @@ at least some of the other nodes in the cluster. This setting provides the
 initial list of addresses this node will try to contact. Accepts IP addresses
 or hostnames. If a hostname lookup resolves to multiple IP addresses then each
 IP address will be used for discovery.
-https://en.wikipedia.org/wiki/Round-robin_DNS[Round robin DNS] -- returning a
+{wikipedia}/Round-robin_DNS[Round robin DNS] -- returning a
 different IP from a list on each lookup -- can be used for discovery; non-
 existent IP addresses will throw exceptions and cause another DNS lookup on the
 next round of pinging (subject to <<networkaddress-cache-ttl,JVM DNS
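
Round-robin discovery needs nothing more than ordinary DNS resolution; the sketch below lists every address a seed hostname resolves to, each of which would be used as a discovery seed. The hostname is a hypothetical placeholder:

[source,python]
----
import socket

# "es-seeds.example.com" is a hypothetical round-robin DNS name; each
# distinct address it resolves to acts as one discovery seed address.
addresses = {info[4][0]
             for info in socket.getaddrinfo("es-seeds.example.com", 9300)}
for addr in sorted(addresses):
    print(addr)
----
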
@@ -126,7 +126,7 @@ Any component that uses TCP (like the <<modules-http,HTTP>> and
 <<modules-transport,transport>> layers) share the following settings:

 `network.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
 setting. Defaults to `true`.

 `network.tcp.keep_alive`::
@@ -54,7 +54,7 @@ TCP keep-alives apply to all kinds of long-lived connections and not just to
 transport connections.

 `transport.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
 setting. Defaults to `network.tcp.no_delay`.

 `transport.tcp.keep_alive`::
@@ -309,19 +309,19 @@ There are a number of options for the `field_value_factor` function:
 | Modifier | Meaning

 | `none` | Do not apply any multiplier to the field value
-| `log` | Take the https://en.wikipedia.org/wiki/Common_logarithm[common logarithm] of the field value.
+| `log` | Take the {wikipedia}/Common_logarithm[common logarithm] of the field value.
 Because this function will return a negative value and cause an error if used on values
 between 0 and 1, it is recommended to use `log1p` instead.
 | `log1p` | Add 1 to the field value and take the common logarithm
 | `log2p` | Add 2 to the field value and take the common logarithm
-| `ln` | Take the https://en.wikipedia.org/wiki/Natural_logarithm[natural logarithm] of the field value.
+| `ln` | Take the {wikipedia}/Natural_logarithm[natural logarithm] of the field value.
 Because this function will return a negative value and cause an error if used on values
 between 0 and 1, it is recommended to use `ln1p` instead.
 | `ln1p` | Add 1 to the field value and take the natural logarithm
 | `ln2p` | Add 2 to the field value and take the natural logarithm
 | `square` | Square the field value (multiply it by itself)
-| `sqrt` | Take the https://en.wikipedia.org/wiki/Square_root[square root] of the field value
+| `sqrt` | Take the {wikipedia}/Square_root[square root] of the field value
-| `reciprocal` | https://en.wikipedia.org/wiki/Multiplicative_inverse[Reciprocate] the field value, same as `1/x` where `x` is the field's value
+| `reciprocal` | {wikipedia}/Multiplicative_inverse[Reciprocate] the field value, same as `1/x` where `x` is the field's value
 |=======================================================================

 `missing`::
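
The modifier table above maps directly onto elementary math functions. A hedged Python sketch of the documented formulas (an illustration, not Lucene's implementation):

[source,python]
----
import math

MODIFIERS = {
    "none":       lambda x: x,
    "log":        lambda x: math.log10(x),   # negative for 0 < x < 1
    "log1p":      lambda x: math.log10(x + 1),
    "log2p":      lambda x: math.log10(x + 2),
    "ln":         lambda x: math.log(x),     # negative for 0 < x < 1
    "ln1p":       lambda x: math.log(x + 1),
    "ln2p":       lambda x: math.log(x + 2),
    "square":     lambda x: x * x,
    "sqrt":       lambda x: math.sqrt(x),
    "reciprocal": lambda x: 1 / x,
}

print(MODIFIERS["log"](0.5))    # -0.301..., which is why `log1p` is safer
print(MODIFIERS["log1p"](0.5))  # 0.176...
----
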
@@ -5,7 +5,7 @@
 ++++

 Returns documents that contain terms similar to the search term, as measured by
-a https://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance].
+a {wikipedia}/Levenshtein_distance[Levenshtein edit distance].

 An edit distance is the number of one-character changes needed to turn one term
 into another. These changes can include:
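
The edit distance underlying the fuzzy query can be made concrete with the classic dynamic-programming formulation. This sketch covers insertions, deletions, and substitutions only, which is the plain Levenshtein definition:

[source,python]
----
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("box", "fox"))  # 1: a single substitution
----
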
@@ -16,7 +16,7 @@ following queries:

 To execute them, Lucene changes these queries to a simpler form, such as a
 <<query-dsl-bool-query, `bool` query>> or a
-https://en.wikipedia.org/wiki/Bit_array[bit set].
+{wikipedia}/Bit_array[bit set].

 The `rewrite` parameter determines:

@@ -165,7 +165,7 @@ value for a <<number,numeric>> field, are ignored. Defaults to `false`.
 +
 --
 (Optional, integer) Maximum number of
-https://en.wikipedia.org/wiki/Deterministic_finite_automaton[automaton states]
+{wikipedia}/Deterministic_finite_automaton[automaton states]
 required for the query. Default is `10000`.

 {es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse
@@ -217,9 +217,9 @@ information, see the <<query-dsl-multi-term-rewrite, `rewrite` parameter>>.
 +
 --
 (Optional, string)
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC) offset] or
-https://en.wikipedia.org/wiki/List_of_tz_database_time_zones[IANA time zone]
+{wikipedia}/List_of_tz_database_time_zones[IANA time zone]
 used to convert `date` values in the query string to UTC.

 Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA
@@ -66,7 +66,7 @@ For valid syntax, see <<mapping-date-format,`format`>>.
 ====
 If a `format` and `date` value are incomplete, {es} replaces any missing year,
 month, or date component with the start of
-https://en.wikipedia.org/wiki/Unix_time[Unix time], which is January 1st, 1970.
+{wikipedia}/Unix_time[Unix time], which is January 1st, 1970.

 For example, if the `format` value is `dd`, {es} converts a `gte` value of `10`
 to `1970-01-10T00:00:00.000Z`.
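
A small Python sketch of the fill-in rule in this hunk, assuming only what is documented: every component missing from the format is taken from the start of Unix time.

[source,python]
----
from datetime import datetime, timezone

# Format "dd" supplies only the day; year, month, and time come from
# the start of Unix time (1970-01-01T00:00:00Z).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
gte = epoch.replace(day=10)  # a `gte` value of `10`
print(gte.isoformat())       # 1970-01-10T00:00:00+00:00
----
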
@@ -95,9 +95,9 @@ Matches documents with a range field value entirely within the query's range.
 +
 --
 (Optional, string)
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC) offset] or
-https://en.wikipedia.org/wiki/List_of_tz_database_time_zones[IANA time zone]
+{wikipedia}/List_of_tz_database_time_zones[IANA time zone]
 used to convert `date` values in the query to UTC.

 Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA
@@ -5,7 +5,7 @@
 ++++

 Returns documents that contain terms matching a
-https://en.wikipedia.org/wiki/Regular_expression[regular expression].
+{wikipedia}/Regular_expression[regular expression].

 A regular expression is a way to match patterns in data using placeholder
 characters, called operators. For a list of operators supported by the
@@ -71,7 +71,7 @@ expression syntax>>.
 +
 --
 (Optional, integer) Maximum number of
-https://en.wikipedia.org/wiki/Deterministic_finite_automaton[automaton states]
+{wikipedia}/Deterministic_finite_automaton[automaton states]
 required for the query. Default is `10000`.

 {es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse
@@ -1,7 +1,7 @@
 [[regexp-syntax]]
 == Regular expression syntax

-A https://en.wikipedia.org/wiki/Regular_expression[regular expression] is a way to
+A {wikipedia}/Regular_expression[regular expression] is a way to
 match patterns in data using placeholder characters, called operators.

 {es} supports regular expressions in the following queries:
@@ -44,7 +44,7 @@ backslash or surround it with double quotes. For example:
 === Standard operators

 Lucene's regular expression engine does not use the
-https://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions[Perl
+{wikipedia}/Perl_Compatible_Regular_Expressions[Perl
 Compatible Regular Expressions (PCRE)] library, but it does support the
 following standard operators.

@@ -19,7 +19,7 @@ examples.

 Similar to the `geo_shape` query, the `shape` query uses
 http://geojson.org[GeoJSON] or
-https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text]
+{wikipedia}/Well-known_text_representation_of_geometry[Well Known Text]
 (WKT) to represent shapes.

 Given the following index:
@@ -39,7 +39,7 @@ Returns documents that contain terms within a provided range.

 <<query-dsl-regexp-query,`regexp` query>>::
 Returns documents that contain terms matching a
-https://en.wikipedia.org/wiki/Regular_expression[regular expression].
+{wikipedia}/Regular_expression[regular expression].

 <<query-dsl-term-query,`term` query>>::
 Returns documents that contain an exact term in a provided field.
@@ -3,7 +3,7 @@

 While Elasticsearch contributors make every effort to prevent scripts from
 running amok, security is something best done in
-https://en.wikipedia.org/wiki/Defense_in_depth_(computing)[layers] because
+{wikipedia}/Defense_in_depth_(computing)[layers] because
 all software has bugs and it is important to minimize the risk of failure in
 any security layer. Find below rules of thumb for how to keep Elasticsearch
 from being a vulnerability.
@@ -63,7 +63,7 @@ preventing them from being able to do things like write files and listen to
 sockets.

 Elasticsearch uses
-https://en.wikipedia.org/wiki/Seccomp[seccomp] in Linux,
+{wikipedia}/Seccomp[seccomp] in Linux,
 https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design[Seatbelt]
 in macOS, and
 https://msdn.microsoft.com/en-us/library/windows/desktop/ms684147[ActiveProcessLimit]
@@ -214,7 +214,7 @@ will be used. The following metrics are supported:

 This metric measures the proportion of relevant results in the top k search results.
 It's a form of the well-known
-https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision[Precision]
+{wikipedia}/Evaluation_measures_(information_retrieval)#Precision[Precision]
 metric that only looks at the top k documents. It is the fraction of relevant
 documents in those first k results. A precision at 10 (P@10) value of 0.6 then
 means 6 out of the 10 top hits are relevant with respect to the user's
|
|||||||
|
|
||||||
This metric measures the total number of relevant results in the top k search
|
This metric measures the total number of relevant results in the top k search
|
||||||
results. It's a form of the well-known
|
results. It's a form of the well-known
|
||||||
https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Recall[Recall]
|
{wikipedia}/Evaluation_measures_(information_retrieval)#Recall[Recall]
|
||||||
metric. It is the fraction of relevant documents in those first k results
|
metric. It is the fraction of relevant documents in those first k results
|
||||||
relative to all possible relevant results. A recall at 10 (R@10) value of 0.5 then
|
relative to all possible relevant results. A recall at 10 (R@10) value of 0.5 then
|
||||||
means 4 out of 8 relevant documents, with respect to the user's information
|
means 4 out of 8 relevant documents, with respect to the user's information
|
||||||
@ -322,7 +322,7 @@ For every query in the test suite, this metric calculates the reciprocal of the
|
|||||||
rank of the first relevant document. For example, finding the first relevant
|
rank of the first relevant document. For example, finding the first relevant
|
||||||
result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for
|
result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for
|
||||||
each query is averaged across all queries in the test suite to give the
|
each query is averaged across all queries in the test suite to give the
|
||||||
https://en.wikipedia.org/wiki/Mean_reciprocal_rank[mean reciprocal rank].
|
{wikipedia}/Mean_reciprocal_rank[mean reciprocal rank].
|
||||||
|
|
||||||
[source,console]
|
[source,console]
|
||||||
--------------------------------
|
--------------------------------
|
||||||
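
An illustrative Python sketch of the same calculation, with two hypothetical queries:

[source,python]
----
def mean_reciprocal_rank(queries):
    """`queries` maps each query to (ranked_ids, relevant_ids).
    The reciprocal rank of a query is 1 / rank of the first relevant
    hit (0 if none); MRR averages this over all queries."""
    total = 0.0
    for ranked, relevant in queries.values():
        total += next((1 / (i + 1) for i, doc in enumerate(ranked)
                       if doc in relevant), 0.0)
    return total / len(queries)

queries = {
    "q1": (["x", "x", "hit"], {"hit"}),  # first relevant at rank 3 -> 1/3
    "q2": (["hit", "x", "x"], {"hit"}),  # first relevant at rank 1 -> 1
}
print(mean_reciprocal_rank(queries))  # (1/3 + 1) / 2 = 0.666...
----
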
@@ -360,7 +360,7 @@ in the query. Defaults to 10.
 ===== Discounted cumulative gain (DCG)

 In contrast to the two metrics above,
-https://en.wikipedia.org/wiki/Discounted_cumulative_gain[discounted cumulative gain]
+{wikipedia}/Discounted_cumulative_gain[discounted cumulative gain]
 takes both the rank and the rating of the search results into account.

 The assumption is that highly relevant documents are more useful for the user
|
|||||||
|Parameter |Description
|
|Parameter |Description
|
||||||
|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter
|
|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter
|
||||||
in the query. Defaults to 10.
|
in the query. Defaults to 10.
|
||||||
|`normalize` | If set to `true`, this metric will calculate the https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG].
|
|`normalize` | If set to `true`, this metric will calculate the {wikipedia}/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG].
|
||||||
|=======================================================================
|
|=======================================================================
|
||||||
|
|
||||||
|
|
||||||
|
@@ -100,7 +100,7 @@ POST my-index-000001/_search
 <<mapping-date-format,date format>>. <<spatial_datatypes, Spatial fields>>
 accept either `geojson` for http://www.geojson.org[GeoJSON] (the default)
 or `wkt` for
-https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text].
+{wikipedia}/Well-known_text_representation_of_geometry[Well Known Text].
 Other field types do not support the `format` parameter.

 The values are returned as a flat list in the `fields` section in each hit:
@@ -11,7 +11,7 @@
 You do not need to configure any settings to use {ml}. It is enabled by default.

 IMPORTANT: {ml-cap} uses SSE4.2 instructions, so will only work on machines whose
-CPUs https://en.wikipedia.org/wiki/SSE4#Supporting_CPUs[support] SSE4.2. If you
+CPUs {wikipedia}/SSE4#Supporting_CPUs[support] SSE4.2. If you
 run {es} on older hardware you must disable {ml} (by setting `xpack.ml.enabled`
 to `false`).

@@ -77,7 +77,7 @@ heap size check, you must configure the <<heap-size,heap size>>.
 === File descriptor check

 File descriptors are a Unix construct for tracking open "files". In Unix
-though, https://en.wikipedia.org/wiki/Everything_is_a_file[everything is
+though, {wikipedia}/Everything_is_a_file[everything is
 a file]. For example, "files" could be a physical file, a virtual file
 (e.g., `/proc/loadavg`), or network sockets. Elasticsearch requires
 lots of file descriptors (e.g., every shard is composed of multiple
@@ -83,7 +83,7 @@ via <<systemd,systemd>>.
 ==== Systemd configuration

 When using the RPM or Debian packages on systems that use
-https://en.wikipedia.org/wiki/Systemd[systemd], system limits must be
+{wikipedia}/Systemd[systemd], system limits must be
 specified via systemd.

 The systemd service file (`/usr/lib/systemd/system/elasticsearch.service`)
@@ -9,7 +9,7 @@ NOTE: This documentation while trying to be complete, does assume the reader has

 As a general rule, {es-sql} as the name indicates provides a SQL interface to {es}. As such, it follows the SQL terminology and conventions first, whenever possible. However the backing engine itself is {es} for which {es-sql} was purposely created hence why features or concepts that are not available, or cannot be mapped correctly, in SQL appear
 in {es-sql}.
-Last but not least, {es-sql} tries to obey the https://en.wikipedia.org/wiki/Principle_of_least_astonishment[principle of least surprise], though as all things in the world, everything is relative.
+Last but not least, {es-sql} tries to obey the {wikipedia}/Principle_of_least_astonishment[principle of least surprise], though as all things in the world, everything is relative.

 === Mapping concepts across SQL and {es}

@@ -76,7 +76,7 @@ s|Description

 |csv
 |text/csv
-|https://en.wikipedia.org/wiki/Comma-separated_values[Comma-separated values]
+|{wikipedia}/Comma-separated_values[Comma-separated values]

 |json
 |application/json
@@ -84,7 +84,7 @@ s|Description

 |tsv
 |text/tab-separated-values
-|https://en.wikipedia.org/wiki/Tab-separated_values[Tab-separated values]
+|{wikipedia}/Tab-separated_values[Tab-separated values]

 |txt
 |text/plain
@@ -92,7 +92,7 @@ s|Description

 |yaml
 |application/yaml
-|https://en.wikipedia.org/wiki/YAML[YAML] (YAML Ain't Markup Language) human-readable format
+|{wikipedia}/YAML[YAML] (YAML Ain't Markup Language) human-readable format

 3+h| Binary Formats

|
|||||||
|
|
||||||
|smile
|
|smile
|
||||||
|application/smile
|
|application/smile
|
||||||
|https://en.wikipedia.org/wiki/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR
|
|{wikipedia}/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR
|
||||||
|
|
||||||
|===
|
|===
|
||||||
|
|
||||||
|
@@ -25,7 +25,7 @@ AVG(numeric_field) <1>

 *Output*: `double` numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Arithmetic_mean[Average] (arithmetic mean) of input values.
+*Description*: Returns the {wikipedia}/Arithmetic_mean[Average] (arithmetic mean) of input values.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -424,7 +424,7 @@ KURTOSIS(field_name) <1>

 *Description*:

-https://en.wikipedia.org/wiki/Kurtosis[Quantify] the shape of the distribution of input values in the field `field_name`.
+{wikipedia}/Kurtosis[Quantify] the shape of the distribution of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -458,7 +458,7 @@ MAD(field_name) <1>

 *Description*:

-https://en.wikipedia.org/wiki/Median_absolute_deviation[Measure] the variability of the input values in the field `field_name`.
+{wikipedia}/Median_absolute_deviation[Measure] the variability of the input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
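
For intuition, the median absolute deviation can be computed in a few lines; an illustrative sketch, not the {es-sql} implementation:

[source,python]
----
import statistics

def mad(values):
    """Median of each value's absolute distance from the median --
    a robust measure of variability."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

print(mad([1, 1, 2, 2, 4, 6, 9]))  # 1: median is 2, deviations median to 1
----
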
@@ -490,7 +490,7 @@ PERCENTILE(

 *Description*:

-Returns the nth https://en.wikipedia.org/wiki/Percentile[percentile] (represented by `numeric_exp` parameter)
+Returns the nth {wikipedia}/Percentile[percentile] (represented by `numeric_exp` parameter)
 of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
@@ -523,7 +523,7 @@ PERCENTILE_RANK(

 *Description*:

-Returns the nth https://en.wikipedia.org/wiki/Percentile_rank[percentile rank] (represented by `numeric_exp` parameter)
+Returns the nth {wikipedia}/Percentile_rank[percentile rank] (represented by `numeric_exp` parameter)
 of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
@@ -553,7 +553,7 @@ SKEWNESS(field_name) <1>

 *Description*:

-https://en.wikipedia.org/wiki/Skewness[Quantify] the asymmetric distribution of input values in the field `field_name`.
+{wikipedia}/Skewness[Quantify] the asymmetric distribution of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -587,7 +587,7 @@ STDDEV_POP(field_name) <1>

 *Description*:

-Returns the https://en.wikipedia.org/wiki/Standard_deviations[population standard deviation] of input values in the field `field_name`.
+Returns the {wikipedia}/Standard_deviations[population standard deviation] of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -616,7 +616,7 @@ STDDEV_SAMP(field_name) <1>

 *Description*:

-Returns the https://en.wikipedia.org/wiki/Standard_deviations[sample standard deviation] of input values in the field `field_name`.
+Returns the {wikipedia}/Standard_deviations[sample standard deviation] of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
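
The difference between the two flavors is only the divisor (N versus N - 1), which Python's statistics module exposes directly; a quick illustration:

[source,python]
----
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Population standard deviation divides by N (STDDEV_POP);
# sample standard deviation divides by N - 1 (STDDEV_SAMP).
print(statistics.pstdev(data))  # 2.0
print(statistics.stdev(data))   # ~2.138
----
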
@@ -674,7 +674,7 @@ VAR_POP(field_name) <1>

 *Description*:

-Returns the https://en.wikipedia.org/wiki/Variance[population variance] of input values in the field `field_name`.
+Returns the {wikipedia}/Variance[population variance] of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -704,7 +704,7 @@ VAR_SAMP(field_name) <1>

 *Description*:

-Returns the https://en.wikipedia.org/wiki/Variance[sample variance] of input values in the field `field_name`.
+Returns the {wikipedia}/Variance[sample variance] of input values in the field `field_name`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -890,7 +890,7 @@ ISO_DAY_OF_WEEK(datetime_exp) <1>

 *Output*: integer

-*Description*: Extract the day of the week from a date/datetime, following the https://en.wikipedia.org/wiki/ISO_week_date[ISO 8601 standard].
+*Description*: Extract the day of the week from a date/datetime, following the {wikipedia}/ISO_week_date[ISO 8601 standard].
 Monday is `1`, Tuesday is `2`, etc.

 [source, sql]
@@ -913,7 +913,7 @@ ISO_WEEK_OF_YEAR(datetime_exp) <1>

 *Output*: integer

-*Description*: Extract the week of the year from a date/datetime, following https://en.wikipedia.org/wiki/ISO_week_date[ISO 8601 standard]. The first week
+*Description*: Extract the week of the year from a date/datetime, following {wikipedia}/ISO_week_date[ISO 8601 standard]. The first week
 of a year is the first week with a majority (4 or more) of its days in January.

 [source, sql]
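
Python's datetime module implements the same ISO 8601 week-date rules, which makes the behavior easy to check:

[source,python]
----
from datetime import date

# ISO weeks run Monday (1) to Sunday (7); week 1 is the first week
# with a majority of its days in January.
year, week, weekday = date(2021, 1, 1).isocalendar()
print(year, week, weekday)  # 2020 53 5 -- Jan 1, 2021 (a Friday)
                            # still belongs to week 53 of 2020
----
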
@@ -25,7 +25,7 @@ ABS(numeric_exp) <1>

 *Output*: numeric

-*Description*: Returns the https://en.wikipedia.org/wiki/Absolute_value[absolute value] of `numeric_exp`. The return type is the same as the input type.
+*Description*: Returns the {wikipedia}/Absolute_value[absolute value] of `numeric_exp`. The return type is the same as the input type.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -47,7 +47,7 @@ CBRT(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Cube_root[cube root] of `numeric_exp`.
+*Description*: Returns the {wikipedia}/Cube_root[cube root] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -89,7 +89,7 @@ E()

 *Output*: `2.718281828459045`

-*Description*: Returns https://en.wikipedia.org/wiki/E_%28mathematical_constant%29[Euler's number].
+*Description*: Returns {wikipedia}/E_%28mathematical_constant%29[Euler's number].

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -111,7 +111,7 @@ EXP(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns https://en.wikipedia.org/wiki/Exponential_function[Euler's number at the power] of `numeric_exp` e^numeric_exp^.
+*Description*: Returns {wikipedia}/Exponential_function[Euler's number at the power] of `numeric_exp` e^numeric_exp^.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -177,7 +177,7 @@ LOG(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Natural_logarithm[natural logarithm] of `numeric_exp`.
+*Description*: Returns the {wikipedia}/Natural_logarithm[natural logarithm] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -199,7 +199,7 @@ LOG10(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Common_logarithm[base 10 logarithm] of `numeric_exp`.
+*Description*: Returns the {wikipedia}/Common_logarithm[base 10 logarithm] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -219,7 +219,7 @@ PI()

 *Output*: `3.141592653589793`

-*Description*: Returns https://en.wikipedia.org/wiki/Pi[PI number].
+*Description*: Returns {wikipedia}/Pi[PI number].

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -348,7 +348,7 @@ SQRT(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns https://en.wikipedia.org/wiki/Square_root[square root] of `numeric_exp`.
+*Description*: Returns {wikipedia}/Square_root[square root] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -406,7 +406,7 @@ ACOS(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arccosine] of `numeric_exp` as an angle, expressed in radians.
+*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arccosine] of `numeric_exp` as an angle, expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -428,7 +428,7 @@ ASIN(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arcsine] of `numeric_exp` as an angle, expressed in radians.
+*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arcsine] of `numeric_exp` as an angle, expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -450,7 +450,7 @@ ATAN(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arctangent] of `numeric_exp` as an angle, expressed in radians.
+*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arctangent] of `numeric_exp` as an angle, expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -475,7 +475,7 @@ ATAN2(

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Atan2[arctangent of the `ordinate` and `abscisa` coordinates] specified as an angle, expressed in radians.
+*Description*: Returns the {wikipedia}/Atan2[arctangent of the `ordinate` and `abscisa` coordinates] specified as an angle, expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -497,7 +497,7 @@ COS(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#cosine[cosine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.
+*Description*: Returns the {wikipedia}/Trigonometric_functions#cosine[cosine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -519,7 +519,7 @@ COSH(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Hyperbolic_function[hyperbolic cosine] of `numeric_exp`.
+*Description*: Returns the {wikipedia}/Hyperbolic_function[hyperbolic cosine] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -541,7 +541,7 @@ COT(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#Cosecant,_secant,_and_cotangent[cotangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.
+*Description*: Returns the {wikipedia}/Trigonometric_functions#Cosecant,_secant,_and_cotangent[cotangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -563,8 +563,8 @@ DEGREES(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Convert from https://en.wikipedia.org/wiki/Radian[radians]
-to https://en.wikipedia.org/wiki/Degree_(angle)[degrees].
+*Description*: Convert from {wikipedia}/Radian[radians]
+to {wikipedia}/Degree_(angle)[degrees].

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -586,8 +586,8 @@ RADIANS(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Convert from https://en.wikipedia.org/wiki/Degree_(angle)[degrees]
-to https://en.wikipedia.org/wiki/Radian[radians].
+*Description*: Convert from {wikipedia}/Degree_(angle)[degrees]
+to {wikipedia}/Radian[radians].

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
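
Both conversions are the familiar factor of 180/pi, as a quick check shows:

[source,python]
----
import math

# degrees = radians * 180 / pi, and the inverse for RADIANS
print(math.degrees(math.pi / 2))  # 90.0
print(math.radians(180))          # 3.141592653589793
----
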
@@ -609,7 +609,7 @@ SIN(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#sine[sine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.
+*Description*: Returns the {wikipedia}/Trigonometric_functions#sine[sine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -631,7 +631,7 @@ SINH(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Hyperbolic_function[hyperbolic sine] of `numeric_exp`.
+*Description*: Returns the {wikipedia}/Hyperbolic_function[hyperbolic sine] of `numeric_exp`.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -653,7 +653,7 @@ TAN(numeric_exp) <1>

 *Output*: double numeric value

-*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#tangent[tangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.
+*Description*: Returns the {wikipedia}/Trigonometric_functions#tangent[tangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians.

 ["source","sql",subs="attributes,macros"]
 --------------------------------------------------
@@ -43,7 +43,7 @@ For more information about the native realm, see
 [[username-validation]]
 NOTE: Usernames must be at least 1 and no more than 1024 characters. They can
 contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation, and
-printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. Leading or trailing whitespace is not allowed.
+printable symbols in the {wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. Leading or trailing whitespace is not allowed.

 --

@@ -35,7 +35,7 @@ A role is defined by the following JSON structure:
 [[valid-role-name]]
 NOTE: Role names must be at least 1 and no more than 1024 characters. They can
 contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces,
-punctuation, and printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
+punctuation, and printable symbols in the {wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
 Leading or trailing whitespace is not allowed.

 [[roles-indices-priv]]
@@ -2,7 +2,7 @@
 === HTTP/REST clients and security

 The {es} {security-features} work with standard HTTP
-https://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication]
+{wikipedia}/Basic_access_authentication[basic authentication]
 headers to authenticate users. Since Elasticsearch is stateless, this header must
 be sent with every request:

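
Constructing such a header by hand is one line of base64; the credentials below are placeholders:

[source,python]
----
import base64

# HTTP basic authentication is "user:password" base64-encoded in an
# Authorization header, sent with every request since {es} is stateless.
user, password = "elastic", "changeme"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers)
----
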
@@ -442,7 +442,7 @@ bin/elasticsearch-keystore add xpack.notification.email.account.exchange_account
 ===== Configuring HTML sanitization options

 The `email` action supports sending messages with an HTML body. However, for
-security reasons, {watcher} https://en.wikipedia.org/wiki/HTML_sanitization[sanitizes]
+security reasons, {watcher} {wikipedia}/HTML_sanitization[sanitizes]
 the HTML.

 You can control which HTML features are allowed or disallowed by configuring the