This aggregation performs normalization of metrics
for a given series of data in the form of bucket values.
The aggregation supports the following normalizations:
- rescale 0-1
- rescale 0-100
- percentage of sum
- mean normalization
- z-score normalization
- softmax normalization
The normalization to apply is specified in the normalize agg's `normalizer` field.
For example:
```
{
  "normalize": {
    "buckets_path": <>,
    "normalizer": "percent"
  }
}
```
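For a fuller picture, here is a minimal sketch of how this could be embedded in a search request; the index name, field names, and the sibling `sum` metric are illustrative assumptions, and the `normalizer` value simply mirrors the snippet above:
```
POST /sales/_search?size=0
{
  "aggs": {
    "monthly": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "month" },
      "aggs": {
        "total": { "sum": { "field": "price" } },
        "total_normalized": {
          "normalize": {
            "buckets_path": "total",
            "normalizer": "percent"
          }
        }
      }
    }
  }
}
```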
* [DOCS] Add info about ILM and unallocated shards.
* Incorporated review feedback.
* Update docs/reference/ilm/actions/ilm-allocate.asciidoc
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
* Apply suggestions from code review
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
* Fix xref
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
This adds a few things to the `breakdown` of the profiler:
* `histogram` aggregations now contain `total_buckets` which is the
count of buckets that they collected. This could be useful when
debugging a histogram inside of another bucketing agg that is fairly
selective.
* All bucketing aggs that can delay their sub-aggregations will now add
a list of delayed sub-aggregations. This is useful because we
sometimes have fairly involved logic around which sub-aggregations get
delayed and this will save you from having to guess.
* Aggregations wrapped in the `MultiBucketAggregatorWrapper` can't
accurately add anything to the breakdown. Instead, the wrapper
adds a marker entry `"multi_bucket_aggregator_wrapper": true` so we
can quickly pick out such aggregations when debugging.
It also fixes a bug where `_count` breakdown entries were contributing
to the overall `time_in_nanos`. They didn't add a large amount of time,
so it is unlikely that this caused a big problem, but I fixed it while I was there.
To support the arbitrary breakdown data this reworks the profiler so
that the `breakdown` can contain any data that is supported by
`StreamOutput#writeGenericValue(Object)` and
`XContentBuilder#value(Object)`.
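Purely as an illustration of the shape of this data (not copied from a real response; the `delayed_aggs` key name and the agg names are assumptions), the new breakdown entries could look something like:
```
{
  "type": "NumericHistogramAggregator",
  "description": "prices",
  "time_in_nanos": 1234567,
  "breakdown": {
    "collect": 1000000,
    "collect_count": 120,
    "total_buckets": 10,
    "delayed_aggs": [ "max_price" ]
  }
}
```
An aggregation wrapped in the `MultiBucketAggregatorWrapper` would instead carry the `"multi_bucket_aggregator_wrapper": true` marker in its breakdown.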
This optional parameter can only be a string. To test out a transient custom
analysis chain, users are expected to use the 'tokenizer', 'filter', and
'char_filter' parameters.
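As an illustration, a transient analysis chain built from those parameters might look like this (the particular tokenizer and filters are just an example combination):
```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase" ],
  "char_filter": [ "html_strip" ],
  "text": "<p>Some Sample Text</p>"
}
```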
Today we report some statistics in terms of Lucene-level documents, which
differ from Elasticsearch-level documents in a number of ways and include
things like document tombstones which users cannot directly observe. This
commit clarifies the internal nature of these statistics.
Closes #56497
The docs pattern URL was using `*`, which means zero or more, instead
of `?`, which means zero or one. The pattern URL returned in error
messages was not in sync with the one in the docs.
Fixes: #56476
(cherry picked from commit 1a5945c3962cdda21482f4b0b3e0ca508534c2c4)
* [DOCS] Promote cron expressions info from Watcher to a separate topic.
* Fix table error
* Fixed xref
* Apply suggestions from code review
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
* Incorporated review feedback
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Since 'system' indices will carry special meaning in the future, this commit
removes "system" from the name to avoid confusion. (Technically,
these indices will be hidden, not system indices.)
* QL: case sensitive support in EQL (#56404)
* adds a generic startsWith function to QL
* modifies the existing EQL startsWith function to be aware of case
sensitivity
* improves the existing EQL startsWith function to use a prefix query
when the function is used in a case sensitive context. The same improvement
is used in SQL's newly added STARTS_WITH function.
* adds case sensitivity to the EQL configuration through a case_sensitive
parameter in the EQL request, as established in #54411.
The case_sensitive parameter can be specified when running queries
(the default is case insensitive), as shown in the sketch after this list.
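A minimal sketch of a request using the new flag; the index name and the field value are illustrative assumptions:
```
GET /my-index/_eql/search
{
  "case_sensitive": true,
  "query": "process where startsWith(process.name, \"Power\")"
}
```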
(cherry picked from commit ee5a09ea840167566e34c28c8225dc38bc6a7ae8)
Similar to what the moving function aggregation does, except it merges windows of percentiles
sketches together instead of cumulatively merging final metrics.
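A minimal sketch of how such a pipeline aggregation might be wired up, assuming it follows the moving function conventions (a `buckets_path` pointing at a percentiles aggregation and a `window` size); the `moving_percentiles` agg name, index, and field names are assumptions here:
```
POST /latency/_search?size=0
{
  "aggs": {
    "per_day": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "day" },
      "aggs": {
        "load_time_percentiles": {
          "percentiles": { "field": "load_time" }
        },
        "smoothed_percentiles": {
          "moving_percentiles": {
            "buckets_path": "load_time_percentiles",
            "window": 10
          }
        }
      }
    }
  }
}
```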
This commit removes the `prefer_v2_templates` flag and setting. This was a brief setting that
allowed specifying whether V1 or V2 template should be used when an index is created. It has been
removed in favor of V2 templates always having priority.
Relates to #53101
Resolves #56528
This is not a breaking change because this flag was never in a released version.
* [DOCS] Align with ILM changes.
* Apply suggestions from code review
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
* Incorporated review comments.
Changes:
* Moves the document request body parameters for the search API
from the Request body search page to the Search API reference page.
* Relocates a search request body example from the Request body search
page to the Search API reference page.
* Adds a note to any duplicated query and request body parameters.
* Add xpack setting deprecations to deprecation API
The deprecated settings showed up in the deprecation log file by
default, but I did not add them to the deprecation API. This commit
fixes that. Now if you use one of the deprecated basic feature
enablement settings, calling `_migration/deprecations` will inform you of
that fact.
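For example, assuming the deprecation info API's usual response shape, a deprecated enablement setting might be reported roughly like this (the setting name, message text, and URL are illustrative, and the response is trimmed):
```
GET /_migration/deprecations

{
  "cluster_settings": [],
  "node_settings": [
    {
      "level": "warning",
      "message": "setting [xpack.monitoring.enabled] is deprecated",
      "url": "https://www.elastic.co/guide/...",
      "details": "the setting will be removed in a future release"
    }
  ],
  "index_settings": {},
  "ml_settings": []
}
```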
* Remove incorrectly backported settings documents
It seems that I backported these docs to the wrong place in #56061,
in #55980, and in #56167. I hope they're in the right place now.
Co-authored-by: debadair <debadair@elastic.co>