Commit Graph

2028 Commits

Author SHA1 Message Date
Dan Hermann a84ff81743
Data stream support for get field mappings API (#58488) (#58766) 2020-06-30 13:45:04 -05:00
Julie Tibshirani ab65a57d70
Merge mappings for composable index templates (#58709)
This PR implements recursive mapping merging for composable index templates.

When creating an index, we perform the following:
* Add each component template mapping in order, merging each one in after the
last.
* Merge in the index template mappings (if present).
* Merge in the mappings on the index request itself (if present).

Some principles:
* All 'structural' changes are disallowed (but everything else is fine). An
object mapper can never be changed between `type: object` and `type: nested`. A
field mapper can never be changed to an object mapper, and vice versa.
* Generally, each section is merged recursively. This includes `object`
mappings, as well as root options like `dynamic_templates` and `meta`. Once we
reach 'leaf components' like field definitions, they always overwrite an
existing one instead of being merged.

Relates to #53101.
2020-06-30 08:01:37 -07:00
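As a hedged illustration of the merge rules described above (template and field names below are hypothetical, not taken from the PR): two component templates that each contribute a leaf field under the same `user` object are merged recursively, so an index created from the composed template ends up with both fields.

```
PUT _component_template/ct_user_id
{
  "template": {
    "mappings": {
      "properties": {
        "user": {
          "properties": {
            "id": { "type": "keyword" }
          }
        }
      }
    }
  }
}

PUT _component_template/ct_user_name
{
  "template": {
    "mappings": {
      "properties": {
        "user": {
          "properties": {
            "name": { "type": "text" }
          }
        }
      }
    }
  }
}

PUT _index_template/users_template
{
  "index_patterns": ["users-*"],
  "composed_of": ["ct_user_id", "ct_user_name"]
}
```

Because the `user` object section merges recursively, the resulting mapping contains both `user.id` and `user.name`; had both component templates defined `user.id`, the leaf definition from the later template would simply overwrite the earlier one.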
Yannick Welsch b885cbff1a
Add index block api (#58716)
Adds an API for putting an index block in place. For write blocks, it also ensures that, once the call has successfully returned to
the user, all shards of the index are properly accounting for the block, for example that all in-flight writes to the index have
been completed after adding the write block.

This API allows coordinating more complex workflows where it is crucial that an index is no longer receiving writes after
the API completes, which is useful, for example, when marking an index as read-only during an upgrade in order to reindex its
documents.
2020-06-30 14:06:52 +02:00
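A minimal sketch of a call to the new API, assuming the 7.9-era `_block` endpoint and an illustrative index name:

```
PUT /my-index-000001/_block/write
```

Once the call returns successfully, every shard of `my-index-000001` is accounting for the write block, so no in-flight write can still land on the index.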
Nik Everett 03e6d1b535
Add Variable Width Histogram Aggregation (backport of #42035) (#58440)
Implements a new histogram aggregation called `variable_width_histogram` which
dynamically determines bucket intervals based on document groupings. These
groups are determined by running a one-pass clustering algorithm on each shard
and then reducing each shard's clusters using an agglomerative
clustering algorithm.

This PR addresses #9572.

The shard-level clustering is done in one pass to minimize memory overhead. The
algorithm was lightly inspired by
[this paper](https://ieeexplore.ieee.org/abstract/document/1198387). It fetches
a small number of documents to sample the data and determine initial clusters.
Subsequent documents are then placed into one of these clusters, or into a new
one if they are outliers. This algorithm is described in more detail in the
aggregation's docs.

At reduce time, a
[hierarchical agglomerative clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering)
algorithm inspired by [this paper](https://arxiv.org/abs/1802.00304)
continually merges the closest buckets from all shards (based on their
centroids) until the target number of buckets is reached.

The final values produced by this aggregation are approximate. Each bucket's
min value is used as its key in the histogram. Furthermore, buckets are merged
based on their centroids and not their bounds. So it is possible that adjacent
buckets will overlap after reduction. Because each bucket's key is its min,
this overlap is not shown in the final histogram. However, when such overlap
occurs, we set the key of the bucket with the larger centroid to the midpoint
between its minimum and the smaller bucket’s maximum:
`min[large] = (min[large] + max[small]) / 2`. This heuristic is expected to
increase the accuracy of the clustering.

Nodes are unable to share centroids during the shard-level clustering phase. In
the future, resolving https://github.com/elastic/elasticsearch/issues/50863
would let us solve this issue.

It doesn’t make sense for this aggregation to support the `min_doc_count`
parameter, since clusters are determined dynamically. The `order` parameter is
not supported here to keep this large PR from becoming too complex.

Co-authored-by: James Dorfman <jamesdorfman@users.noreply.github.com>
2020-06-25 11:40:47 -04:00
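A hedged sketch of the new aggregation in a search request (index and field names are illustrative); `buckets` is the target number of buckets, which the clustering approximates rather than guarantees:

```
GET /sales/_search
{
  "size": 0,
  "aggs": {
    "prices": {
      "variable_width_histogram": {
        "field": "price",
        "buckets": 8
      }
    }
  }
}
```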
Rory Hunter ebe1d9cdbe Update rest-api-spec keyword list
Follow-up to 35aecf4c9aa. Somehow I missed the fact that there's an ILM
API named `retry`, which is a keyword in Ruby. I've removed it from the
keywords list.
2020-06-25 09:55:13 +01:00
Rory Hunter e413de4203 Validate that REST API names do not contain keywords (#58452)
If an API name (or components of a name) overlaps with a reserved word in
the programming language for an ES client, then it's possible that the code
that is generated from the API will not compile. This PR adds validation to
check for such overlaps.
2020-06-25 09:48:54 +01:00
Martijn van Groningen f4fad9c65a
Re-enable data streams yaml tests in bwc mode (#58500)
Backport of #58403 to 7.x branch.
2020-06-24 16:59:51 +02:00
Martijn van Groningen 7dda9934f9
Keep track of timestamp_field mapping as part of a data stream (#58400)
Backporting #58096 to 7.x branch.
Relates to #53100

* use mapping source directly instead of using mapper service to extract the relevant mapping details
* moved assertion to TimestampField class and added helper method for tests
* Improved logic that inserts the timestamp field mapping into a mapping.
If the timestamp field path consisted of object fields and
the final mapping did not contain the parent field, then an error
occurred because the prior logic assumed that the object field existed.
2020-06-22 17:46:38 +02:00
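For illustration only (the field names are hypothetical): a timestamp field path such as `event.ingested` consists of object fields, so inserting its mapping has to create the intermediate `event` object when the final mapping does not already contain it:

```
{
  "properties": {
    "event": {
      "properties": {
        "ingested": { "type": "date" }
      }
    }
  }
}
```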
Jim Ferenczi 82db0b575c
Allow index filtering in field capabilities API (#57276) (#58299)
This change allows using an `index_filter` in the
field capabilities API. Indices are filtered from
the response if the provided query rewrites to `match_none`
on every shard:

````
GET metrics-*/_field_caps?fields=*
{
  "index_filter": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gt": "2019"
            }
          }
        }
      ]
    }
  }
}
````

The filtering is done on a best-effort basis: it uses the can-match phase
to rewrite queries to `match_none` instead of fully executing the request.
The first shard that can match the filter is used to create the field
capabilities response for the entire index.

Closes #56195
2020-06-18 10:23:26 +02:00
Rory Hunter e065d6cc91 Rename dangling index APIs (#58266)
The dangling_indices.import API name could cause issues in the client
libs because import is a reserved word in many languages. Rename the
API to avoid this, and rename the other APIs for consistency.

Related to #48366.
2020-06-18 08:58:32 +01:00
Nik Everett ab2c6d9696
Save memory when auto_date_histogram is not on top (backport of #57304) (#58190)
This builds an `auto_date_histogram` aggregator that natively aggregates
from many buckets and uses it in the cases where `auto_date_histogram` used to
rely on `asMultiBucketAggregator`, which should save a significant amount of
memory in those cases. In particular, this happens when
`auto_date_histogram` is a sub-aggregator of a multi-bucketing aggregator
like `terms` or `histogram` or `filters`. For the most part we preserve
the original implementation when `auto_date_histogram` only collects from
a single bucket.

It isn't possible to "just port the aggregator" without taking a pretty
significant performance hit because we used to rewrite all of the
buckets every time we switched to a coarser rounding
configuration. Without some major surgery to how we delay sub-aggs
we'd end up rewriting the delay list zillions of times if there are many
buckets.

The multi-bucket version of the aggregator has a "budget" of "wasted"
buckets and only rewrites all of the buckets when we exceed that budget.
Now that we don't rebucket every time we increase the rounding we can no
longer get an accurate count of the number of buckets! So instead the
aggregator uses an estimate of the number of buckets to trigger switching
to a coarser rounding. This estimate is likely to be *terrible* when
buckets are far apart compared to the rounding, so it also uses the
difference between the first and last bucket to trigger switching to a
coarser rounding, which covers for the shortcomings of the bucket
estimation technique pretty well. It also causes the aggregator to emit
fewer buckets in cases where they'd be reduced together on the
coordinating node. This is wonderful, but probably fairly rare.

All of that does buy us some speed improvements when the aggregator is
a child of a multi-bucket aggregator:
* Without metrics or time zone: 25% faster
* With metrics: 15% faster
* With time zone: 22% faster

Relates to #56487
2020-06-17 08:48:41 -04:00
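For context, a hedged example of the kind of request this change affects, with `auto_date_histogram` collecting inside a multi-bucketing `terms` aggregation (index and field names are illustrative):

```
GET /logs/_search
{
  "size": 0,
  "aggs": {
    "by_host": {
      "terms": { "field": "host.name" },
      "aggs": {
        "over_time": {
          "auto_date_histogram": {
            "field": "@timestamp",
            "buckets": 10
          }
        }
      }
    }
  }
}
```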
Rory Hunter 03369e0980
Implement dangling indices API (#58176)
Backport of #50920. Part of #48366. Implement an API for listing,
importing and deleting dangling indices.

Co-authored-by: David Turner <david.turner@elastic.co>
2020-06-16 21:50:38 +01:00
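A sketch of the resulting endpoints as they appear in the 7.9-era docs (the UUID is a placeholder; `accept_data_loss` has to be set explicitly for import and delete):

```
GET /_dangling

POST /_dangling/<index-uuid>?accept_data_loss=true

DELETE /_dangling/<index-uuid>?accept_data_loss=true
```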
Dan Hermann 911d46370e
Prohibit clone, shrink, and split on a data stream's write index 2020-06-16 10:53:20 -05:00
Dan Hermann 17f3318732
[7.x] Resolve index API (#58037) 2020-06-12 15:41:32 -05:00
Nik Everett 5056f2792d
Skip max_buckets test when it is flaky (#58038)
Before #57042 the max_buckets test would consistently pass because the
request would consistently fail. In particular, the request would fail on
the data node. After #57042 it only fails on the coordinating node. When
the max_buckets test is run in a mixed version cluster it consistently
fails on *either* the data node or the coordinating node, except when
the coordinating node is missing #43095. In that case, if one data
node has #57042 and one does not, *and* the one that doesn't gets the
request first and fails it as expected, then the coordinating node
retries the request on the node with #57042. When that happens the
request fails mysteriously with "partial shard failures" as the error
message, but no partial failures are reported. This is *exactly* the bug
fixed in #43095.

This updates the test to be skipped in mixed version clusters without
#43095 because they *sometimes* fail the test spuriously. The request
fails in those cases, just like we expect, but with a mysterious error
message.

Closes #57657
2020-06-12 15:06:56 -04:00
Mayya Sharipova 8bd0147ba7
Correct how meta-field is defined for pre 7.8 hits (#57951)
We keep a static list of meta-fields, META_FIELDS_BEFORE_7_8,
as it was before.
This is done to ensure backwards compatibility with pre-7.8 nodes.

Closes #57831
2020-06-12 09:39:53 -04:00
Martijn van Groningen 01d8bb8cfa
Enforce valid field mapping exists for timestamp_field in templates. (#58036)
Backport of #57741 to 7.x branch.

Relates to #53100
2020-06-12 15:24:42 +02:00
Martijn van Groningen f4199f2ee0
Prohibit append-only writes targeting backing indices directly. (#58025)
Backport of #57788 to 7.x branch.

Append-only writes can only target the corresponding data stream.

Relates to #53100
2020-06-12 13:17:55 +02:00
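A minimal sketch of the distinction (data stream and backing index names are illustrative): append-only writes go through the data stream name, while writing the same document directly to a backing index is now rejected.

```
# Allowed: append a document through the data stream itself
POST /my-data-stream/_doc?op_type=create
{
  "@timestamp": "2020-06-12T13:00:00Z",
  "message": "hello"
}

# Rejected after this change: targeting a backing index
# (e.g. .ds-my-data-stream-000001) directly with an append-only write
```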
Russ Cam f51f9b19c7 Mark Component and Index template APIs as experimental (#57910)
This commit marks the Component Template and
Index Template APIs as experimental.

(cherry picked from commit a85f2bede8eb632e3837ac7630f8dfdf46da6b52)
2020-06-10 14:07:09 +10:00
Dan Hermann b501b282f8
Change default backing index naming scheme 2020-06-09 09:31:34 -05:00
Lee Hinman 6e8cf0973f
[7.x] Disallow merging existing mapping field definitions in templates (#57701) (#57822)
Backports the following commits to 7.x:

    Disallow merging existing mapping field definitions in templates (#57701)
2020-06-08 12:56:09 -06:00
Benjamin Trent 16fcb64c99
Test mute (#57832) 2020-06-08 14:03:16 -04:00
Nik Everett ee0ce8ffaf
Fix a bug with missing fields in sig_terms (#57757)
When you run a `significant_terms` aggregation on a field and it *is*
mapped but there aren't any values for it, then the count of the
documents that match the query on that shard still has to be added to
the overall doc count. I broke that in #57361. This fixes that.

Closes #57402
2020-06-08 10:07:14 -04:00
Dan Hermann 3fe93e24a6
[7.x] Prohibit closing the write index for a data stream (#57740) 2020-06-05 11:14:43 -05:00
Nik Everett 94b3eed6be Re-mute test
Tracked in #57402
2020-06-05 10:52:24 -04:00
Nik Everett de27253d87 Drop skip on test after backporting fix
Fixed in 98c379c507.

Closes #57402
2020-06-04 16:04:18 -04:00
Nik Everett 98c379c507
Merge remaining sig_terms into terms (#57397) (#57687)
Merges the remaining implementation of `significant_terms` into `terms`
so that we can more easily make them work properly without
`asMultiBucketAggregator`, which *should* save memory and speed them up.

Relates #56487
2020-06-04 14:32:32 -04:00
Nik Everett 97c06816a4
Fix an optimization in terms agg (backport #57438) (#57547)
When the `terms` agg runs against strings and uses global ordinals it
has an optimization when it collects segments that only ever have a
single value for the particular string. This is *very* common. But I
broke it in #57241. This fixes that optimization and adds `debug`
information that you can use to see how often we collect segments of
each type. And adds a test to make sure that I don't break the
optimization again.

We also had a specialization for when there isn't a filter on the terms
to aggregate. I had removed that specialization in #57241 which resulted
in some slow down as well. This adds it back but in a more clear way.
And, hopefully, a way that is marginally faster when there *is* a
filter.

Closes #57407
2020-06-02 14:57:45 -04:00
David Kyle 4d54bb3917
Correct expected warning in indices.create yml tests (#57409)
v2 index templates are now called "composable", v1 templates are now "legacy"
2020-06-01 19:22:30 +01:00
David Kyle 82f27ef128
Mute msearch yml test (#57406)
For #57402
2020-06-01 13:02:46 +01:00
Nik Everett 4263c25b2f
Save memory when histogram agg is not on top (backport of #57277) (#57377)
This saves some memory when the `histogram` aggregation is not a top
level aggregation by dropping `asMultiBucketAggregator` in favor of
natively implementing multi-bucket storage in the aggregator. For the
most part this just uses the `LongKeyedBucketOrds` that we built the
first time we did this.
2020-05-29 15:07:37 -04:00
Martijn van Groningen d8928b3f48
fixed allowed warnings in yaml test 2020-05-29 13:37:49 +02:00
Martijn van Groningen 04ef39da77
Change cluster info actions to be able to resolve data streams. (#57343)
Backport of #56878 to 7.x branch.

With this change the following APIs will be able to resolve data streams:
get index, get mappings and ilm explain APIs.

Relates to #53100
2020-05-29 12:17:53 +02:00
Russ Cam 2a9073d4c1 Deprecate local param in get_mapping.json (#57265)
Relates: elastic/elasticsearch#55014

This commit deprecates the local param in get_mapping.json.
This parameter is a no-op and field mappings are always retrieved locally.

(cherry picked from commit 0b041cccd894f01d723fb2979f70c1cf279700a6)
2020-05-29 12:25:41 +10:00
Nik Everett b9fe10866e
Make global ords terms simpler to understand (backport of #57241) (#57311)
When the `terms` agg operates on non-numeric data it can collect it via
global ordinals. It actually has two separate collection strategies:
one "dense" and one "remapping". Each of *those* strategies has two
"iteration" strategies that it uses to build buckets, depending on
whether or not we need buckets with `0` docs in them. Previously this
was done with several `null` checks and never really explained. This
change replaces those checks with two `CollectionStrategy` classes which
have good stuff like documentation.
2020-05-28 16:52:35 -04:00
Martijn van Groningen 225ccd1cfa
Ensure template exists when creating data stream (#57275)
Backporting #56888 to 7.x branch.

Limit the creation of data streams to namespaces that have a composable template with a data stream definition.

This way we ensure that mappings/settings have been specified and will be used at data stream creation and data stream rollover.

Also remove the `timestamp_field` parameter from the create data stream request and
let the create data stream API resolve the timestamp field
from the data stream definition snippet inside a composable template.

Relates to #53100
2020-05-28 15:08:25 +02:00
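A hedged sketch of the new requirement: the data stream's namespace must be covered by a composable template carrying a data stream definition before the stream can be created (names are illustrative; the exact shape of the `data_stream` block evolved during 7.x, and per the message above it still carried the timestamp field definition at this point):

```
PUT /_index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "data_stream": {},
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}

PUT /_data_stream/logs-foo
```

Creating `logs-foo` without a matching template that defines a data stream now fails, which guarantees that mappings and settings are in place for both creation and rollover.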
Dan Hermann 2738998ebb
Limit _cat/indices test to versions with fix (#57244) (#57256) 2020-05-27 16:57:24 -05:00
Lee Hinman c0f732b9f6
[7.x] Rename template V2 classes to ComposableTemplate (#57183) (#57232)
Backports the following commits to 7.x:

    Rename template V2 classes to ComposableTemplate (#57183)
2020-05-27 11:01:59 -06:00
Nik Everett 4d5be7c817
Save memory on numeric sig terms when not top (backport of #56789) (#57221)
This saves memory when running numeric significant terms that are not
at the top level by merging their collection into numeric terms and relying
on the optimization that we made in #55873.
2020-05-27 12:03:28 -04:00
Jake Landis 920677af6f
7.x only REST specification fixes (#56736)
Fixes for the REST specification specific to 7.x

* remove ignore "cat.thread_pool.json" and add the "" as valid option. #55984 deprecated this field since it these params here have no effect on this specific API
* remove ignore "indices.put_mapping.json" by adding the required / in the path to pass validation.
2020-05-26 12:33:57 -05:00
Dan Hermann c5f61fe24c
Handle exceptions when building _cat/indices response 2020-05-25 09:59:24 -05:00
James Rodewig b3426dd558
[DOCS] Add delete snapshot repo API docs (#57043)
Changes:

* Adds API reference docs for the delete snapshot repo API.

* Corrects an error in the delete snapshot repo API spec. Comma-separated
repository names are not supported.

* Relocates the existing delete snapshot repo API example docs.
2020-05-21 14:47:07 -04:00
Nik Everett 8b9c4eb3e0
Save memory when date_histogram is not on top (#56921) (#56960)
When `date_histogram` is a sub-aggregator it used to allocate a bunch of
objects for every one of its parent's buckets. This uses the data
structures that we built in #55873 to rework the `date_histogram`
aggregator and avoid all of that allocation.

Part of #56487
2020-05-19 17:36:55 -04:00
Ioannis Kakavas 0eb81870de
Adjust version mute for reload secure settings (#56938) (#56951)
We can safely run the reload_secure_settings tests
after 7.7.0; the relevant changes have long since been
backported there.
2020-05-20 00:24:29 +03:00
Ioannis Kakavas 38e55cd348
Adjust reload keystore test to pass in FIPS (#56889) (#56940)
In the KeystoreWrapper class we determine whether a failure to decrypt a
given keystore is caused by a wrong password based on the exception
that the SunJCE implementation of AES
throws (AEADBadTagException). Other implementations from other
Security Providers fail with a different exception, and as such we
cannot differentiate between a corrupted file and a wrong password
in a foolproof way.
As in other tests, such as
KeyStoreWrapperTests#testDecryptKeyStoreWithWrongPassword,
we handle this by matching both possible exception messages.
2020-05-19 18:11:43 +03:00
James Rodewig ecf6d8f974
[DOCS] Fix component template API link in JSON specs (#56884) (#56945)
Co-authored-by: Tomas Della Vedova <delvedor@users.noreply.github.com>
2020-05-19 11:00:15 -04:00
Lee Hinman e208925465
[7.x] Add template simulation API for simulating template composition (#56842) (#56924) 2020-05-19 08:12:21 -06:00
Dan Hermann 66871c5342
[7.x] Rename endpoint from plural "_data_streams" to singular "_data_stream" (#56825) 2020-05-15 10:27:53 -05:00
Ryan Ernst 9fb80d3827
Move publishing configuration to a separate plugin (#56727)
This is another part of the breakup of the massive BuildPlugin. This PR
moves the code for configuring publications to a separate plugin. Most
of the time these publications are jar files, but this also supports the
zip publication we have for integ tests.
2020-05-14 20:23:07 -07:00
Lee Hinman a73d7d9e2b
[7.x] Don't allow invalid template combinations (#56397) (#56795)
Backports the following commits to 7.x:

- Don't allow invalid template combinations (#56397)
2020-05-14 16:20:53 -06:00