Commit Graph

2360 Commits

Author SHA1 Message Date
Yannick Welsch a610513ec7 Provide repository-level stats for searchable snapshots (#55051)
Provides basic repository-level stats that will allow us to get some insight into how many
requests are actually being made by the underlying SDK. Currently only tracks GET and LIST
calls for S3 repositories. Most of the code is unfortunately boilerplate to add a new endpoint
that will help us better understand some of the low-level dynamics of searchable snapshots.
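For illustration, a minimal sketch of per-operation request counters of the kind described above (hypothetical names such as `RepositoryStatsSketch`; not the actual repository stats implementation):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch, not the real implementation: count requests per operation type.
public class RepositoryStatsSketch {
    private final Map<String, LongAdder> requestCounts = new ConcurrentHashMap<>();

    // Called whenever the underlying SDK issues a request, e.g. "GET" or "LIST".
    public void trackRequest(String operation) {
        requestCounts.computeIfAbsent(operation, k -> new LongAdder()).increment();
    }

    // Read side, e.g. for exposing the counters through a stats endpoint.
    public long count(String operation) {
        LongAdder adder = requestCounts.get(operation);
        return adder == null ? 0 : adder.sum();
    }
}
```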
2020-04-14 14:34:08 +02:00
William Brafford 52bebec51f
NodeInfo response should use a collection rather than fields (#54460) (#55132)
This is a first cut at giving NodeInfo the ability to carry a flexible
list of heterogeneous info responses. The trick is to be able to
serialize and deserialize an arbitrary list of blocks of information. It
is convenient to be able to deserialize into usable Java objects so that
we can aggregate nodes stats for the cluster stats endpoint.

In order to provide a little bit of clarity about which objects can and
can't be used as info blocks, I've introduced a new interface called
"ReportingService."

I have removed the hard-coded getters (e.g., getOs()) in favor of a
flexible method that can return heterogeneous kinds of info blocks
(e.g., getInfo(OsInfo.class)). Taking a class as an argument removes the
need to cast in the client code.
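As an illustration of the class-token lookup pattern described above, here is a minimal self-contained sketch (names such as `NodeInfoSketch` and `InfoBlock` are hypothetical, not the actual Elasticsearch classes):
```
import java.util.HashMap;
import java.util.Map;

public class NodeInfoSketch {
    // Marker interface standing in for the heterogeneous info blocks.
    public interface InfoBlock {}

    private final Map<Class<? extends InfoBlock>, InfoBlock> infos = new HashMap<>();

    public void addInfo(InfoBlock info) {
        infos.put(info.getClass(), info);
    }

    // Taking a class token removes the need for casts in client code.
    public <T extends InfoBlock> T getInfo(Class<T> clazz) {
        return clazz.cast(infos.get(clazz));
    }

    public static class OsInfo implements InfoBlock {
        public final String name;
        public OsInfo(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        NodeInfoSketch nodeInfo = new NodeInfoSketch();
        nodeInfo.addInfo(new OsInfo("Linux"));
        OsInfo os = nodeInfo.getInfo(OsInfo.class); // no cast needed
        System.out.println(os.name);
    }
}
```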
2020-04-13 17:18:39 -04:00
Nik Everett c00811f3a3
Make some agg tests easier to read (#54954) (#55079)
We added a fancy method to provide random realistic test data to the
reduction tests in #54910. This uses that to remove some of the more
esoteric machinations in the agg tests. This will marginally increase
the coverage of the serialization tests and, more importantly, remove
some mysterious value generation code that only really made sense for
random reduction tests but was used all over the place. It doesn't, on
the other hand, make the tests shorter. Just *hopefully* more clear.

I only cleaned up a few tests this way. If we like this it'd probably be
worth grabbing others.
2020-04-10 14:15:30 -04:00
Martijn van Groningen 7f38b146b3
Temporarily preserve data streams after each yaml rest test has executed. (#54959) (#55007)
Instead, delete the data streams manually until the client yaml test runners
have been updated to also delete all data streams after each yaml test.

Relates to #53100
2020-04-09 14:44:57 +02:00
Tal Levy 254d1e3543
[7.x] Create new `geo` module and migrate geo_shape registration (#53562) (#54924)
This commit introduces a new `geo` module that is intended
to contain all the geo-spatial-specific features in server.

As a first step, the responsibility of registering the geo_shape
field mapper is moved to this module.

Co-authored-by: Nicholas Knize <nknize@gmail.com>
2020-04-07 16:30:58 -07:00
Tim Brooks 619028c33e
Implement transport circuit breaking in aggregator (#54927)
This commit moves the action name validation and circuit breaking into
the InboundAggregator. This work is valuable because it lays the
groundwork for incrementally circuit breaking as data is received.

This PR includes the following behavioral change:

Handshakes now contribute to circuit breaking, but cannot be broken. Previously
they neither contributed to circuit breaking nor could be broken.
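A minimal sketch of that behavioral change, assuming hypothetical names rather than the real circuit-breaker API: handshake bytes are always accounted for, but a handshake can never trip the breaker.
```
public class HandshakeAwareBreakerSketch {
    private final long limitBytes;
    private long usedBytes;

    public HandshakeAwareBreakerSketch(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    public synchronized void addBytes(long bytes, boolean isHandshake) {
        long newUsed = usedBytes + bytes;
        if (!isHandshake && newUsed > limitBytes) {
            throw new IllegalStateException("circuit breaker tripped: " + newUsed + " > " + limitBytes);
        }
        // handshakes contribute to the accounting even though they can never break
        usedBytes = newUsed;
    }

    public synchronized void releaseBytes(long bytes) {
        usedBytes -= bytes;
    }
}
```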
2020-04-07 17:10:31 -06:00
Tim Brooks c7053ef824
Use TransportChannel in TransportHandshaker (#54921)
Currently the TransportHandshaker has a specialized codepath for sending
a response. In other work, we are going to start having handshakes
contribute to circuit breaking (while not being breakable). This commit
moves in that direction by allowing the handshaker to respond using a
standard TcpTransportChannel, similar to other requests.
2020-04-07 15:37:15 -06:00
Nik Everett ce7ae4a7d1
Remove pipeline aggs from agg result tree (backport of #54716) (#54920)
This removes pipeline aggregators from the aggregation result tree
except for a single field used for backwards compatibility with pre-7.8
versions of Elasticsearch. That field isn't populated unless we are
serializing to pre-7.8 Elasticsearch. So, good news! We no longer build
pipeline aggregators on the data node. Most of the time.
2020-04-07 17:22:23 -04:00
Nik Everett 100f7258c7
Improve agg reduce tests (#54910) (#54914)
This allows subclasses of `InternalAggregationTestCase` to make a `List`
of values to reduce so that it can make values that are realistic
*together*. The first use of this is with `InternalTTest` which uses it
to make results that don't cause their `sum` field to wrap. It'd likely
be useful for a ton of other aggs but just one for now.
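For illustration, a sketch of what "realistic together" values might look like, assuming a hypothetical helper (not the actual test code): each value is capped so that the sum of the whole list cannot overflow a long.
```
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RealisticValuesSketch {
    public static List<Long> randomValuesThatDoNotWrap(Random random, int count) {
        long perValueCap = Long.MAX_VALUE / count; // guarantees the total sum fits in a long
        List<Long> values = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            values.add((long) (random.nextDouble() * perValueCap));
        }
        return values;
    }
}
```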
2020-04-07 17:22:04 -04:00
Tim Brooks 9cf2406cf1
Move network stats marking into InboundPipeline (#54908)
This is a follow-up to #48263. It moves the inbound stats tracking
inside of the InboundPipeline.
2020-04-07 13:34:05 -06:00
Nik Everett 3c56e0de42
Fix scripted metric in ccs (backport of #54776) (#54888)
`scripted_metric` did not work with cross cluster search because it
assumed that you'd never perform a partial reduction, serialize the
results, and then perform a final reduction. That
serialized-after-partial-reduction step was broken.

This is also required to support #54758.
2020-04-07 10:43:00 -04:00
Tanguy Leroux 4d36917e52
Merge feature/searchable-snapshots branch into 7.x (#54803) (#54825)
This is a backport of #54803 for 7.x.

This pull request cherry picks the squashed commit from #54803 with the additional commits:

    6f50c92 which adjusts master code to 7.x
    a114549 to mute a failing ILM test (#54818)
    48cbca1 and 50186b2 that cleans up and fixes the previous test
    aae12bb that adds a missing feature flag (#54861)
    6f330e3 that adds missing serialization bits (#54864)
    bf72c02 that adjust the version in YAML tests
    a51955f that adds some plumbing for the transport client used in integration tests

Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Andrei Dan <andrei.dan@elastic.co>
2020-04-07 13:28:53 +02:00
Costin Leau a7c31a7632 Test: Include ML ILM policy in EsRestTest (#54773)
This should avoid REST failures caused by the inability to delete said
policy.
Fixes #54759

(cherry picked from commit 3ba5e02b713c03b1bdf14e0367a2bce68c35dd30)
2020-04-05 20:10:17 +03:00
Jason Tedor 05c5529b2d
Clean up a few instances of "MetaData"
We recently cleaned up the use of the word "metadata" across the
codebase. A few additional uses have trickled in, likely from
in-progress work. This commit cleans up these last few instances.

Relates #54519
2020-04-04 10:55:09 -04:00
Christoph Büscher 8c9ac14a98
Rename field name constants in AbstractBuilderTestCase (#53234)
Some field name constants were not updated when we moved from "string" to "text"
and "keyword" fields. Renaming them makes it easier and faster to know which
field type is used in tests subclassing this base test case.
2020-04-03 17:28:22 +02:00
Nik Everett 54ea4f4f50 Begin to drop pipeline aggs from the result tree (backport of #54311) (#54659)
Removes pipeline aggregations from the aggregation result tree as they
are no longer used. This stops us from building the pipeline aggregators
at all on data nodes except for backwards compatibility serialization.
This will save a tiny bit of space in the aggregation tree which is
lovely, but the biggest benefit is that it is a step towards simplifying
pipeline aggregators.

This only does about half of the work to remove the pipeline aggs from
the tree. Removing all of it would, well, double the size of the change
and make it harder to review.
2020-04-02 16:45:12 -04:00
Nik Everett a5adac0d1e
Fix pipeline agg serialization for ccs (backport of #54282) (#54468)
This fixes pipeline aggregations used in cross cluster search from an older
version of Elasticsearch to a newer version of Elasticsearch. I broke
this in #53730 when I was too aggressive in shutting off serialization
of pipeline aggs. In particular, this comes up when the coordinating
node is pre-7.8.0 and the gateway node is on or after 7.8.0.

The fix is another step down the line to remove pipeline aggregators
from the aggregation tree. Sort of. It creates a new
`List<PipelineAggregator>` member in `InternalAggregation` *but* it is
only used for bwc serialization and it is fed by the mechanism
established in #53730 to read the pipelines from the
2020-04-02 10:35:40 -04:00
Andy Bristol eb14635f1f
add tests to StatsAggregatorTests (#53768)
Adds tests for supported ValuesSourceTypes, unmapped fields, scripting,
and the missing param. The tests for unmapped fields and scripting are
migrated from the StatsIT integration test
2020-04-01 17:07:51 -07:00
William Brafford 958e9d1b78
Refactor nodes stats request builders to match requests (#54363) (#54604)
* Refactor nodes stats request builders to match requests (#54363)

* Remove hard-coded setters from NodesInfoRequestBuilder

* Remove hard-coded setters from NodesStatsRequest

* Use static imports to reduce clutter

* Remove uses of old info APIs
2020-04-01 17:03:04 -04:00
Ioannis Kakavas c9ffa379ba
[7.x] Add end to end QA authentication test (#54215) (#54567)
Use the same ES cluster as both an SP and an IDP and perform
IDP initiated and SP initiated SSO. The REST client plays the role
of both the Cloud UI and Kibana in these flows

Backport of #54215

* fix compilation issues
2020-04-01 18:35:21 +03:00
David Turner 6d976e1468 Resolve some coordination-layer TODOs (#54511)
This commit removes a handful of TODO comments in the cluster coordination
layer that no longer apply.

Relates #32006
2020-04-01 12:36:18 +01:00
Lee Hinman 987b6ecffa
[7.x] Be lenient when deleting index and component templates a… (#54538)
We previously checked for a 405 response, but on 7.x BWC tests may be hitting versions that don't
support the 405 response (returning 400 or 500), so be more lenient in those cases.

Relates to #54513
2020-03-31 16:24:05 -06:00
Jason Tedor 63e5f2b765
Rename META_DATA to METADATA
This is a follow up to a previous commit that renamed MetaData to
Metadata in all of the places. In that commit in master, we renamed
META_DATA to METADATA, but lost this on the backport. This commit
addresses that.
2020-03-31 17:30:51 -04:00
Jason Tedor 5fcda57b37
Rename MetaData to Metadata in all of the places (#54519)
This is a simple naming change PR, to fix the fact that "metadata" is a
single English word, and for too long we have not followed general
naming conventions for it. We are also not consistent about it, for
example, METADATA instead of META_DATA if we were trying to be
consistent with MetaData (although METADATA is correct when considered
in the context of "metadata"). This was a simple find and replace across
the code base, only taking a few minutes to fix this naming issue
forever.
2020-03-31 17:24:38 -04:00
Zachary Tong c9db2de41d
[7.x] Comprehensively test supported/unsupported field type:agg combinations (#54451)
* Comprehensively test supported/unsupported field type:agg combinations (#52493)

This adds a test to AggregatorTestCase that allows us to programmatically
verify that an aggregator supports or does not support a particular
field type.  It fetches the list of registered field type parsers,
creates a MappedFieldType from the parser and then attempts to run
a basic agg against the field.

A supplied list of supported VSTypes are then compared against the
output (success or exception) and succeeds or fails the test accordingly.

Co-Authored-By: Mark Tozzi <mark.tozzi@gmail.com>
* Skip fields that are not aggregatable

* Use newIndexSearcher() to avoid incompatible readers (#52723)

Lucene's `newSearcher()` can generate readers like ParallelCompositeReader
which we can't use.  We need to instead use our helper `newIndexSearcher`
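A rough sketch of the verification idea under hypothetical names (not the real AggregatorTestCase API): run the agg against every registered field type and compare the outcome with the declared set of supported types.
```
import java.util.List;
import java.util.Set;

public class FieldTypeCompatibilitySketch {
    interface AggRunner {
        void run(String fieldType) throws Exception; // throws if the combination is unsupported
    }

    public static void assertSupport(List<String> registeredFieldTypes,
                                     Set<String> supportedTypes,
                                     AggRunner runner) {
        for (String fieldType : registeredFieldTypes) {
            boolean succeeded;
            try {
                runner.run(fieldType);
                succeeded = true;
            } catch (Exception e) {
                succeeded = false;
            }
            if (succeeded != supportedTypes.contains(fieldType)) {
                throw new AssertionError("unexpected support result for field type [" + fieldType + "]");
            }
        }
    }
}
```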
2020-03-31 14:35:03 -04:00
Lee Hinman a3d1945254
[7.x] Add warnings/errors when V2 templates would match same i… (#54449)
* Add warnings/errors when V2 templates would match same indices… (#54367)

* Add warnings/errors when V2 templates would match same indices as V1

With the introduction of V2 index templates, we want to warn users that templates they put in place
might not take precedence (because v2 templates are going to "win"). This adds this validation at
`PUT` time for both V1 and V2 templates with the following rules:

** When creating or updating a V2 template
- If the v2 template would match indices for an existing v1 template or templates, provide a
warning (through the deprecation logging so it shows up to the client) as well as logging the
warning

The v2 warning looks like:

```
index template [my-v2-template] has index patterns [foo-*] matching patterns from existing older
templates [old-v1-template,match-all-template] with patterns (old-v1-template =>
[foo*],match-all-template => [*]); this template [my-v2-template] will take
precedence during new index creation
```

** When creating a V1 template
- If the v1 template is for index patterns of `"*"` and a v2 template exists, warn that the v2
template may take precedence
- If the v1 template is for index patterns other than all indices, and a v2 template exists that
would match, throw an error preventing creation of the v1 template

** When updating a V1 template (without changing its existing `index_patterns`!)
- If the v1 template is for index patterns that would match an existing v2 template, warn that the
v2 template may take precedence.

The v1 warning looks like:

```
template [my-v1-template] has index patterns [*] matching patterns from existing index templates
[existing-v2-template] with patterns (existing-v2-template => [foo*]); this template [my-v1-template] may be ignored in favor of an index template at index creation time
```

And the v1 error looks like:

```
template [my-v1-template] has index patterns [foo*] matching patterns from existing index templates
[existing-v2-template] with patterns (existing-v2-template => [f*]), use index templates (/_index_template) instead
```

Relates to #53101

* Remove v2 index and component templates when cleaning up tests

* Finish half-finished comment sentence

* Guard template removal and ignore for earlier versions of ES

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>

* Also ignore 500 errors when clearing index template v2 templates

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
2020-03-30 13:25:50 -06:00
Tim Brooks 2ccddbfa88
Move transport decoding and aggregation to server (#54360)
Currently all of our transport protocol decoding and aggregation occurs
in the individual transport modules. This means that each implementation
(test, netty, nio) must implement this logic. Additionally, it means
that the entire message has been read from the network before the server
package receives it.

This commit creates a pipeline in server which can be passed arbitrary
bytes to handle. Internally, the pipeline will decode, decompress, and
aggregate the messages. Additionally, this allows us to run many
megabytes of bytes through the pipeline in tests to ensure that the
logic works.

This work will enable future work:

* Circuit breaking or backoff logic based on message type and bytes
  in the content aggregator.
* Sharing bytes with the application layer using the ref-counted
  releasable network bytes.
* Improved network monitoring based specifically on channels.

Finally, this fixes the bug where we do not circuit break on the correct
message size when compression is enabled.
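To illustrate the aggregation part of such a pipeline, here is a minimal self-contained sketch (hypothetical names, not the real InboundPipeline): arbitrary byte chunks go in, and complete length-prefixed messages come out.
```
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixedAggregator {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Feed any number of bytes; return every complete message accumulated so far.
    public List<byte[]> feed(byte[] chunk) {
        buffer.write(chunk, 0, chunk.length);
        List<byte[]> messages = new ArrayList<>();
        byte[] data = buffer.toByteArray();
        int offset = 0;
        while (data.length - offset >= 4) {
            int length = ((data[offset] & 0xFF) << 24) | ((data[offset + 1] & 0xFF) << 16)
                    | ((data[offset + 2] & 0xFF) << 8) | (data[offset + 3] & 0xFF);
            if (data.length - offset - 4 < length) {
                break; // wait for more bytes
            }
            byte[] message = new byte[length];
            System.arraycopy(data, offset + 4, message, 0, length);
            messages.add(message);
            offset += 4 + length;
        }
        // keep the unconsumed remainder for the next call
        buffer.reset();
        buffer.write(data, offset, data.length - offset);
        return messages;
    }
}
```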
2020-03-27 14:13:10 -06:00
Stuart Tettemer 1630de4a42
Scripting: stats per context in nodes stats (#54008) (#54357)
Adds script cache stats to `_node/stats`.
If using the general cache:
```
      "script_cache": {
        "sum": {
          "compilations": 12,
          "cache_evictions": 9,
          "compilation_limit_triggered": 5
        }
      }

```
If using context caches:
```
      "script_cache": {
        "sum": {
          "compilations": 13,
          "cache_evictions": 9,
          "compilation_limit_triggered": 5
        },
        "contexts": [
          {
            "context": "aggregation_selector",
            "compilations": 8,
            "cache_evictions": 6,
            "compilation_limit_triggered": 3
          },
          {
            "context": "aggs",
            "compilations": 5,
            "cache_evictions": 3,
            "compilation_limit_triggered": 2
          },
          ...
        ]
      }
```
Backport of: 32f46f2
Refs: #50152
2020-03-27 12:26:00 -06:00
Yannick Welsch 8176f1c7f8
Delegate toXContent logic from ClusterState to its member classes (#54192)
Backport of #52743

Co-authored-by: Yannick Welsch <yannick@welsch.lu>
2020-03-26 15:08:30 +01:00
Yannick Welsch 1ba6783780 Schedule commands in current thread context (#54187)
Changes ThreadPool's schedule method to run the scheduled task in the context of the thread
that scheduled it.

This is the more sensible default for this method, and eliminates a range of bugs where the
current thread context is mistakenly dropped.

Closes #17143
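The general idea can be sketched with plain JDK primitives (hypothetical names; not the actual ThreadPool code): capture the caller's context at schedule time and restore it around the task.
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ContextPreservingScheduler {
    // Stand-in for the thread context.
    public static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "default");

    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public void schedule(Runnable command, long delayMillis) {
        final String capturedContext = CONTEXT.get(); // capture at schedule time
        executor.schedule(() -> {
            String previous = CONTEXT.get();
            CONTEXT.set(capturedContext); // run in the scheduling thread's context
            try {
                command.run();
            } finally {
                CONTEXT.set(previous);
            }
        }, delayMillis, TimeUnit.MILLISECONDS);
    }
}
```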
2020-03-26 10:07:59 +01:00
Nik Everett 8f40f1435a
Save a little space in agg tree (backport of #53730) (#54213)
This drop the "top level" pipeline aggregators from the aggregation
result tree which should save a little memory and a few serialization
bytes. Perhaps more importantly, this provides a mechanism by which we
can remove *all* pipelines from the aggregation result tree. This will
save quite a bit of space when pipelines are deep in the tree.

Sadly, doing this isn't simple because of backwards compatibility. Nodes
before 7.7.0 *need* those pipelines. We provide them by passing
a `Supplier<PipelineTree>` into the root of the aggregation tree that we
only call if we need to serialize to a version before 7.7.0.

This solution works for cross cluster search because we always reduce
the aggregations in each remote cluster and then forward them back to
the coordinating node. It's quite possible that the coordinating node
needs the pipeline (say it is version 7.1.0) and the gateway node in the
remote cluster doesn't (version 7.7.0). In that case the data nodes
won't send the pipeline aggregations back to the gateway node.
Critically, the gateway node *will* send the pipeline aggregations back
to the coordinating node. This is all managed with that
`Supplier<PipelineTree>`, but *how* it is managed is a bit tricky.
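A toy sketch of the supplier-gating mechanism, with hypothetical names and a boolean standing in for the real version check: the pipeline tree is only materialized when the target node actually needs it.
```
import java.util.function.Supplier;

public class BwcPipelineSerializationSketch {
    public static String serialize(boolean targetNodeIsPre770, Supplier<String> pipelineTreeForBwc) {
        StringBuilder out = new StringBuilder("aggregation-results");
        if (targetNodeIsPre770) {
            // only materialize the pipeline tree when an old node actually needs it
            out.append(";pipelines=").append(pipelineTreeForBwc.get());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // the supplier is never invoked when serializing to a 7.7.0+ node
        System.out.println(serialize(false, () -> { throw new AssertionError("not needed"); }));
        System.out.println(serialize(true, () -> "tree-of-pipelines"));
    }
}
```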
2020-03-25 15:51:16 -04:00
Yannick Welsch e006d1f6cf Use special XContent registry for node tool (#54050)
Fixes an issue where the elasticsearch-node command-line tools would not work correctly
because PersistentTasksCustomMetaData contains named XContent from plugins. This PR
makes it so that the parsing for all custom metadata is skipped, even if the core system would
know how to handle it.

Closes #53549
2020-03-24 17:40:51 +01:00
Tanguy Leroux dea8a31480 Wait for Active license before running CCR API tests (#53966)
DocsClientYamlTestSuiteIT sometimes fails for CCR 
related tests because tests are started before the license 
is fully applied and active within the cluster. The first 
tests to be executed then fails with the error noticed 
in #53430. This can be easily reproduced locally by 
only running CCR docs tests.

This commit adds some @Before logic in 
DocsClientYamlTestSuiteIT so that it waits for the 
license to be active before running CCR tests.

Closes #53430
2020-03-24 14:29:45 +01:00
Nik Everett b9bfba2c8b
Move pipeline agg validation to coordinating node (backport of #53669) (#54019)
This moves the pipeline aggregation validation from the data node to the
coordinating node so that we, eventually, can stop sending pipeline
aggregations to the data nodes entirely. In fact, it moves it into the
"request validation" stage so multiple errors can be accumulated and
sent back to the requester for the entire request. We can't always take
advantage of that, but it'll be nice for folks not to have to play
whack-a-mole with validation.

This is implemented by replacing `PipelineAggregationBuilder#validate`
with:
```
protected abstract void validate(ValidationContext context);
```

The `ValidationContext` handles the accumulation of validation failures,
provides access to the aggregation's siblings, and implements a few
validation utility methods.
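For illustration, a minimal sketch of error accumulation (hypothetical `ValidationContextSketch`, not the real `ValidationContext`): failures collect instead of failing fast, so the requester sees every problem at once.
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ValidationContextSketch {
    private final List<String> failures = new ArrayList<>();

    public void addValidationError(String error) {
        failures.add(error);
    }

    public List<String> validationErrors() {
        return Collections.unmodifiableList(failures);
    }

    public static void main(String[] args) {
        ValidationContextSketch context = new ValidationContextSketch();
        context.addValidationError("[buckets_path] must not be empty");
        context.addValidationError("no aggregation found for path [the_sum]");
        System.out.println(context.validationErrors()); // both errors reported together
    }
}
```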
2020-03-23 17:22:56 -04:00
Marios Trivyzas 3a3e964956
Reduce performance impact of ExitableDirectoryReader (#53978) (#54014)
Benchmarking showed that the effect of the ExitableDirectoryReader
is reduced considerably when checking every 8191 docs. Moreover,
set the cancellable task before calling QueryPhase#preProcess()
and make sure we don't wrap with an ExitableDirectoryReader at all
when lowLevelCancellation is set to false, to completely avoid any
performance impact.
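The per-document check can be sketched as follows (hypothetical names, not the actual ExitableDirectoryReader code): the cancellation check only runs every 8191 documents, keeping it off the hot path.
```
public class CancellationCheckSketch {
    private static final int CHECK_EVERY = 8191;

    public static void visitDocs(int maxDoc, Runnable checkCancelled) {
        for (int doc = 0; doc < maxDoc; doc++) {
            if (doc % CHECK_EVERY == 0) {
                checkCancelled.run(); // cancellation is only checked every 8191 docs
            }
            // ... per-document work would go here ...
        }
    }
}
```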

Follows: #52822
Follows: #53166
Follows: #53496

(cherry picked from commit cdc377e8e74d3ca6c231c36dc5e80621aab47c69)
2020-03-23 21:30:34 +01:00
Ryan Ernst 960d1fb578
Revert "Introduce system index APIs for Kibana (#53035)" (#53992)
This reverts commit c610e0893d.

backport of #53912
2020-03-23 10:29:35 -07:00
Armin Braun 5b9864db2c
Better Incrementality for Snapshots of Unchanged Shards (#52182) (#53984)
Use sequence numbers and the force-merge UUID to determine whether a shard has changed, before falling back to comparing files, to get incremental snapshots on primary fail-over.
2020-03-23 16:43:41 +01:00
Tim Vernum cde8725e3c
Create API Key on behalf of other user (#53943)
This change adds a "grant API key action"

   POST /_security/api_key/grant

that creates a new API key using the privileges of one user ("the
system user") to execute the action, but creates the API key with
the roles of the second user ("the end user").

This allows a system (such as Kibana) to create API keys representing
the identity and access of an authenticated user without requiring
that user to have permission to create API keys on their own.

This also creates a new QA project for security on trial licenses and runs
the API key tests there

Backport of: #52886
2020-03-23 18:50:07 +11:00
David Turner adfeb50a53 Use consistent threadpools in CoordinatorTests (#53868)
Today in the `CoordinatorTests` each node uses multiple threadpools. This is
mostly fine as they are almost completely stateless, except for the
`ThreadContext`: by using multiple threadpools we cannot make assertions that
the thread context is/isn't preserved as we expect. This commit consolidates
the threadpool instances in use so that each node uses just one.
2020-03-20 16:22:42 +01:00
Armin Braun 7d66c7e25c
Rethrow Failed Assertion in BlobStoreTestUtils (#53859) (#53863)
This fixes two issues:

1. Currently, the future here is never resolved on assertion error so a failing test would take a full minute
to complete until the future times out.
2. S3 tests override this method to busy-assert on it. This only works if an assertion error makes it
to the calling thread.

Closes #53508
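A minimal sketch of the fix's shape, using plain JDK futures and hypothetical names rather than the real test utilities: complete the future exceptionally on AssertionError so the waiting thread fails immediately instead of hitting the timeout.
```
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class RethrowAssertionSketch {
    public static void runAsyncAssertion(Runnable assertion) throws Exception {
        CompletableFuture<Void> future = new CompletableFuture<>();
        new Thread(() -> {
            try {
                assertion.run();
                future.complete(null);
            } catch (AssertionError | RuntimeException e) {
                future.completeExceptionally(e); // propagate the failure to the caller
            }
        }).start();
        future.get(60, TimeUnit.SECONDS); // throws immediately once the assertion fails
    }
}
```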
2020-03-20 15:31:48 +01:00
Alan Woodward 10a2a95b25 Fix backport merge error in ESTestCase
We were ignoring the `stripXContentPosition` boolean when checking for
warnings.
2020-03-20 12:12:47 +00:00
Alan Woodward d23112f441 Report parser name and location in XContent deprecation warnings (#53805)
It's simple to deprecate a field used in an ObjectParser just by adding deprecation
markers to the relevant ParseField objects. The warnings themselves don't currently
have any context - they simply say that a deprecated field has been used, but not
where in the input xcontent it appears. This commit adds the parent object parser
name and XContentLocation to these deprecation messages.

Note that the context is automatically stripped from warning messages when they
are asserted on by integration tests and REST tests, because randomization of
xcontent type during these tests means that the XContentLocation is not constant.
2020-03-20 11:52:55 +00:00
David Turner 7d3ac4f57d Revert "Apply cluster states in system context (#53785)"
This reverts commit 4178c57410.
2020-03-19 15:20:36 +00:00
David Turner 4178c57410 Apply cluster states in system context (#53785)
Today cluster states are sometimes (rarely) applied in the default context
rather than system context, which means that any appliers which capture their
contexts cannot do things like remote transport actions when security is
enabled.

There are at least two ways that we end up applying the cluster state in the
default context:

1. locally applying a cluster state that indicates that the master has failed
2. the elected master times out while waiting for a response from another node

This commit ensures that cluster states are always applied in the system
context.

Mitigates #53751
2020-03-19 14:48:55 +00:00
Stuart Tettemer cdbee32f55
Scripting: Per-context script cache, default off (#52855) (#53756)
* Adds per context settings:
  `script.context.${CONTEXT}.cache_max_size` ~
  `script.cache.max_size`

  `script.context.${CONTEXT}.cache_expire` ~
  `script.cache.expire`

  `script.context.${CONTEXT}.max_compilations_rate` ~
  `script.max_compilations_rate`

* Context cache is used if:
  `script.max_compilations_rate=use-context`.  This
  value is dynamically updatable, so users can
  switch back to the general cache if desired.

* Settings for context caches take the first value
  that applies:
  1) Context specific settings if set, eg
     `script.context.ingest.cache_max_size`
  2) Correlated general setting is set to the non-default
     value, eg `script.cache.max_size`
  3) Context default

The reason for 2's inclusion is to allow an easy
transition for users who've customized their general
cache settings.

Using the general cache settings for the context caches
results in higher effective settings, since they are
multiplied across the number of contexts.  So a general
cache max size of 200 will become 200 * # of contexts.
However, this behavior will avoid users snapping to a
value that is too low for them.

Backport of: #52855
Refs: #50152
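The three-step resolution order can be sketched like this (hypothetical names and a plain map standing in for the real Settings infrastructure):
```
import java.util.Map;

public class ScriptCacheSettingsSketch {
    public static int resolveCacheMaxSize(Map<String, Integer> settings,
                                          String context,
                                          int generalDefault,
                                          int contextDefault) {
        String contextKey = "script.context." + context + ".cache_max_size";
        if (settings.containsKey(contextKey)) {
            return settings.get(contextKey);               // 1) context-specific setting
        }
        Integer general = settings.get("script.cache.max_size");
        if (general != null && general.intValue() != generalDefault) {
            return general;                                // 2) general setting, if customized
        }
        return contextDefault;                             // 3) context default
    }
}
```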
2020-03-18 14:44:04 -06:00
Ryan Ernst 5c472fcb47 Upgrade jackson to 2.10.3 and GeoIP to 2.13.1 (#53642)
Re-applies the change from #53523 along with test fixes.

closes #53626
closes #53624
closes #53622
closes #53625

Co-authored-by: Nik Everett <nik9000@gmail.com>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Jake Landis <jake.landis@elastic.co>
2020-03-17 10:28:51 -07:00
Jason Tedor 8c96112613
Fix compilation in simple transport test case base
This commit fixes compilation after a backport gone bad due to a
difference in the minimum Java version between 7.x and master.
2020-03-16 21:44:58 -04:00
Jason Tedor 01d2339883
Invoke response handler on failure to send (#53631)
Today it can happen that a transport message fails to send (for example,
because a transport interceptor rejects the request). In this case, the
response handler is never invoked, which can lead to necessary cleanups
not being performed. There are two ways to handle this. One is to expect
every callsite that sends a message to try/catch these exceptions and
handle them appropriately. The other is merely to invoke the response
handler to handle the exception, which is already equipped to handle
transport exceptions.
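The chosen approach can be sketched with hypothetical interfaces (not the actual TransportService code): on a send failure, hand the exception to the response handler, which is already equipped to deal with transport exceptions.
```
public class SendWithHandlerSketch {
    public interface ResponseHandler {
        void handleResponse(Object response);
        void handleException(Exception e);
    }

    public interface Sender {
        void send(Object request) throws Exception;
    }

    public static void sendRequest(Sender sender, Object request, ResponseHandler handler) {
        try {
            sender.send(request);
        } catch (Exception e) {
            // instead of forcing every call site to try/catch, let the handler do the cleanup
            handler.handleException(e);
        }
    }
}
```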
2020-03-16 21:28:24 -04:00
Nik Everett f0beab4041
Stop using round-tripped PipelineAggregators (backport of #53423) (#53629)
This begins to clean up how `PipelineAggregator`s are executed.
Previously, we would create the `PipelineAggregator`s on the data nodes
and embed them in the aggregation tree. When it came time to execute the
pipeline aggregation we'd use the `PipelineAggregator`s that were on the
first shard's results. This is inefficient because:
1. The data node needs to make the `PipelineAggregator` only to
   serialize it and then throw it away.
2. The coordinating node needs to deserialize all of the
   `PipelineAggregator`s even though it only needs one of them.
3. You end up with many `PipelineAggregator` instances when you only
   really *need* one per pipeline.
4. `PipelineAggregator` needs to implement serialization.

This begins to undo these by building the `PipelineAggregator`s directly
on the coordinating node and using those instead of the
`PipelineAggregator`s in the aggregation tree. In a follow-up change
we'll stop serializing the `PipelineAggregator`s to node versions that
support this behavior. And, one day, we'll be able to remove
`PipelineAggregator` from the aggregation result tree entirely.

Importantly, this doesn't change how pipeline aggregations are declared
or parsed or requested. They are still part of the `AggregationBuilder`
tree because *that* makes sense.
2020-03-16 16:15:23 -04:00
Jim Ferenczi e6680be0b1
Add new x-pack endpoints to track the progress of a search asynchronously (#49931) (#53591)
This change introduces a new API in x-pack basic that allows users to track the progress of a search.
Users can submit an asynchronous search through a new endpoint called `_async_search` that
works exactly the same as the `_search` endpoint but instead of blocking and returning the final response when available, it returns a response after a provided `wait_for_completion` time.

````
GET my_index_pattern*/_async_search?wait_for_completion=100ms
{
  "aggs": {
    "date_histogram": {
      "field": "@timestamp",
      "fixed_interval": "1h"
    }
  }
}
````

If after 100ms the final response is not available, a `partial_response` is included in the body:

````
{
  "id": "9N3J1m4BgyzUDzqgC15b",
  "version": 1,
  "is_running": true,
  "is_partial": true,
  "response": {
   "_shards": {
       "total": 100,
       "successful": 5,
       "failed": 0
    },
    "total_hits": {
      "value": 1653433,
      "relation": "eq"
    },
    "aggs": {
      ...
    }
  }
}
````

The partial response contains the total number of requested shards, the number of shards that successfully returned and the number of shards that failed.
It also contains the total hits as well as partial aggregations computed from the successful shards.
To continue to monitor the progress of the search users can call the get `_async_search` API like the following:

````
GET _async_search/9N3J1m4BgyzUDzqgC15b/?wait_for_completion=100ms
````

That returns a new response that can contain the same partial response as the previous call if the search didn't progress; in that case the returned `version`
should be the same. If new partial results are available, the version is incremented and the `partial_response` contains the updated progress.
Finally, if the response is fully available while or after waiting for completion, the `partial_response` is replaced by a `response` section that contains the usual `_search` response:

````
{
  "id": "9N3J1m4BgyzUDzqgC15b",
  "version": 10,
  "is_running": false,
  "response": {
     "is_partial": false,
     ...
  }
}
````

Asynchronous searches are stored in a restricted index called `.async-search` if they survive (are still running) after the initial submit. Each request has a keep alive that defaults to 5 days, but this value can be changed/updated at any time:
`````
GET my_index_pattern*/_async_search?wait_for_completion=100ms&keep_alive=10d
`````
The default can be changed when submitting the search; the example above raises the value for this search to `10d`.
`````
GET _async_search/9N3J1m4BgyzUDzqgC15b/?wait_for_completion=100ms&keep_alive=10d
`````
The time to live for a specific search can be extended when getting the progress/result. In the example above we extend the keep alive to 10 more days.
A background service that runs only on the node that holds the first primary shard of the `.async-search` index is responsible for deleting the expired results. It runs every hour, but the expiration is also checked by running queries (if they take longer than the keep_alive) and when getting a result.

Like a normal `_search`, if the http channel that is used to submit a request is closed before getting a response, the search is automatically cancelled. Note that this behavior applies only to the submit API; subsequent GET requests will not cancel the search if they are closed.

Asynchronous searches are not persistent: if the coordinating node crashes or is restarted during the search, the asynchronous search will stop. The response contains a field called `is_running` that indicates whether the task is still running. It is the responsibility of the user to resume an asynchronous search that didn't reach a final response by re-submitting the query. However, final responses and failures are persisted in a system index, which allows
retrieving a response even after the task finishes.

````
DELETE _async_search/9N3J1m4BgyzUDzqgC15b
````

The response is also not stored if the initial submit action returns a final response. This avoids adding any overhead to queries that complete within the initial `wait_for_completion`.

The `.async-search` index is a restricted index (should be migrated to a system index in +8.0) that is accessible only through the async search APIs. These APIs also ensure that only the user that submitted the initial query can retrieve or delete the running search. Note that admins/superusers would still be able to cancel the search task through the task manager like any other tasks.

Relates #49091

Co-authored-by: Luca Cavanna <javanna@users.noreply.github.com>
2020-03-16 15:31:27 +01:00