Commit Graph

248 Commits

Author SHA1 Message Date
Julie Tibshirani 638a719370
Ensure that ip_range aggregations always return bucket keys. (#30701) 2018-05-24 08:55:14 -07:00
Piotr Prądzyński a0a8c4f186 Remove duplicated 'bucket' word from filters agg docs (#30677)
The word 'bucket' was duplicated in one place.
2018-05-17 15:21:50 +01:00
Piotr Prądzyński cefbd29db3 top_hits doc example description update (#30676)
The example description did not match the example code.
2018-05-17 15:21:25 +01:00
Zachary Tong df853c49c0
Add a MovingFunction pipeline aggregation, deprecate MovingAvg agg (#29594)
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of operating on single data points.  It is the scripted version of the MovingAvg pipeline agg.

Through custom script contexts, we expose a number of convenience methods:

 - MovingFunctions.max()
 - MovingFunctions.min()
 - MovingFunctions.sum()
 - MovingFunctions.unweightedAvg()
 - MovingFunctions.linearWeightedAvg()
 - MovingFunctions.ewma()
 - MovingFunctions.holt()
 - MovingFunctions.holtWinters()
 - MovingFunctions.stdDev()

The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
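
For context, a minimal sketch of how the new aggregation is invoked; the surrounding `date_histogram`, the field names, and the metric name `the_sum` are illustrative assumptions, not part of this commit:

```
"aggs": {
  "by_day": {
    "date_histogram": {
      "field": "timestamp",
      "interval": "1d"
    },
    "aggs": {
      "the_sum": {
        "sum": { "field": "price" }
      },
      "the_movavg": {
        "moving_fn": {
          "buckets_path": "the_sum",
          "window": 10,
          "script": "MovingFunctions.unweightedAvg(values)"
        }
      }
    }
  }
}
```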
2018-05-16 10:57:00 -04:00
Jason Tedor 4a4e3d70d5
Default to one shard (#30539)
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, the
split API is now in place, so they no longer need to resort solely to
reindexing.
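
As a hedged sketch, such a user could restore a larger per-index default with a 6.x legacy index template; the template name, pattern, and shard count below are illustrative assumptions:

```
PUT _template/logs_shards
{
  "index_patterns": ["logs-*"],
  "settings": {
    "index.number_of_shards": 5
  }
}
```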

Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way (sometimes
they might be behind one shard, sometimes behind two), we apply the
default_shards feature skip to all docs tests. That is, these tests will
always run with the default number of shards (one).
2018-05-14 12:22:35 -04:00
Karim Frenn 3acca0b35c [Docs] Fix typo in cardinality-aggregation.asciidoc (#30434) 2018-05-08 16:12:36 +02:00
Christoph Büscher 21dbf9fab0
Document time unit limitations for date histograms (#30177)
Adding some allowed abbreviated values for intervals in date histograms
as well as documenting the limitations of intervals larger than days.
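
For illustration, a fixed interval expressed with an abbreviated time unit (the field and agg names are assumptions); per the limitation this commit documents, units larger than days cannot take a multiplier (e.g. `2M` is rejected):

```
"sales_per_interval": {
  "date_histogram": {
    "field": "timestamp",
    "interval": "90m"
  }
}
```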

Closes #23294
2018-04-26 19:44:21 +02:00
Adrien Grand ebd6b5b7ba
Deprecate filtering on `_type`. (#29468)
As indices are only allowed to have one type now, and types are going away in
the future, we should deprecate filtering by `_type`.
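
The now-deprecated pattern looks like the following (index and type name are illustrative):

```
GET my_index/_search
{
  "query": {
    "term": { "_type": "doc" }
  }
}
```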

Relates #15613
2018-04-13 09:07:51 +02:00
D Pinto 8d6a368402 [Docs] Correct typo in pipeline.asciidoc (#29431) 2018-04-10 10:42:07 +02:00
Jim Ferenczi 5288235ca3
Optimize the composite aggregation for match_all and range queries (#28745)
This change refactors the composite aggregation to add an execution mode that visits documents in the order of the values
present in the leading source of the composite definition. This mode does not need to visit all documents since it can early terminate
the collection when the leading source value is greater than the lowest value in the queue.
Instead of collecting the documents in the order of their doc_id, this mode uses the inverted lists (or the bkd tree for numerics) to collect documents
in the order of the values present in the leading source.
For instance the following aggregation:

```
"composite" : {
  "sources" : [
    { "value1": { "terms" : { "field": "timestamp", "order": "asc" } } }
  ],
  "size": 10
}
```
... can use the field `timestamp` to collect the documents with the 10 lowest values for the field instead of visiting all documents.
For composite aggregations with more than one source the execution can early terminate as soon as one of the 10 lowest values produces enough
composite buckets. For instance, if visiting the two lowest timestamps creates 10 composite buckets, we can early terminate the collection since it
is guaranteed that the third lowest timestamp cannot create a composite key that compares lower than one already visited.

This mode can execute iff:
 * The leading source in the composite definition uses an indexed field of type `date` (works also with `date_histogram` source), `integer`, `long` or `keyword`.
 * The query is a match_all query or a range query over the field that is used as the leading source in the composite definition.
 * The sort order of the leading source is the natural order (ascending since postings and numerics are sorted in ascending order only).

If these conditions are not met this aggregation visits each document like any other agg.
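
For example, under these conditions a request shaped like the one below (index and field names assumed) can early terminate, since the range query targets the same field as the ascending leading source:

```
GET my_index/_search
{
  "size": 0,
  "query": {
    "range": {
      "timestamp": { "gte": 1500000000000 }
    }
  },
  "aggs": {
    "my_composite": {
      "composite": {
        "size": 10,
        "sources": [
          { "value1": { "terms": { "field": "timestamp", "order": "asc" } } }
        ]
      }
    }
  }
}
```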
2018-03-26 09:51:37 +02:00
Paul Sanwald 6dae955b6a
Document and test date_range "missing" support (#28983)
* Add a REST integration test that documents date_range support

Add a test case that exercises date_range aggregations using the missing
option.

Addresses #17597

* Test cleanup and correction

Adding a document with a null date to exercise `missing` option, update
test name to something reasonable.

* Update documentation to explain how the "missing" parameter works for
date_range aggregations (a sketch follows this list).

* Wrap lines at 80 chars in docs.

* Change format of test to YAML for readability.
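
A sketch of the documented `missing` behavior (field name, dates, and key are illustrative): documents without a value in the field are bucketed as if they held the `missing` value:

```
"range": {
  "date_range": {
    "field": "date",
    "missing": "1976/11/30",
    "ranges": [
      { "key": "Older", "to": "2016/02/01" }
    ]
  }
}
```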
2018-03-13 12:58:30 -07:00
Craig van Tonder 95a13ea01c Update bucket-sort-aggregation.asciidoc (#28937)
Added two trailing braces that were missing within the first example.
2018-03-08 15:05:34 +01:00
Menno Oudshoorn d018a0008e Add a usage example of the JLH score (#28905)
Adds a usage example of the JLH score used in the significant terms aggregation.
All other methods to calculate a significance score already have such an example.

Closes #28513
2018-03-06 15:37:18 +01:00
Tim Roes 5689dc1182 [Docs] Fix typo in composite aggregation (#28891) 2018-03-04 11:47:24 -08:00
Ke Li fc406c9a5a Upgrade t-digest to 3.2 (#28295) (#28305) 2018-02-15 08:23:20 +00:00
Islam Heggo f562c7f15a Correct the explanation of load time percentiles (#28510)
* Correct the explanation of load time percentiles

* Adjusting the percentile clarification

Eliminates the false sentence about the majority of load time
2018-02-08 16:29:43 -08:00
Clinton Gormley 45c1e37740 Add defined ID to terms agg size header 2018-02-02 13:43:20 +01:00
Jim Ferenczi c4e0a84344
Mark the composite aggregation as a beta feature (#28431)
The `composite` aggregation should be marked as beta (rather than experimental) in the documentation.
2018-02-02 09:24:10 +01:00
Jim Ferenczi c26d4ac6c1
Always return the after_key in composite aggregation response (#28358)
This change adds the `after_key` of a composite aggregation directly in the response.
It is redundant when no buckets are filtered/removed by a pipeline aggregation, since in that case the `after_key` is always the last bucket
in the response. However, when a pipeline aggregation filters composite buckets, the `after_key` can be lost if the last bucket is filtered.
This commit fixes this situation by always returning the `after_key` in a dedicated section.
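
After this change a response always carries the dedicated section, along these lines (agg name, bucket contents, and counts are illustrative):

```
"aggregations": {
  "my_composite": {
    "after_key": { "field1": "zoo" },
    "buckets": [
      {
        "key": { "field1": "zoo" },
        "doc_count": 4
      }
    ]
  }
}
```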
2018-01-25 09:15:27 +01:00
Jim Ferenczi 65184d0b5b
Adds a note in the `terms` aggregation docs regarding pagination (#28360)
This change adds a note in the `terms` aggregation that explains how to retrieve **all**
terms (or all combinations of terms in a nested agg) using the `composite` aggregation.
2018-01-25 08:59:41 +01:00
Alex Moros Marco 090ac3c2a2 [Doc] Fix typo in reverse-nested-aggregation.asciidoc (#28348) 2018-01-24 17:54:02 +01:00
Jim Ferenczi b2ce994be7 [Docs] Fix asciidoc style in composite agg docs 2018-01-23 16:41:32 +01:00
Jim Ferenczi 19cfc25873
Adds the ability to specify a format on composite date_histogram source (#28310)
This commit adds the ability to specify a date format on the `date_histogram` composite source.
If the format is defined, the key for the source is returned as a formatted date.
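
A sketch of the new option (source name, field name, and date pattern are assumptions):

```
"sources": [
  {
    "day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "1d",
        "format": "yyyy-MM-dd"
      }
    }
  }
]
```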

Closes #27923
2018-01-23 15:14:49 +01:00
Jin Liang 66c81e7f5e [Docs] Update tophits-aggregation.asciidoc (#28273) 2018-01-18 18:06:20 +01:00
David Shimon c92b42ef84 Docs: match between snippet to its description (#28296)
s/400/200/ in the text to match a snippet.
2018-01-18 09:31:02 -05:00
akadko 6a5807ad8f [DOCS] Removed differences between text and code (#27993) 2018-01-12 10:36:48 -05:00
Andrew Banchich a10c406292 text fixes (#28136) 2018-01-12 10:21:39 -05:00
Christoph Büscher 556d77c9ad
[Docs] Add note on limitation for significant_text with nested objects (#28052)
Add section to `significant_text` documentation mentioning that it currently
does not support use on nested objects.

Relates to #28050
2018-01-03 16:28:23 +01:00
Shaunak Kashyap da0ed578b2 Fixing typo in param name: values => sources (#28016) 2017-12-28 18:18:30 +01:00
Adrien Grand 1b660821a2
Allow `_doc` as a type. (#27816)
Allowing `_doc` as a type will enable users to make the transition to 7.0
smoother since the index APIs will be `PUT index/_doc/id` and `POST index/_doc`.
This also moves most of the documentation to `_doc` as a type name.

Closes #27750
Closes #27751
2017-12-14 17:47:53 +01:00
Jim Ferenczi caea6b70fa
Add a new cluster setting to limit the total number of buckets returned by a request (#27581)
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi-bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) on the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented, and if this number exceeds the limit an exception is thrown (TooManyBuckets exception).
This change adds the ability for multi-bucket aggregators to "consume" buckets against the global limit; the default is 10,000. It is an opt-in consumer, so each multi-bucket aggregator must explicitly call the consumer when a bucket is added to the response.
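
Since the setting is dynamic, it can be adjusted at runtime; a sketch (the value shown is arbitrary):

```
PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
```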

Closes #27452 #26012
2017-12-06 09:15:28 +01:00
Christoph Büscher 0d11b9fe34
[Docs] Unify spelling of Elasticsearch (#27567)
Removes occurrences of "elasticsearch" or "ElasticSearch" in favour of
"Elasticsearch" where appropriate.
2017-11-29 09:44:25 +01:00
Martijn van Groningen cb1204774b
Include the _index, _type and _id with nested search hits in the top_hits and inner_hits response.
Also include _type and _id for parent/child hits inside inner hits.

In the case of the top_hits aggregation the nested search hits are
returned directly and are not grouped by a root or parent document, so
it is important to include the _id and _index attributes in order to know
to which documents these nested search hits belong.

Closes #27053
2017-11-28 14:05:29 +01:00
Clinton Gormley d1b1d711df Update composite-aggregation.asciidoc
Fixed asciidoc typo
2017-11-23 15:05:14 +01:00
Simon Willnauer fadbe0de08
Automatically prepare indices for splitting (#27451)
Today we require users to prepare their indices for split operations.
Yet, we can do this automatically when an index is created, which would
make the split feature a much more appealing option since it doesn't have
any 3rd party prerequisites anymore.

This change automatically sets the number of routing shards such that
an index is guaranteed to be able to split once into twice as many shards.
The number of routing shards is scaled towards the default shard limit per index
such that indices with a smaller amount of shards can be split more often than
larger ones. For instance an index with 1 or 2 shards can be split 10x
(until it approaches 1024 shards) while an index created with 128 shards can only
be split 3x by a factor of 2. Please note this is just a default value and users
can still prepare their indices with `index.number_of_routing_shards` for custom
splitting.
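
For reference, custom preparation might look like this (index name and numbers are illustrative); an index created this way can be split repeatedly by factors of 2 up to 1024 shards:

```
PUT my_index
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_routing_shards": 1024
  }
}
```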

NOTE: this change has an impact on the document distribution since we are changing
the hash space. Documents are still uniformly distributed across all shards, but since
we are artificially changing the number of buckets in the consistent hashing space
documents might be hashed into different shards compared to previous versions.

This is a 7.0 only change.
2017-11-23 09:48:54 +01:00
Takumasa Ochi eed8d1aee5 [DOC] Fix mathematical representation on interval (range) (#27450) 2017-11-21 17:06:26 +00:00
Jim Ferenczi d1093bd2fa #26800: Fix docs rendering 2017-11-20 08:41:02 +01:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-bucket aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation in composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg": {
      "terms": {
        "field": "field2"
      }
    }
  }
}
````
... which retrieves the top N terms for `field1` and for each top term in `field1` the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1`, `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
        "field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
        "field2": {
          "terms": {
            "field": "field2"
          }
        }
      }
    ]
  }
}
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, i.e. the values of the composite key to aggregate after.
For instance the following aggregation will aggregate all composite keys that sort after `arizona, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "arizona", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
        "field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
        "field2": {
          "terms": {
            "field": "field2"
          }
        }
      }
    ]
  }
}
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Dimitris Athanasiou 66bef26495
Aggregations: bucket_sort pipeline aggregation (#27152)
This commit adds a parent pipeline aggregation that allows
sorting the buckets of a parent multi-bucket aggregation.

The aggregation also offers [from] and [size] parameters
in order to truncate the result as desired.
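
A minimal sketch of the new aggregation as a sibling of the buckets it sorts (the agg name and the metric path `total_sales` are assumptions):

```
"sales_bucket_sort": {
  "bucket_sort": {
    "sort": [
      { "total_sales": { "order": "desc" } }
    ],
    "from": 0,
    "size": 3
  }
}
```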

Closes #14928
2017-11-09 17:59:57 +00:00
Dimitrios Athanasiou 3796471ac4 [Docs] Fix note in bucket_selector 2017-10-30 15:20:46 +00:00
Martijn van Groningen 87c9b79b10
Return the _source of inner hit nested as is without wrapping it into its full path context
Due to a change made via #26102 to make the nested source consistent
with or without source filtering, the _source of a nested inner hit was
always wrapped in the parent path. This turned out to be not ideal for
users relying on the nested source, as it would require additional parsing
on the client side. This change fixes this: the _source of nested inner hits
is now no longer wrapped by parent json objects, regardless of whether
the _source is included as is or source filtering is used.

Internally, source filtering and highlighting rely on the fact that the
_source of nested inner hits is accessible by its full field path, so
in order to not break this, the conversion of the _source into its binary
form is performed in FetchSourceSubPhase, after any potential source filtering
is performed, to make sure the structure of the _source of the nested inner hit
is consistent regardless of whether source filtering is performed.

PR for #26944

Closes #26944
2017-10-19 12:04:56 +02:00
shaulzorea 9db21cd23f fixing typo in datehistogram-aggregation.asciidoc (#26924) 2017-10-08 15:12:43 +02:00
Ryan Ernst c0c5d5488f Docs: Remove remaining references to file and native scripts (#26580)
relates #25690
2017-09-11 11:39:29 -07:00
shaulzorea 666cf4b872
fixing typo in nested-aggregation.asciidoc (#26481) 2017-09-04 06:42:44 +02:00
Tanguy Leroux 3d07bce504 [Docs] Fix tophits-aggregation.asciidoc 2017-08-30 13:06:44 +02:00
Tanguy Leroux 643eb286dc [Docs] Convert remaining code snippets in docs (#26422)
This commit converts the last remaining code snippets so that they are
now testable.
2017-08-30 12:11:10 +02:00
Jim Ferenczi 977dcfe789 Deprecate global_ordinals_hash and global_ordinals_low_cardinality (#26173)
* Deprecate global_ordinals_hash and global_ordinals_low_cardinality

This change deprecates the `global_ordinals_hash` and `global_ordinals_low_cardinality` hints and
makes the `global_ordinals` execution hint choose internally whether global ordinals should be remapped or the segment ordinals used directly.
These hints are too sensitive and expert-level to be exposed, and we should be able to make the right decision internally based on the agg tree.
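
After this change the hint below simply lets Elasticsearch pick the concrete strategy internally (field and agg names are illustrative):

```
"genres": {
  "terms": {
    "field": "genre",
    "execution_hint": "global_ordinals"
  }
}
```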
2017-08-21 19:12:27 +02:00
Christoph Büscher 5dae277bb2 Support distance units in GeoHashGrid aggregation precision (#26291)
Currently the `precision` parameter must be a precision level
in the range of [1,12]. In #5042 it was suggested to also support
distance units like "1km" to automatically approximate the needed
precision level. This change adds this support to the REST API by
making use of GeoUtils#geoHashLevelsForPrecision.

Plain integer values without a unit are still treated as precision
levels like before. Distance values that are too small to be represented
by a precision level of 12 (values approx. less than 0.056m) are
rejected.
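
With this change both an integer level and a distance string are accepted (field and agg names assumed); the distance is translated to the closest precision level:

```
"grid": {
  "geohash_grid": {
    "field": "location",
    "precision": "1km"
  }
}
```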

Closes #5042
2017-08-21 17:29:28 +02:00
Nik Everett 7e76b2a8c3 Docs: fold section into current chapter
In #25602 we added a new *chapter* on aggregating by day of the
week. We intended to add a new *section* but we were missing a
single `=`.
2017-08-17 11:19:02 -04:00
Nik Everett 6d2c40e546 Enforce that responses in docs are valid json (#26249)
All of the snippets in our docs marked with `// TESTRESPONSE` are
checked against the response from Elasticsearch but, due to the
way they are implemented, they are actually parsed as YAML instead
of JSON. Luckily, all valid JSON is valid YAML! Unfortunately,
that means that invalid JSON has snuck into the examples!

This adds a step during the build to parse them as JSON and fail
the build if they don't parse.

But no! It isn't quite that simple. The displayed text of some of
these responses looks like:
```
{
    ...
    "aggregations": {
        "range": {
            "buckets": [
                {
                    "to": 1.4436576E12,
                    "to_as_string": "10-2015",
                    "doc_count": 7,
                    "key": "*-10-2015"
                },
                {
                    "from": 1.4436576E12,
                    "from_as_string": "10-2015",
                    "doc_count": 0,
                    "key": "10-2015-*"
                }
            ]
        }
    }
}
```

Note the `...` which isn't valid json but we like it anyway and want
it in the output. We use substitution rules to convert the `...`
into the response we expect. That yields a response that looks like:
```
{
    "took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,
    "aggregations": {
        "range": {
            "buckets": [
                {
                    "to": 1.4436576E12,
                    "to_as_string": "10-2015",
                    "doc_count": 7,
                    "key": "*-10-2015"
                },
                {
                    "from": 1.4436576E12,
                    "from_as_string": "10-2015",
                    "doc_count": 0,
                    "key": "10-2015-*"
                }
            ]
        }
    }
}
```

That is what the tests consume but it isn't valid JSON! Oh no! We don't
want to go update all the substitution rules because that'd be huge and,
ultimately, wouldn't buy much. So we quote the `$body.took` bits before
parsing the JSON.

Note the responses that we use for the `_cat` APIs are all converted into
regexes and there is no expectation that they are valid JSON.

Closes #26233
2017-08-17 09:02:10 -04:00