[DOCS] Backporting GS search & aggs updates. (#46008)
* [DOCS] Streamlined GS aggs section. (#45951)
* Update docs/reference/getting-started.asciidoc
  Co-Authored-By: James Rodewig <james.rodewig@elastic.co>
* [DOCS] Fix typo. (#46006)
This commit is contained in: parent d50d700f14, commit 948b03856b
@@ -17,7 +17,7 @@ Step through this getting started tutorial to:

Need more context?
Check out the <<elasticsearch-intro,
Elasticsearch Introduction>> to learn the lingo and understand the basics of
{es} Introduction>> to learn the lingo and understand the basics of
how {es} works. If you're already familiar with {es} and want to see how it works
with the rest of the stack, you might want to jump to the
{stack-gs}/get-started-elastic-stack.html[Elastic Stack
@@ -26,15 +26,15 @@ Tutorial] to see how to set up a system monitoring solution with {es}, {kib},

TIP: The fastest way to get started with {es} is to
https://www.elastic.co/cloud/elasticsearch-service/signup[start a free 14-day
trial of Elasticsearch Service] in the cloud.
trial of {ess}] in the cloud.
--

[[getting-started-install]]
== Get {es} up and running

To take {es} for a test drive, you can create a one-click cloud deployment
on the https://www.elastic.co/cloud/elasticsearch-service/signup[Elasticsearch Service],
or <<run-elasticsearch-local, set up a multi-node {es} cluster>> on your own
To take {es} for a test drive, you can create a
https://www.elastic.co/cloud/elasticsearch-service/signup[hosted deployment] on
the {ess} or set up a multi-node {es} cluster on your own
Linux, macOS, or Windows machine.
@@ -42,13 +42,14 @@ Linux, macOS, or Windows machine.

[[run-elasticsearch-local]]
=== Run {es} locally on Linux, macOS, or Windows

When you create a cluster on the Elasticsearch Service, you automatically
get a three-node cluster. By installing from the tar or zip archive, you can
start multiple instances of {es} locally to see how a multi-node cluster behaves.
When you create a deployment on the {ess}, a master node and
two data nodes are provisioned automatically. By installing from the tar or zip
archive, you can start multiple instances of {es} locally to see how a multi-node
cluster behaves.

To run a three-node {es} cluster locally:

. Download the Elasticsearch archive for your OS:
. Download the {es} archive for your OS:
+
Linux: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-linux-x86_64.tar.gz[elasticsearch-{version}-linux-x86_64.tar.gz]
+
@@ -92,7 +93,7 @@ Windows PowerShell:

Expand-Archive elasticsearch-{version}-windows-x86_64.zip
--------------------------------------------------

. Start elasticsearch from the `bin` directory:
. Start {es} from the `bin` directory:
+
Linux and macOS:
+
@@ -386,28 +387,8 @@ And the response (partially shown):

// TESTRESPONSE[s/"took" : 63/"took" : $body.took/]
// TESTRESPONSE[s/\.\.\./$body.hits.hits.2, $body.hits.hits.3, $body.hits.hits.4, $body.hits.hits.5, $body.hits.hits.6, $body.hits.hits.7, $body.hits.hits.8, $body.hits.hits.9/]

As for the response, we see the following parts:

* `took` – time in milliseconds for Elasticsearch to execute the search
* `timed_out` – tells us if the search timed out or not
* `_shards` – tells us how many shards were searched, as well as a count of the successful/failed searched shards
* `hits` – search results
* `hits.total` – an object that contains information about the total number of documents matching our search criteria
** `hits.total.value` - the value of the total hit count (must be interpreted in the context of `hits.total.relation`).
** `hits.total.relation` - whether `hits.total.value` is the exact hit count, in which case it is equal to `"eq"`, or a
lower bound of the total hit count (greater than or equal), in which case it is equal to `gte`.
* `hits.hits` – actual array of search results (defaults to first 10 documents)
* `hits.sort` - sort value of the sort key for each result (missing if sorting by score)
* `hits._score` and `max_score` - ignore these fields for now

The accuracy of `hits.total` is controlled by the request parameter `track_total_hits`; when set to true,
the request will track the total hits accurately (`"relation": "eq"`). It defaults to `10,000`,
which means that the total hit count is accurately tracked up to `10,000` documents.
You can force an accurate count by setting `track_total_hits` to true explicitly.
See the <<request-body-search-track-total-hits, request body>> documentation
for more details.
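A client consuming the response should check `hits.total.relation` before treating `hits.total.value` as an exact count. A minimal Python sketch of that interpretation (the helper name is ours, purely illustrative, not part of any {es} client):

```python
def describe_total(hits_total):
    """Interpret the hits.total object from a search response."""
    value = hits_total["value"]
    if hits_total["relation"] == "eq":
        return f"exactly {value} hits"   # exact count
    return f"at least {value} hits"      # "gte": value is only a lower bound

# With the default track_total_hits threshold of 10,000, a large
# result set reports a lower bound rather than an exact count:
print(describe_total({"value": 10000, "relation": "gte"}))  # at least 10000 hits
print(describe_total({"value": 42, "relation": "eq"}))      # exactly 42 hits
```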

Here is the same exact search above using the alternative request body method:
For example, the following request retrieves all documents in the `bank`
index sorted by account number:

[source,js]
--------------------------------------------------
@@ -506,7 +487,9 @@ GET /bank/_search

// CONSOLE
// TEST[continued]

Note that if `size` is not specified, it defaults to 10.
Each search request is self-contained: {es} does not maintain any
state information across requests. To page through the search hits, specify
the `from` and `size` parameters in your request.

This example does a `match_all` and returns documents 10 through 19:
@@ -524,65 +507,9 @@ GET /bank/_search

The `from` parameter (0-based) specifies which document index to start from, and the `size` parameter specifies how many documents to return starting at the `from` parameter. This feature is useful when implementing paging of search results. Note that if `from` is not specified, it defaults to 0.
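The `from`/`size` arithmetic for paging is simple enough to sketch; the helper name below is illustrative, not part of any {es} client library:

```python
def page_params(page, page_size=10):
    """Return the from/size body parameters for a zero-based page number."""
    return {"from": page * page_size, "size": page_size}

# The second page (documents 10 through 19) of ten-hit pages:
print(page_params(1))  # {'from': 10, 'size': 10}
```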

This example does a `match_all` and sorts the results by account balance in descending order and returns the top 10 (default size) documents.

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "query": { "match_all": {} },
  "sort": { "balance": { "order": "desc" } }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

Now that we have seen a few of the basic search parameters, let's dig in some more into the Query DSL. Let's first take a look at the returned document fields. By default, the full JSON document is returned as part of all searches. This is referred to as the source (`_source` field in the search hits). If we don't want the entire source document returned, we have the ability to request only a few fields from within source to be returned.

This example shows how to return two fields, `account_number` and `balance` (inside of `_source`), from the search:

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "query": { "match_all": {} },
  "_source": ["account_number", "balance"]
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

Note that the above example simply reduces the `_source` field. It will still only return one field named `_source` but within it, only the fields `account_number` and `balance` are included.

If you come from a SQL background, the above is somewhat similar in concept to the SQL `SELECT FROM` field list.
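Conceptually, `_source` filtering just projects each returned document onto the requested fields, as in this illustrative sketch (the helper and sample values are ours, not how {es} implements it internally):

```python
def filter_source(doc, fields):
    """Project a document onto the requested top-level _source fields."""
    return {field: doc[field] for field in fields if field in doc}

# Illustrative account document, not taken from the real bank dataset:
doc = {"account_number": 20, "balance": 16418, "firstname": "Elinor", "state": "MD"}
print(filter_source(doc, ["account_number", "balance"]))
# {'account_number': 20, 'balance': 16418}
```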

Now let's move on to the query part. Previously, we've seen how the `match_all` query is used to match all documents. Let's now introduce a new query called the {ref}/query-dsl-match-query.html[`match` query], which can be thought of as a basic fielded search query (i.e. a search done against a specific field or set of fields).

This example returns the account numbered 20:

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "query": { "match": { "account_number": 20 } }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

This example returns all accounts containing the term "mill" in the address:

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "query": { "match": { "address": "mill" } }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

This example returns all accounts containing the term "mill" or "lane" in the address:
To search for specific terms within a field, you can use a `match` query.
For example, the following request searches the `address` field to find
customers whose addresses contain `mill` or `lane`:

[source,js]
--------------------------------------------------

@@ -735,9 +662,15 @@ In addition to the `match_all`, `match`, `bool`, and `range` queries, there are

[[getting-started-aggregations]]
== Analyze results with aggregations

Aggregations provide the ability to group and extract statistics from your data. The easiest way to think about aggregations is by roughly equating it to the SQL GROUP BY and the SQL aggregate functions. In Elasticsearch, you have the ability to execute searches returning hits and at the same time return aggregated results separate from the hits all in one response. This is very powerful and efficient in the sense that you can run queries and multiple aggregations and get the results back of both (or either) operations in one shot avoiding network roundtrips using a concise and simplified API.
{es} aggregations enable you to get meta-information about your search results
and answer questions like, "How many account holders are in Texas?" or
"What's the average balance of accounts in Tennessee?" You can search
documents, filter hits, and use aggregations to analyze the results all in one
request.

To start with, this example groups all the accounts by state, and then returns the top 10 (default) states sorted by count descending (also default):
For example, the following request uses a `terms` aggregation to group
all of the accounts in the `bank` index by state, and returns the ten states
with the most accounts in descending order:

[source,js]
--------------------------------------------------

@@ -756,14 +689,10 @@ GET /bank/_search

// CONSOLE
// TEST[continued]

In SQL, the above aggregation is similar in concept to:

[source,sh]
--------------------------------------------------
SELECT state, COUNT(*) FROM bank GROUP BY state ORDER BY COUNT(*) DESC LIMIT 10;
--------------------------------------------------

And the response (partially shown):
The `buckets` in the response are the values of the `state` field. The
`doc_count` shows the number of accounts in each state. For example, you
can see that there are 27 accounts in `ID` (Idaho). Because the request
set `size=0`, the response only contains the aggregation results.
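The SQL analogy above can also be sketched in plain Python: a `terms` aggregation is conceptually a count per distinct field value, sorted by count descending. The documents here are illustrative, not the real `bank` dataset:

```python
from collections import Counter

accounts = [
    {"state": "ID"}, {"state": "TX"}, {"state": "ID"}, {"state": "AL"},
]

# Count accounts per state and emit terms-style buckets, largest first.
counts = Counter(account["state"] for account in accounts)
buckets = [
    {"key": state, "doc_count": n} for state, n in counts.most_common(10)
]
print(buckets[0])  # {'key': 'ID', 'doc_count': 2}
```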

[source,js]
--------------------------------------------------

@@ -825,12 +754,11 @@ And the response (partially shown):

--------------------------------------------------
// TESTRESPONSE[s/"took": 29/"took": $body.took/]

We can see that there are 27 accounts in `ID` (Idaho), followed by 27 accounts
in `TX` (Texas), followed by 25 accounts in `AL` (Alabama), and so forth.

Note that we set `size=0` to not show search hits because we only want to see the aggregation results in the response.

Building on the previous aggregation, this example calculates the average account balance by state (again only for the top 10 states sorted by count in descending order):
You can combine aggregations to build more complex summaries of your data. For
example, the following request nests an `avg` aggregation within the previous
`group_by_state` aggregation to calculate the average account balances for
each state.

[source,js]
--------------------------------------------------

@@ -856,9 +784,8 @@ GET /bank/_search

// CONSOLE
// TEST[continued]

Notice how we nested the `average_balance` aggregation inside the `group_by_state` aggregation. This is a common pattern for all the aggregations. You can nest aggregations inside aggregations arbitrarily to extract pivoted summarizations that you require from your data.
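A request body with one aggregation nested inside another can be assembled programmatically. The aggregation names `group_by_state` and `average_balance` follow the example above; the helper itself is purely illustrative:

```python
def terms_with_avg(terms_field, avg_field):
    """Build a search body: a terms aggregation with a nested avg aggregation."""
    return {
        "size": 0,
        "aggs": {
            "group_by_state": {
                "terms": {"field": terms_field},
                # Nested aggregation: computed once per state bucket.
                "aggs": {"average_balance": {"avg": {"field": avg_field}}},
            }
        },
    }

body = terms_with_avg("state.keyword", "balance")
print("average_balance" in body["aggs"]["group_by_state"]["aggs"])  # True
```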

Building on the previous aggregation, let's now sort on the average balance in descending order:
Instead of sorting the results by count, you could sort using the result of
the nested aggregation by specifying the order within the `terms` aggregation:

[source,js]
--------------------------------------------------

@@ -887,54 +814,14 @@ GET /bank/_search

// CONSOLE
// TEST[continued]

This example demonstrates how we can group by age brackets (ages 20-29, 30-39, and 40-49), then by gender, and then finally get the average account balance, per age bracket, per gender:
In addition to basic bucketing and metrics aggregations like these, {es}
provides specialized aggregations for operating on multiple fields and
analyzing particular types of data such as dates, IP addresses, and geo
data. You can also feed the results of individual aggregations into pipeline
aggregations for further analysis.
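As an illustration of that last point, a pipeline aggregation such as `avg_bucket` reads the output of a sibling aggregation through a `buckets_path`. The sketch below only builds a request body under that assumption; the aggregation names are ours:

```python
# Average of the per-state average balances, computed by a pipeline aggregation.
body = {
    "size": 0,
    "aggs": {
        "group_by_state": {
            "terms": {"field": "state.keyword"},
            "aggs": {"average_balance": {"avg": {"field": "balance"}}},
        },
        "avg_state_balance": {
            # buckets_path points at the sibling aggregation's metric.
            "avg_bucket": {"buckets_path": "group_by_state>average_balance"}
        },
    },
}
print(body["aggs"]["avg_state_balance"]["avg_bucket"]["buckets_path"])
# group_by_state>average_balance
```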

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "size": 0,
  "aggs": {
    "group_by_age": {
      "range": {
        "field": "age",
        "ranges": [
          {
            "from": 20,
            "to": 30
          },
          {
            "from": 30,
            "to": 40
          },
          {
            "from": 40,
            "to": 50
          }
        ]
      },
      "aggs": {
        "group_by_gender": {
          "terms": {
            "field": "gender.keyword"
          },
          "aggs": {
            "average_balance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

There are many other aggregations capabilities that we won't go into detail here. The {ref}/search-aggregations.html[aggregations reference guide] is a great starting point if you want to do further experimentation.
The core analysis capabilities provided by aggregations enable advanced
features such as using machine learning to detect anomalies.

[[getting-started-next-steps]]
== Where to go from here