[DOCS] Fix hyphenation for "time series" (#61472) (#61481)

James Rodewig 2020-08-24 11:18:07 -04:00 committed by GitHub
parent 618dd65d5f
commit 2b852388c5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 31 additions and 33 deletions

View File

@@ -49,8 +49,8 @@ syntax.
 [[date-math-index-names]]
 === Date math support in index names
-Date math index name resolution enables you to search a range of time-series indices, rather
-than searching all of your time-series indices and filtering the results or maintaining aliases.
+Date math index name resolution enables you to search a range of time series indices, rather
+than searching all of your time series indices and filtering the results or maintaining aliases.
 Limiting the number of indices that are searched reduces the load on the cluster and improves
 execution performance. For example, if you are searching for errors in your
 daily logs, you can use a date math name template to restrict the search to the past
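The date math name templates mentioned in this hunk can be illustrated with a small sketch (Python, not part of the docs diff; a simplified resolver covering only the common `now`, `-Nd` offset, and `/d` rounding cases, assuming UTC and the default `yyyy.MM.dd` format):

```python
import re
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

def resolve_date_math_name(template: str, now: datetime) -> str:
    """Resolve a simplified <name-{now/d}> style template to a concrete index name.

    Supports only `now`, an optional `-Nd` offset, and `/d` day rounding --
    a small illustrative subset of the real Elasticsearch date math syntax.
    """
    m = re.fullmatch(r"<(?P<prefix>[^{]*)\{(?P<expr>[^}]*)\}>", template)
    if not m:
        raise ValueError(f"unsupported template: {template}")
    expr = m.group("expr")
    t = now
    if (offset := re.search(r"now-(\d+)d", expr)):
        t -= timedelta(days=int(offset.group(1)))
    if "/d" in expr:
        # Round down to the start of the day.
        t = t.replace(hour=0, minute=0, second=0, microsecond=0)
    return f"{m.group('prefix')}{t:%Y.%m.%d}"

def encode_for_request(template: str) -> str:
    # Date math index names must be percent-encoded in request paths.
    return quote(template, safe="")

now = datetime(2020, 8, 24, 11, 18, tzinfo=timezone.utc)
print(resolve_date_math_name("<logstash-{now/d}>", now))     # logstash-2020.08.24
print(resolve_date_math_name("<logstash-{now-1d/d}>", now))  # logstash-2020.08.23
```

The second call shows the "past day" restriction the surrounding text describes: offsetting `now` by one day before rounding yields yesterday's daily index, so a search can target exactly the indices that cover the window of interest.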

View File

@@ -6,9 +6,9 @@
 ++++
 A _data stream_ is a convenient, scalable way to ingest, search, and manage
-continuously generated time-series data.
-Time-series data, such as logs, tends to grow over time. While storing an entire
+continuously generated time series data.
+Time series data, such as logs, tends to grow over time. While storing an entire
 time series in a single {es} index is simpler, it is often more efficient and
 cost-effective to store large volumes of data across multiple, time-based
 indices. Multiple indices let you move indices containing older, less frequently
@@ -38,10 +38,10 @@ budget, performance, resiliency, and retention needs.
 We recommend using data streams if you:
-* Use {es} to ingest, search, and manage large volumes of time-series data
+* Use {es} to ingest, search, and manage large volumes of time series data
 * Want to scale and reduce costs by using {ilm-init} to automate the management
 of your indices
-* Index large volumes of time-series data in {es} but rarely delete or update
+* Index large volumes of time series data in {es} but rarely delete or update
 individual documents
@@ -161,7 +161,7 @@ manually perform a rollover. See <<manually-roll-over-a-data-stream>>.
 [[data-streams-append-only]]
 == Append-only
-For most time-series use cases, existing data is rarely, if ever, updated.
+For most time series use cases, existing data is rarely, if ever, updated.
 Because of this, data streams are designed to be append-only.
 You can send <<add-documents-to-a-data-stream,indexing requests for new

View File

@@ -21,12 +21,9 @@ and its backing indices.
 [[data-stream-prereqs]]
 === Prerequisites
-* {es} data streams are intended for time-series data only. Each document
-indexed to a data stream must contain a shared timestamp field.
-
-TIP: Data streams work well with most common log formats. While no schema is
-required to use data streams, we recommend the {ecs-ref}[Elastic Common Schema
-(ECS)].
+* {es} data streams are intended for time series data only. Each document
+indexed to a data stream must contain the `@timestamp` field. This field must be
+mapped as a <<date,`date`>> or <<date_nanos,`date_nanos`>> field data type.
 * Data streams are best suited for time-based,
 <<data-streams-append-only,append-only>> use cases. If you frequently need to
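The `@timestamp` prerequisite in this hunk can be sketched as a client-side check (Python; a hypothetical helper, not Elasticsearch code — the cluster itself enforces the rule through the `date` or `date_nanos` mapping in the matching index template):

```python
from datetime import datetime

def has_valid_timestamp(doc: dict) -> bool:
    """Check the data stream prerequisite: a parseable `@timestamp` field.

    Illustrative only; real Elasticsearch `date` mappings accept
    configurable formats and epoch milliseconds as well.
    """
    value = doc.get("@timestamp")
    if not isinstance(value, str):
        return False
    try:
        # fromisoformat handles the common ISO-8601 case; normalize a
        # trailing "Z" so Python accepts the UTC designator.
        datetime.fromisoformat(value.replace("Z", "+00:00"))
        return True
    except ValueError:
        return False

print(has_valid_timestamp({"@timestamp": "2020-08-24T11:18:07Z", "message": "ok"}))  # True
print(has_valid_timestamp({"message": "no timestamp"}))  # False
```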

View File

@@ -9,7 +9,7 @@
 experimental::[]
 {eql-ref}/index.html[Event Query Language (EQL)] is a query language for
-event-based, time-series data, such as logs.
+event-based, time series data, such as logs.
 [discrete]
 [[eql-advantages]]

View File

@@ -73,7 +73,7 @@ multiple clusters. See <<modules-cross-cluster-search>>.
 +
 --
 // tag::data-stream-def[]
-A named resource used to ingest, search, and manage time-series data in {es}. A
+A named resource used to ingest, search, and manage time series data in {es}. A
 data stream's data is stored across multiple hidden, auto-generated
 <<glossary-index,indices>>. You can automate management of these indices to more
 efficiently store large data volumes.

View File

@@ -23,7 +23,7 @@ include::../glossary.asciidoc[tag=freeze-def-short]
 * **Delete**: Permanently remove an index, including all of its data and metadata.
 {ilm-init} makes it easier to manage indices in hot-warm-cold architectures,
-which are common when you're working with time-series data such as logs and metrics.
+which are common when you're working with time series data such as logs and metrics.
 You can specify:

View File

@@ -271,18 +271,18 @@ DELETE /_index_template/timeseries_template
 [discrete]
 [[manage-time-series-data-without-data-streams]]
-=== Manage time-series data without data streams
+=== Manage time series data without data streams
 Even though <<data-streams, data streams>> are a convenient way to scale
-and manage time-series data, they are designed to be append-only. We recognise there
+and manage time series data, they are designed to be append-only. We recognise there
 might be use-cases where data needs to be updated or deleted in place and the
 data streams don't support delete and update requests directly,
 so the index APIs would need to be used directly on the data stream's backing indices.
-In these cases, you can use an index alias to manage indices containing the time-series data
+In these cases, you can use an index alias to manage indices containing the time series data
 and periodically roll over to a new index.
-To automate rollover and management of time-series indices with {ilm-init} using an index
+To automate rollover and management of time series indices with {ilm-init} using an index
 alias, you:
 . Create a lifecycle policy that defines the appropriate phases and actions.
@@ -352,7 +352,7 @@ DELETE _index_template/timeseries_template
 [discrete]
 [[ilm-gs-alias-bootstrap]]
-=== Bootstrap the initial time-series index with a write index alias
+=== Bootstrap the initial time series index with a write index alias
 To get things started, you need to bootstrap an initial index and
 designate it as the write index for the rollover alias specified in your index template.
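The bootstrap step this hunk's heading refers to amounts to creating a first index whose name ends in a padded counter and marking it as the alias's write index via `is_write_index`. A sketch of the request (Python; the `timeseries-000001` index and `timeseries` alias names are placeholders matching the surrounding `timeseries_template` example):

```python
import json

# Hypothetical names for illustration; any index template's rollover
# alias would follow the same shape.
bootstrap_index = "timeseries-000001"
body = {"aliases": {"timeseries": {"is_write_index": True}}}

# PUT /timeseries-000001 with this body creates the index and
# designates it as the write index for the "timeseries" alias.
print(f"PUT /{bootstrap_index}")
print(json.dumps(body, indent=2))
```

Ending the name in `-000001` matters: the rollover action increments that numeric suffix when it creates each subsequent index in the series.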

View File

@@ -1,7 +1,7 @@
 [[index-rollover]]
 === Rollover
-When indexing time-series data like logs or metrics, you can't write to a single index indefinitely.
+When indexing time series data like logs or metrics, you can't write to a single index indefinitely.
 To meet your indexing and search performance requirements and manage resource usage,
 you write to an index until some threshold is met and
 then create a new index and start writing to it instead.
@@ -12,7 +12,7 @@ Using rolling indices enables you to:
 * Shift older, less frequently accessed data to less expensive _cold_ nodes,
 * Delete data according to your retention policies by removing entire indices.
-We recommend using <<indices-create-data-stream, data streams>> to manage time-series
+We recommend using <<indices-create-data-stream, data streams>> to manage time series
 data. Data streams automatically track the write index while keeping configuration to a minimum.
 Each data stream requires an <<indices-templates,index template>> that contains:
@@ -27,7 +27,7 @@ Each data stream requires an <<indices-templates,index template>> that contains:
 Data streams are designed for append-only data, where the data stream name
 can be used as the operations (read, write, rollover, shrink etc.) target.
-If your use case requires data to be updated in place, you can instead manage your time-series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
+If your use case requires data to be updated in place, you can instead manage your time series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
 concepts:
 * An _index template_ that specifies the settings for each new index in the series.
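The write-until-threshold-then-roll pattern described in these hunks can be sketched as follows (Python, illustrative only; the `logs-000001` naming and the document-count condition mirror rollover conventions, but both helpers are hypothetical):

```python
import re

def next_rollover_index(current: str) -> str:
    """Compute the next index name in a rollover series, e.g. logs-000001 -> logs-000002."""
    m = re.fullmatch(r"(?P<base>.+)-(?P<n>\d+)", current)
    if not m:
        raise ValueError(f"not a rollover-style name: {current}")
    width = len(m.group("n"))  # preserve zero padding
    return f"{m.group('base')}-{int(m.group('n')) + 1:0{width}d}"

def should_roll_over(doc_count: int, max_docs: int) -> bool:
    # One example condition; real rollover can also trigger on index age or size.
    return doc_count >= max_docs

print(next_rollover_index("logs-000001"))  # logs-000002
print(should_roll_over(doc_count=1_000_000, max_docs=1_000_000))  # True
```

With data streams, this bookkeeping disappears: the stream tracks its own write index and rolls over to a new backing index when the configured conditions are met.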

View File

@@ -98,15 +98,16 @@ Name of the data stream.
 `timestamp_field`::
 (object)
-Contains information about the data stream's timestamp field.
+Contains information about the data stream's `@timestamp` field.
 +
 .Properties of `timestamp_field`
 [%collapsible%open]
 =====
 `name`::
 (string)
-Name of the data stream's timestamp field. This field must be included in every
-document indexed to the data stream.
+Name of the data stream's timestamp field, which must be `@timestamp`. The
+`@timestamp` field must be included in every document indexed to the data
+stream.
 =====
 `indices`::

View File

@@ -163,7 +163,7 @@ embroidery_ needles.
 [[more-features]]
 ===== But wait, there's more
-Want to automate the analysis of your time-series data? You can use
+Want to automate the analysis of your time series data? You can use
 {ml-docs}/ml-overview.html[machine learning] features to create accurate
 baselines of normal behavior in your data and identify anomalous patterns. With
 machine learning, you can detect:

View File

@@ -385,7 +385,7 @@ default rules of dynamic mappings. Of course if you do not need them because
 you don't need to perform exact search or aggregate on this field, you could
 remove it as described in the previous section.
-===== Time-series
+===== Time series
 When doing time series analysis with Elasticsearch, it is common to have many
 numeric fields that you will often aggregate on but never filter on. In such a

View File

@@ -98,9 +98,9 @@ Guidelines
 By default, {es} changes the values of `text` fields during analysis. For
 example, ...
-===== Using the `sample` query on time-series data
-You can use the `sample` query to perform searches on time-series data.
+===== Using the `sample` query on time series data
+You can use the `sample` query to perform searches on time series data.
 For example:
 [source,console]

View File

@@ -68,7 +68,7 @@ session ID. This string cannot start with a `_`.
 TIP: You can use this option to serve cached results for frequently used and
 resource-intensive searches. If the shard's data doesn't change, repeated
 searches with the same `preference` string retrieve results from the same
-<<shard-request-cache,shard request cache>>. For time-series use cases, such as
+<<shard-request-cache,shard request cache>>. For time series use cases, such as
 logging, data in older indices is rarely updated and can be served directly from
 this cache.
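The cache behavior in this hunk's TIP follows from how a `preference` string is used: the same string deterministically selects the same shard copies, so repeated searches land where the cached results live. A rough sketch of that idea (Python; hypothetical, not the actual Elasticsearch routing code):

```python
import hashlib

def pick_copy(preference: str, copies: list[str]) -> str:
    """Deterministically pick one shard copy for a preference string.

    Hypothetical sketch: a stable hash of the preference selects the copy,
    so identical strings always route to the same node (and its cache).
    """
    h = int.from_bytes(hashlib.sha256(preference.encode()).digest()[:8], "big")
    return copies[h % len(copies)]

copies = ["node-1", "node-2", "node-3"]
first = pick_copy("session-abc", copies)
# The same preference string always maps to the same copy:
assert all(pick_copy("session-abc", copies) == first for _ in range(10))
```

A session ID or user ID makes a natural preference string: each user's repeated searches hit one copy's shard request cache instead of being spread across replicas.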

View File

@@ -2,7 +2,7 @@
 [[watching-time-series-data]]
 === Watching time series data
-If you are indexing time-series data such as logs, RSS feeds, or network traffic,
+If you are indexing time series data such as logs, RSS feeds, or network traffic,
 you can use {watcher} to send notifications when certain events occur.
 For example, you could index an RSS feed of posts on Stack Overflow that are