[DOCS] Fix hyphenation for "time series" (#61472) (#61481)

James Rodewig 2020-08-24 11:18:07 -04:00 committed by GitHub
parent 618dd65d5f
commit 2b852388c5
14 changed files with 31 additions and 33 deletions

View File

@@ -49,8 +49,8 @@ syntax.
[[date-math-index-names]]
=== Date math support in index names
-Date math index name resolution enables you to search a range of time-series indices, rather
-than searching all of your time-series indices and filtering the results or maintaining aliases.
+Date math index name resolution enables you to search a range of time series indices, rather
+than searching all of your time series indices and filtering the results or maintaining aliases.
Limiting the number of indices that are searched reduces the load on the cluster and improves
execution performance. For example, if you are searching for errors in your
daily logs, you can use a date math name template to restrict the search to the past
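As a rough sketch of the date math syntax this hunk describes: the `logstash-` index pattern and the match query below are placeholders, and the date math expression is percent-encoded because it appears in the URL path.

[source,console]
----
# Search yesterday's daily index, i.e. <logstash-{now/d-1d}>, percent-encoded for the URL
GET /%3Clogstash-%7Bnow%2Fd-1d%7D%3E/_search
{
  "query": {
    "match": {
      "message": "error"
    }
  }
}
----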

View File

@@ -6,9 +6,9 @@
++++
A _data stream_ is a convenient, scalable way to ingest, search, and manage
-continuously generated time-series data.
+continuously generated time series data.
-Time-series data, such as logs, tends to grow over time. While storing an entire
+Time series data, such as logs, tends to grow over time. While storing an entire
time series in a single {es} index is simpler, it is often more efficient and
cost-effective to store large volumes of data across multiple, time-based
indices. Multiple indices let you move indices containing older, less frequently
@@ -38,10 +38,10 @@ budget, performance, resiliency, and retention needs.
We recommend using data streams if you:
-* Use {es} to ingest, search, and manage large volumes of time-series data
+* Use {es} to ingest, search, and manage large volumes of time series data
* Want to scale and reduce costs by using {ilm-init} to automate the management
of your indices
-* Index large volumes of time-series data in {es} but rarely delete or update
+* Index large volumes of time series data in {es} but rarely delete or update
individual documents
@@ -161,7 +161,7 @@ manually perform a rollover. See <<manually-roll-over-a-data-stream>>.
[[data-streams-append-only]]
== Append-only
-For most time-series use cases, existing data is rarely, if ever, updated.
+For most time series use cases, existing data is rarely, if ever, updated.
Because of this, data streams are designed to be append-only.
You can send <<add-documents-to-a-data-stream,indexing requests for new

View File

@@ -21,12 +21,9 @@ and its backing indices.
[[data-stream-prereqs]]
=== Prerequisites
-* {es} data streams are intended for time-series data only. Each document
-indexed to a data stream must contain a shared timestamp field.
-+
-TIP: Data streams work well with most common log formats. While no schema is
-required to use data streams, we recommend the {ecs-ref}[Elastic Common Schema
-(ECS)].
+* {es} data streams are intended for time series data only. Each document
+indexed to a data stream must contain the `@timestamp` field. This field must be
+mapped as a <<date,`date`>> or <<date_nanos,`date_nanos`>> field data type.
* Data streams are best suited for time-based,
<<data-streams-append-only,append-only>> use cases. If you frequently need to
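A minimal sketch of the `@timestamp` requirement these lines describe, assuming a composable index template named `my-template` and a `my-data-stream*` pattern (both placeholders; the exact shape of the `data_stream` object varies across 7.x releases):

[source,console]
----
PUT /_index_template/my-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date"
        }
      }
    }
  }
}
----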

View File

@@ -9,7 +9,7 @@
experimental::[]
{eql-ref}/index.html[Event Query Language (EQL)] is a query language for
-event-based, time-series data, such as logs.
+event-based, time series data, such as logs.
[discrete]
[[eql-advantages]]
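A minimal EQL search sketch for the event-based, time series data mentioned above; `my-index` and the process query are placeholders:

[source,console]
----
GET /my-index/_eql/search
{
  "query": """
    process where process.name == "cmd.exe"
  """
}
----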

View File

@@ -73,7 +73,7 @@ multiple clusters. See <<modules-cross-cluster-search>>.
+
--
// tag::data-stream-def[]
-A named resource used to ingest, search, and manage time-series data in {es}. A
+A named resource used to ingest, search, and manage time series data in {es}. A
data stream's data is stored across multiple hidden, auto-generated
<<glossary-index,indices>>. You can automate management of these indices to more
efficiently store large data volumes.

View File

@@ -23,7 +23,7 @@ include::../glossary.asciidoc[tag=freeze-def-short]
* **Delete**: Permanently remove an index, including all of its data and metadata.
{ilm-init} makes it easier to manage indices in hot-warm-cold architectures,
-which are common when you're working with time-series data such as logs and metrics.
+which are common when you're working with time series data such as logs and metrics.
You can specify:
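For reference, a lifecycle policy along the lines this file describes might look like the following sketch, with a hot-phase rollover and a delete phase; the policy name and thresholds are placeholders:

[source,console]
----
PUT _ilm/policy/timeseries_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----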

View File

@@ -271,18 +271,18 @@ DELETE /_index_template/timeseries_template
[discrete]
[[manage-time-series-data-without-data-streams]]
-=== Manage time-series data without data streams
+=== Manage time series data without data streams
Even though <<data-streams, data streams>> are a convenient way to scale
-and manage time-series data, they are designed to be append-only. We recognise there
+and manage time series data, they are designed to be append-only. We recognise there
might be use-cases where data needs to be updated or deleted in place and the
data streams don't support delete and update requests directly,
so the index APIs would need to be used directly on the data stream's backing indices.
-In these cases, you can use an index alias to manage indices containing the time-series data
+In these cases, you can use an index alias to manage indices containing the time series data
and periodically roll over to a new index.
-To automate rollover and management of time-series indices with {ilm-init} using an index
+To automate rollover and management of time series indices with {ilm-init} using an index
alias, you:
. Create a lifecycle policy that defines the appropriate phases and actions.
@@ -352,7 +352,7 @@ DELETE _index_template/timeseries_template
[discrete]
[[ilm-gs-alias-bootstrap]]
-=== Bootstrap the initial time-series index with a write index alias
+=== Bootstrap the initial time series index with a write index alias
To get things started, you need to bootstrap an initial index and
designate it as the write index for the rollover alias specified in your index template.
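The bootstrap step this heading refers to is typically a single create-index request that also marks the index as the alias's write index; the index and alias names below follow the `timeseries_template` naming used elsewhere in this file, but treat them as placeholders:

[source,console]
----
PUT timeseries-000001
{
  "aliases": {
    "timeseries": {
      "is_write_index": true
    }
  }
}
----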

View File

@@ -1,7 +1,7 @@
[[index-rollover]]
=== Rollover
-When indexing time-series data like logs or metrics, you can't write to a single index indefinitely.
+When indexing time series data like logs or metrics, you can't write to a single index indefinitely.
To meet your indexing and search performance requirements and manage resource usage,
you write to an index until some threshold is met and
then create a new index and start writing to it instead.
@@ -12,7 +12,7 @@ Using rolling indices enables you to:
* Shift older, less frequently accessed data to less expensive _cold_ nodes,
* Delete data according to your retention policies by removing entire indices.
-We recommend using <<indices-create-data-stream, data streams>> to manage time-series
+We recommend using <<indices-create-data-stream, data streams>> to manage time series
data. Data streams automatically track the write index while keeping configuration to a minimum.
Each data stream requires an <<indices-templates,index template>> that contains:
@@ -27,7 +27,7 @@ Each data stream requires an <<indices-templates,index template>> that contains:
Data streams are designed for append-only data, where the data stream name
can be used as the operations (read, write, rollover, shrink etc.) target.
-If your use case requires data to be updated in place, you can instead manage your time-series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
+If your use case requires data to be updated in place, you can instead manage your time series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
concepts:
* An _index template_ that specifies the settings for each new index in the series.
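When managing rolling indices through an alias as described here, the rollover itself is a single API call; the alias name and conditions below are placeholders:

[source,console]
----
POST /my-alias/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 100000,
    "max_size": "5gb"
  }
}
----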

View File

@@ -98,15 +98,16 @@ Name of the data stream.
`timestamp_field`::
(object)
-Contains information about the data stream's timestamp field.
+Contains information about the data stream's `@timestamp` field.
+
.Properties of `timestamp_field`
[%collapsible%open]
=====
`name`::
(string)
-Name of the data stream's timestamp field. This field must be included in every
-document indexed to the data stream.
+Name of the data stream's timestamp field, which must be `@timestamp`. The
+`@timestamp` field must be included in every document indexed to the data
+stream.
=====
`indices`::
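An abbreviated sketch of where `timestamp_field` appears in a get data stream API response; the stream and backing index names are placeholders and other response fields are omitted:

[source,console-result]
----
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-000001"
        }
      ],
      "generation": 1
    }
  ]
}
----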

View File

@@ -163,7 +163,7 @@ embroidery_ needles.
[[more-features]]
===== But wait, there's more
-Want to automate the analysis of your time-series data? You can use
+Want to automate the analysis of your time series data? You can use
{ml-docs}/ml-overview.html[machine learning] features to create accurate
baselines of normal behavior in your data and identify anomalous patterns. With
machine learning, you can detect:

View File

@@ -385,7 +385,7 @@ default rules of dynamic mappings. Of course if you do not need them because
you don't need to perform exact search or aggregate on this field, you could
remove it as described in the previous section.
-===== Time-series
+===== Time series
When doing time series analysis with Elasticsearch, it is common to have many
numeric fields that you will often aggregate on but never filter on. In such a
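One common approach for such aggregate-only numeric fields is to disable indexing while keeping doc values for aggregations; a minimal sketch with placeholder index and field names:

[source,console]
----
PUT metrics-index
{
  "mappings": {
    "properties": {
      "cpu_time": {
        "type": "float",
        "index": false
      }
    }
  }
}
----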

View File

@@ -98,9 +98,9 @@ Guidelines
By default, {es} changes the values of `text` fields during analysis. For
example, ...
-===== Using the `sample` query on time-series data
+===== Using the `sample` query on time series data
-You can use the `sample` query to perform searches on time-series data.
+You can use the `sample` query to perform searches on time series data.
For example:
[source,console]

View File

@@ -68,7 +68,7 @@ session ID. This string cannot start with a `_`.
TIP: You can use this option to serve cached results for frequently used and
resource-intensive searches. If the shard's data doesn't change, repeated
searches with the same `preference` string retrieve results from the same
-<<shard-request-cache,shard request cache>>. For time-series use cases, such as
+<<shard-request-cache,shard request cache>>. For time series use cases, such as
logging, data in older indices is rarely updated and can be served directly from
this cache.
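A hedged example of the `preference` string described above; the index name, preference value, and query are placeholders:

[source,console]
----
GET /my-index-000001/_search?preference=my-custom-session-string
{
  "query": {
    "match": {
      "message": "error"
    }
  }
}
----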

View File

@@ -2,7 +2,7 @@
[[watching-time-series-data]]
=== Watching time series data
-If you are indexing time-series data such as logs, RSS feeds, or network traffic,
+If you are indexing time series data such as logs, RSS feeds, or network traffic,
you can use {watcher} to send notifications when certain events occur.
For example, you could index an RSS feed of posts on Stack Overflow that are
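The example above is cut off by the diff, but a minimal watch of the kind this page describes could look like the sketch below: search a placeholder `logs*` pattern every 10 minutes and log a message when errors turn up. The watch ID, interval, query, and action are all illustrative:

[source,console]
----
PUT _watcher/watch/log_error_watch
{
  "trigger": {
    "schedule": { "interval": "10m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs*" ],
        "body": {
          "query": { "match": { "message": "error" } }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "log_error": {
      "logging": {
        "text": "Found {{ctx.payload.hits.total}} error documents"
      }
    }
  }
}
----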