[7.x] [DOCS] Update CCR docs to focus on Kibana. (#61237)

* First crack at rewriting the CCR introduction.

* Emphasizing Kibana in configuring CCR (part one).

* Many more edits, plus new files.

* Fixing test case.

* Removing overview page and consolidating that information in the main page.

* Adding redirects for moved and deleted pages.

* Removing, consolidating, and adding redirects.

* Fixing duplicate ID in redirects and removing outdated reference.

* Adding test case and steps for recreating a follower index.

* Adding steps for managing CCR tasks in Kibana.

* Adding tasks for managing auto-follow patterns.

* Fixing glossary link.

* Fixing glossary link, again.

* Updating the upgrade information and other stuff.

* Apply suggestions from code review

* Incorporating review feedback.

* Adding more edits.

* Fixing link reference.

* Adding use cases for #59812.

* Incorporating feedback from reviewers.

* Apply suggestions from code review

* Incorporating more review comments.

* Condensing some of the steps for accessing Kibana.

* Incorporating small changes from reviewers.
This commit is contained in:
Adam Locke 2020-08-17 16:58:13 -04:00 committed by GitHub
parent 06d3159125
commit a0af82c213
24 changed files with 984 additions and 549 deletions


@ -1,32 +1,82 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-auto-follow]]
=== Manage auto-follow patterns
To replicate time series indices, you configure an auto-follow pattern so that
each new index in the series is replicated automatically. Whenever the name of
a new index on the remote cluster matches the auto-follow pattern, a
corresponding follower index is added to the local cluster.
Auto-follow patterns are especially useful with
<<index-lifecycle-management,{ilm-cap}>>, which might continually create
new indices on the cluster containing the leader index.
[[ccr-access-ccr-auto-follow]]
To start using {ccr} auto-follow patterns, access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication* and choose the *Auto-follow patterns* tab.
[[ccr-auto-follow-create]]
==== Create auto-follow patterns
When you <<ccr-getting-started-auto-follow,create an auto-follow pattern>>,
you are configuring a collection of patterns against a single remote cluster.
When an index is created in the remote cluster with a name that matches one of
the patterns in the collection, a follower index is configured in the local
cluster. The follower index uses the new index as its leader index.
[%collapsible]
.Use the API
====
Use the <<ccr-put-auto-follow-pattern,create auto-follow pattern API>> to add a
new auto-follow pattern configuration.
====
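As a sketch, a create auto-follow pattern request might look like the following (the pattern name `beats` and the index patterns are illustrative):

[source,console]
--------------------------------------------------
PUT /_ccr/auto_follow/beats
{
  "remote_cluster" : "leader",
  "leader_index_patterns" : ["metricbeat-*", "packetbeat-*"],
  "follow_index_pattern" : "{{leader_index}}-copy"
}
--------------------------------------------------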
Since auto-follow functionality is handled automatically in the background on
your behalf, error reporting is done through logs on the elected master node
and through the {ref}/ccr-get-stats.html[{ccr} stats API].
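For example, you can check auto-follow progress and surface errors with the stats API:

[source,console]
--------------------------------------------------
GET /_ccr/stats
--------------------------------------------------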
[[ccr-auto-follow-retrieve]]
==== Retrieve auto-follow patterns
To view existing auto-follow patterns and make changes to the backing
patterns, <<ccr-access-ccr-auto-follow,access {kib}>> on your _remote_ cluster.
Select the auto-follow pattern that you want to view details about. From there,
you can make changes to the auto-follow pattern. You can also view your
follower indices included in the auto-follow pattern.
[%collapsible]
.Use the API
====
Use the <<ccr-get-auto-follow-pattern,get auto-follow pattern API>> to inspect
all configured auto-follow pattern collections.
====
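For example, you can list every configured pattern collection, or retrieve one by name (`beats` is an illustrative pattern name):

[source,console]
--------------------------------------------------
GET /_ccr/auto_follow

GET /_ccr/auto_follow/beats
--------------------------------------------------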
[[ccr-auto-follow-pause]]
==== Pause and resume auto-follow patterns
To pause and resume replication of auto-follow pattern collections,
<<ccr-access-ccr-auto-follow,access {kib}>>, select the auto-follow pattern,
and pause replication.
To resume replication, select the pattern and choose
*Manage pattern > Resume replication*.
[%collapsible]
.Use the API
====
Use the <<ccr-pause-auto-follow-pattern,pause auto-follow pattern API>> to
pause auto-follow patterns.
Use the <<ccr-resume-auto-follow-pattern,resume auto-follow pattern API>> to
resume auto-follow patterns.
====
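For example, assuming a pattern named `beats`:

[source,console]
--------------------------------------------------
POST /_ccr/auto_follow/beats/pause

POST /_ccr/auto_follow/beats/resume
--------------------------------------------------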
[[ccr-auto-follow-delete]]
==== Delete auto-follow patterns
To delete an auto-follow pattern collection,
<<ccr-access-ccr-auto-follow,access {kib}>>, select the auto-follow pattern,
and pause replication.
When the pattern status changes to Paused, choose
*Manage pattern > Delete pattern*.
[%collapsible]
.Use the API
====
Use the <<ccr-delete-auto-follow-pattern,delete auto-follow pattern API>> to
delete a configured auto-follow pattern collection.
====
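For example, assuming a pattern named `beats`:

[source,console]
--------------------------------------------------
DELETE /_ccr/auto_follow/beats
--------------------------------------------------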


@ -1,47 +1,39 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-getting-started]]
=== Set up {ccr}
You can manually create follower indices to replicate specific indices on a
remote cluster, or configure auto-follow patterns to automatically create
follower indices for new time series.
After the follower index is created, the
<<ccr-remote-recovery, remote recovery>> process copies all of the Lucene
segment files from the remote cluster to the local cluster.
To set up {ccr}:
. <<ccr-getting-started-remote-cluster,Connect a local cluster to a remote cluster>>
. <<ccr-getting-started-leader-index,Identify the index (or time series indices) you want to replicate on the remote cluster>>
. <<ccr-enable-soft-deletes,Enable soft deletes on the leader index>>
. Manually create a follower index or create an auto-follow pattern:
* To replicate the leader index, <<ccr-getting-started-follower-index,manually create a follower index>>
* To automatically follow time series indices, <<ccr-getting-started-auto-follow,create an auto-follow pattern>>
[[ccr-getting-started-prerequisites]]
==== Prerequisites
. {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[Install {es}]
on your local and remote clusters.
. Obtain a license that includes the {ccr} features. See
https://www.elastic.co/subscriptions[subscriptions] and
{kibana-ref}/managing-licenses.html[License management].
. If the Elastic {security-features} are enabled in your local and remote
clusters, you need a user with appropriate authority to complete the steps
in this tutorial.
+
--
[[ccr-getting-started-security]]
The {ccr} features use cluster privileges and built-in roles to make it easier
to control which users have authority to manage {ccr}.
By default, you can perform all of the steps in this tutorial by
using the built-in `elastic` user. However, a password must be set for this user
before the user can do anything. For information about how to set that password,
see <<security-getting-started>>.
WARNING: If you are performing these steps in a production environment, do
not use the `elastic` user. This user has the `superuser` role, and you could
inadvertently make significant changes.
Alternatively, you can assign the appropriate privileges to a user ID of your
choice. On the remote cluster that contains the leader index, a user must have
the `read_ccr` cluster privilege and `monitor` and `read` privileges on the
leader index.
@ -76,19 +68,31 @@ ccr_user:
--------------------------------------------------
If you are managing
<<ccr-getting-started-remote-cluster,connecting to the remote cluster>> using
the cluster update settings API, you will also need a user with the `all`
cluster privilege.
--
[[ccr-getting-started-remote-cluster]]
==== Connect to a remote cluster
Connect your local cluster to a
<<modules-remote-clusters,remote cluster>> to begin using cross-cluster
replication. In this tutorial, the local cluster connects to a remote cluster
with the cluster alias `leader`.
To configure a {kibana-ref}/working-remote-clusters.html[remote cluster],
access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Remote Clusters*.
Add a remote cluster by specifying the IP address or host name, followed by the
transport port of the remote cluster.
[role="screenshot"]
image::images/ccr-add-remote-cluster.png["The Add remote clusters page in {kib}"]
[%collapsible]
.API example
====
Use the <<cluster-update-settings,cluster update settings API>> to add a remote cluster:
[source,console]
--------------------------------------------------
@ -147,20 +151,19 @@ remote cluster.
alias `leader`
<2> This shows the number of nodes in the remote cluster the local cluster is
connected to.
====
[[ccr-getting-started-leader-index]]
==== Create a leader index
To create a leader index, access {kib} on your _remote_ cluster and go to
*Management > Dev Tools*.
Copy the following example into the Console to create a leader index named
`server-metrics` in your remote cluster:
[%collapsible]
.Leader index example
====
[source,console]
--------------------------------------------------
PUT /server-metrics
@ -199,20 +202,68 @@ PUT /server-metrics
}
--------------------------------------------------
// TEST[continued]
====
[[ccr-enable-soft-deletes]]
==== Enable soft deletes on leader indices
<<ccr-leader-requirements,Soft deletes>> must be enabled for indices that you want to
use as leader indices. Soft deletes are enabled by default on new indices
created on or after {es} 7.0.0, so
*no further action is required if your cluster is running {es} 7.0.0 or later*.
include::{es-ref-dir}/ccr/index.asciidoc[tag=ccr-existing-indices-tag]
To enable soft deletes on indices created on versions of
{es} between 6.5.0 and 7.0.0, set <<ccr-index-soft-deletes,`index.soft_deletes.enabled`>> to `true`.
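Because `index.soft_deletes.enabled` is a static setting, it must be set when
the index is created. As a sketch (the index name is illustrative):

[source,console]
--------------------------------------------------
PUT /my-leader-index
{
  "settings": {
    "index.soft_deletes.enabled": true
  }
}
--------------------------------------------------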
[[ccr-getting-started-follower-index]]
==== Create a follower index
When you create a {kibana-ref}/managing-cross-cluster-replication.html#_create_specific_follower_indices[follower index], you
must reference the
<<ccr-getting-started-remote-cluster,remote cluster>> and the
<<ccr-getting-started-leader-index,leader index>> that you created in the remote
cluster.
To create a follower index, access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication* and choose the *Follower Indices* tab.
. Choose the remote cluster containing the index you want to replicate, which
is `leader` if you are following the tutorial.
. Enter the name of the leader index, which is `server-metrics` if you are
following the tutorial.
[role="screenshot"]
image::images/ccr-add-follower-index.png["Adding a follower index named server-metrics in {kib}"]
The follower index is initialized using the
<<ccr-remote-recovery, remote recovery>>
process, which transfers the existing Lucene segment files from the leader
index to the follower index. The index status changes to *Paused*. When the
remote recovery process is complete, the index following begins and the status
changes to *Active*.
When you index documents into your leader index, the documents are replicated
in the follower index.
[role="screenshot"]
image::images/ccr-follower-index.png["The Cross-Cluster Replication page in {kib}"]
[%collapsible]
.API example
====
Use the <<ccr-put-follow,create follower API>> to create follower indices.
When you create a follower index, you must reference the
<<ccr-getting-started-remote-cluster,remote cluster>> and the
<<ccr-getting-started-leader-index,leader index>> that you created in the
remote cluster.
When initiating the follower request, the response returns before the
<<ccr-remote-recovery, remote recovery>> process completes. To wait for the process
to complete, add the `wait_for_active_shards` parameter to your request.
[source,console]
--------------------------------------------------
PUT /server-metrics-follower/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "leader",
"leader_index" : "server-metrics"
@ -233,44 +284,65 @@ PUT /server-metrics-copy/_ccr/follow?wait_for_active_shards=1
//////////////////////////
Use the
<<ccr-get-follow-stats,get follower stats API>> to inspect the status of
replication.
//////////////////////////
[source,console]
--------------------------------------------------
POST /server-metrics-follower/_ccr/pause_follow

POST /server-metrics-follower/_close

POST /server-metrics-follower/_ccr/unfollow
--------------------------------------------------
// TEST[continued]
//////////////////////////
====
[[ccr-getting-started-auto-follow]]
==== Automatically create follower indices
Create <<ccr-auto-follow,auto-follow patterns>> to automatically follow time
series indices that are periodically created in a remote cluster (such as daily
{beats} indices).
With an auto-follow pattern, you reference the
<<ccr-getting-started-remote-cluster,remote cluster>> connected to your
local cluster. You must also specify a collection of patterns that match the
indices you want to automatically follow.
// tag::ccr-create-auto-follow-pattern-tag[]
To create follower indices from an {kibana-ref}/managing-cross-cluster-replication.html#_create_follower_indices_from_an_auto_follow_pattern[auto-follow pattern],
access {kib} on your remote cluster and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication* and choose the *Auto-follow patterns* tab.
[role="screenshot"]
image::images/auto-follow-patterns.png["The Auto-follow patterns page in {kib}"]
* Enter a name for the auto-follow pattern. For this tutorial, enter `beats`
as the name.
* Choose the remote cluster containing the index you want to replicate, which
is `leader` if you are following the tutorial.
* Enter one or more index patterns that identify the indices you want to
replicate from the remote cluster. For this tutorial, enter
`metricbeat-*,packetbeat-*` as the index pattern.
* Enter *copy-* as the prefix to apply to the names of the follower indices so
you can more easily identify replicated indices.
As new indices matching these patterns are
created, they are replicated to the follower indices.
// end::ccr-create-auto-follow-pattern-tag[]
[%collapsible]
.API example
====
Use the <<ccr-put-auto-follow-pattern,create auto-follow pattern API>> to
configure auto-follow patterns.
[source,console]
--------------------------------------------------
@ -311,9 +383,4 @@ DELETE /_ccr/auto_follow/beats
// TEST[continued]
//////////////////////////
====



@ -2,28 +2,301 @@
[testenv="platinum"]
[[xpack-ccr]]
== {ccr-cap}
With {ccr}, you can replicate indices across clusters to:

* Continue handling search requests in the event of a datacenter outage
* Prevent search volume from impacting indexing throughput
* Reduce search latency by processing search requests in geo-proximity to the
user
{ccr-cap} uses an active-passive model. You index to a _leader_ index, and the
data is replicated to one or more read-only _follower_ indices. Before you can
add a follower index to a cluster, you must configure the _remote cluster_ that
contains the leader index.
When the leader index receives writes, the follower indices pull changes from
the leader index on the remote cluster. You can manually create follower
indices, or configure auto-follow patterns to automatically create follower
indices for new time series indices.
You configure {ccr} clusters in a uni-directional or bi-directional setup:
* In a uni-directional configuration, one cluster contains only
leader indices, and the other cluster contains only follower indices.
* In a bi-directional configuration, each cluster contains both leader and
follower indices.
In a uni-directional configuration, the cluster containing follower indices
must be running **the same or newer** version of {es} as the remote cluster.
If newer, the versions must also be compatible as outlined in the following matrix.
[%collapsible]
[[ccr-version-compatibility]]
.Version compatibility matrix
====
include::../modules/remote-clusters.asciidoc[tag=remote-cluster-compatibility-matrix]
====
[discrete]
[[ccr-multi-cluster-architectures]]
=== Multi-cluster architectures
Use {ccr} to construct several multi-cluster architectures within the Elastic
Stack:
* <<ccr-disaster-recovery,Disaster recovery>> in case a primary cluster fails,
with a secondary cluster serving as a hot backup
* <<ccr-data-locality,Data locality>> to maintain multiple copies of the
dataset close to the application servers (and users), and reduce costly latency
* <<ccr-centralized-reporting,Centralized reporting>> for minimizing network
traffic and latency in querying multiple geo-distributed {es} clusters, or for
preventing search load from interfering with indexing by offloading search to a
secondary cluster
Watch the
https://www.elastic.co/webinars/replicate-elasticsearch-data-with-cross-cluster-replication-ccr[{ccr} webinar] to learn more about the following use cases.
Then, <<ccr-getting-started,set up {ccr}>> on your local machine and work
through the demo from the webinar.
[discrete]
[[ccr-disaster-recovery]]
==== Disaster recovery and high availability
Disaster recovery provides your mission-critical applications with the
tolerance to withstand datacenter or region outages. This use case is the
most common deployment of {ccr}. You can configure clusters in different
architectures to support disaster recovery and high availability:
* <<ccr-single-datacenter-recovery>>
* <<ccr-multiple-datacenter-recovery>>
* <<ccr-chained-replication>>
* <<ccr-bi-directional-replication>>
[discrete]
[[ccr-single-datacenter-recovery]]
===== Single disaster recovery datacenter
In this configuration, data is replicated from the production datacenter to the
disaster recovery datacenter. Because the follower indices replicate the leader
index, your application can use the disaster recovery datacenter if the
production datacenter is unavailable.
image::images/ccr-arch-disaster-recovery.png[Production datacenter that replicates data to a disaster recovery datacenter]
[discrete]
[[ccr-multiple-datacenter-recovery]]
===== Multiple disaster recovery datacenters
You can replicate data from one datacenter to multiple datacenters. This
configuration provides both disaster recovery and high availability, ensuring
that data is replicated in two datacenters if the primary datacenter is down
or unavailable.
In the following diagram, data from Datacenter A is replicated to
Datacenter B and Datacenter C, which both have a read-only copy of the leader
index from Datacenter A.
image::images/ccr-arch-multiple-dcs.png[Production datacenter that replicates data to two other datacenters]
[discrete]
[[ccr-chained-replication]]
===== Chained replication
You can replicate data across multiple datacenters to form a replication
chain. In the following diagram, Datacenter A contains the leader index.
Datacenter B replicates data from Datacenter A, and Datacenter C replicates
from the follower indices in Datacenter B. The connection between these
datacenters forms a chained replication pattern.
image::images/ccr-arch-chain-dcs.png[Three datacenters connected to form a replication chain]
[discrete]
[[ccr-bi-directional-replication]]
===== Bi-directional replication
In a https://www.elastic.co/blog/bi-directional-replication-with-elasticsearch-cross-cluster-replication-ccr[bi-directional replication] setup, all clusters have access to view
all data, and all clusters have an index to write to without manually
implementing failover. Applications can write to the local index within each
datacenter, and read across multiple indices for a global view of all
information.
This configuration requires no manual intervention when a cluster or datacenter
is unavailable. In the following diagram, if Datacenter A is unavailable, you can continue using Datacenter B without manual failover. When Datacenter A
comes online, replication resumes between the clusters.
image::images/ccr-arch-bi-directional.png[Bi-directional configuration where each cluster contains both a leader index and follower indices]
NOTE: This configuration is useful for index-only workloads, where no updates
to document values occur. In this configuration, documents indexed by {es} are
immutable. Clients are located in each datacenter alongside the {es}
cluster, and do not communicate with clusters in different datacenters.
[discrete]
[[ccr-data-locality]]
==== Data locality
Bringing data closer to your users or application server can reduce latency
and response time. This methodology also applies when replicating data in {es}.
For example, you can replicate a product catalog or reference dataset to 20 or
more datacenters around the world to minimize the distance between the data and
the application server.
In the following diagram, data is replicated from one datacenter to three
additional datacenters, each in their own region. The central datacenter
contains the leader index, and the additional datacenters contain follower
indices that replicate data in that particular region. This configuration
puts data closer to the application accessing it.
image::images/ccr-arch-data-locality.png[A centralized datacenter replicated across three other datacenters, each in their own region]
[discrete]
[[ccr-centralized-reporting]]
==== Centralized reporting
Using a centralized reporting cluster is useful when querying across a large
network is inefficient. In this configuration, you replicate data from many
smaller clusters to the centralized reporting cluster.
For example, a large global bank might have 100 {es} clusters around the world
that are distributed across different regions for each bank branch. Using
{ccr}, the bank can replicate events from all 100 banks to a central cluster to
analyze and aggregate events locally for reporting. Rather than maintaining a
mirrored cluster, the bank can use {ccr} to replicate specific indices.
In the following diagram, data from three datacenters in different regions is
replicated to a centralized reporting cluster. This configuration enables you
to copy data from regional hubs to a central cluster, where you can run all
reports locally.
image::images/ccr-arch-central-reporting.png[Three clusters in different regions sending data to a centralized reporting cluster for analysis]
[discrete]
[[ccr-replication-mechanics]]
=== Replication mechanics
Although you <<ccr-getting-started,set up {ccr}>> at the index level, {es}
achieves replication at the shard level. When a follower index is created,
each shard in that index pulls changes from its corresponding shard in the
leader index, which means that a follower index has the same number of
shards as its leader index. All operations on the leader are replicated by the
follower, such as operations to create, update, or delete a document.
These requests can be served from any copy of the leader shard (primary or
replica).
When a follower shard sends a read request, the leader shard responds with
any new operations, limited by the read parameters that you establish when
configuring the follower index. If no new operations are available, the
leader shard waits up to the configured timeout for new operations. If the
timeout elapses, the leader shard responds to the follower shard that there
are no new operations. The follower shard updates shard statistics and
immediately sends another read request to the leader shard. This
communication model ensures that network connections between the remote
cluster and the local cluster are continually in use, avoiding forceful
termination by an external source such as a firewall.
If a read request fails, the cause of the failure is inspected. If the
cause of the failure is deemed to be recoverable (such as a network
failure), the follower shard enters into a retry loop. Otherwise, the
follower shard pauses
<<ccr-pause-replication,until you resume it>>.
When a follower shard receives operations from the leader shard, it places
those operations in a write buffer. The follower shard submits bulk write
requests using operations from the write buffer. If the write buffer exceeds
its configured limits, no additional read requests are sent. This configuration
provides a back-pressure against read requests, allowing the follower shard
to resume sending read requests when the write buffer is no longer full.
To manage how operations are replicated from the leader index, you can
configure settings when
<<ccr-getting-started-follower-index,creating the follower index>>.
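For example, a create follower request can override some of these optional read
parameters (the index and cluster names are illustrative; the values shown are
the documented defaults):

[source,console]
--------------------------------------------------
PUT /server-metrics-follower/_ccr/follow
{
  "remote_cluster" : "leader",
  "leader_index" : "server-metrics",
  "max_read_request_operation_count" : 5120,
  "read_poll_timeout" : "1m"
}
--------------------------------------------------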
The follower index automatically retrieves some updates applied to the leader
index, while other updates are retrieved as needed:
[cols="3"]
|===
h| Update type h| Automatic h| As needed
| Alias | {yes-icon} | {no-icon}
| Mapping | {no-icon} | {yes-icon}
| Settings | {no-icon} | {yes-icon}
|===
For example, changing the number of replicas on the leader index is not
automatically replicated to the follower index.
NOTE: You cannot manually modify a follower index's mappings or aliases.
If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index closes itself, applies the
settings update, and then re-opens itself. The follower index is unavailable
for reads and cannot replicate writes during this cycle.
[discrete]
[[ccr-remote-recovery]]
=== Initializing followers using remote recovery
When you create a follower index, you cannot use it until it is fully
initialized. The _remote recovery_ process builds a new copy of a shard on a
follower node by copying data from the primary shard in the leader cluster.
{es} uses this remote recovery process to bootstrap a follower index using the
data from the leader index. This process provides the follower with a copy of
the current state of the leader index, even if a complete history of changes
is not available on the leader due to Lucene segment merging.
Remote recovery is a network intensive process that transfers all of the Lucene
segment files from the leader cluster to the follower cluster. The follower
requests that a recovery session be initiated on the primary shard in the
leader cluster. The follower then requests file chunks concurrently from the
leader. By default, the process concurrently requests five 1MB file
chunks. This default behavior is designed to support leader and follower
clusters with high network latency between them.
TIP: You can modify dynamic <<ccr-recovery-settings,remote recovery settings>>
to rate-limit the transmitted data and manage the resources consumed by remote
recoveries.
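For example, a sketch of capping outbound recovery traffic with the dynamic
`ccr.indices.recovery.max_bytes_per_sec` setting (the `20mb` value is only
illustrative):
[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent" : {
    "ccr.indices.recovery.max_bytes_per_sec" : "20mb"
  }
}
--------------------------------------------------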
Use the <<cat-recovery,recovery API>> on the cluster containing the follower
index to obtain information about an in-progress remote recovery. Because {es}
implements remote recoveries using the
<<snapshot-restore,snapshot and restore>> infrastructure, running remote
recoveries are labelled as type `snapshot` in the recovery API.
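For example, assuming a follower index named `follower_index`:
[source,console]
--------------------------------------------------
GET _cat/recovery/follower_index?v=true&h=index,type,stage
--------------------------------------------------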
[discrete]
[[ccr-leader-requirements]]
=== Replicating a leader requires soft deletes
{ccr-cap} works by replaying the history of individual write
operations that were performed on the shards of the leader index. {es} needs to
retain the
<<index-modules-history-retention,history of these operations>> on the leader
shards so that they can be pulled by the follower shard tasks. The underlying
mechanism used to retain these operations is _soft deletes_.
A soft delete occurs whenever an existing document is deleted or updated. By
retaining these soft deletes up to configurable limits, the history of
operations can be retained on the leader shards and made available to the
follower shard tasks as they replay the history of operations.
The <<ccr-index-soft-deletes-retention-period,`index.soft_deletes.retention_lease.period`>> setting defines the
maximum time to retain a shard history retention lease before it is
considered expired. This setting determines how long the cluster containing
your leader index can be offline, which is 12 hours by default. If a shard copy
recovers after its retention lease expires, then {es} will fall back to copying
the entire index, because it can no longer replay the missing history.
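As a sketch, you can lengthen the lease when creating an index on the leader
cluster (the index name and `24h` value are illustrative):
[source,console]
--------------------------------------------------
PUT /my_leader_index
{
  "settings": {
    "index.soft_deletes.retention_lease.period": "24h"
  }
}
--------------------------------------------------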
Soft deletes must be enabled for indices that you want to use as leader
indices. Soft deletes are enabled by default on new indices created on
or after {es} 7.0.0.
// tag::ccr-existing-indices-tag[]
IMPORTANT: {ccr-cap} cannot be used on existing indices created using {es}
7.0.0 or earlier, where soft deletes are disabled. You must
<<docs-reindex,reindex>> your data into a new index with soft deletes
enabled.
// end::ccr-existing-indices-tag[]
[discrete]
[[ccr-learn-more]]
=== Use {ccr}
The following sections provide more information about how to configure
and use {ccr}:
* <<ccr-getting-started>>
* <<ccr-managing>>
* <<ccr-auto-follow>>
* <<ccr-upgrading>>
include::getting-started.asciidoc[]
include::managing.asciidoc[]
include::auto-follow.asciidoc[]
include::upgrading.asciidoc[]


@ -0,0 +1,164 @@
[role="xpack"]
[testenv="platinum"]
//////////////////////////
[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "remote_cluster",
"leader_index" : "leader_index"
}
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]
[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// TEARDOWN
//////////////////////////
[[ccr-managing]]
=== Manage {ccr}
Use the following information to manage {ccr} tasks, such as inspecting
replication progress, pausing and resuming replication, recreating a follower
index, and terminating replication.
[[ccr-access-ccr]]
To start using {ccr}, access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication*.
[[ccr-inspect-progress]]
==== Inspect replication statistics
To inspect the progress of replication for a follower index and view
detailed shard statistics, <<ccr-access-ccr,access Cross-Cluster Replication>> and choose the *Follower indices* tab.
Select the name of the follower index you want to view replication details
for. The slide-out panel shows settings and replication statistics for the
follower index, including read and write operations that are managed by the
follower shard.
To view more detailed statistics, click *View in Index Management*, and
then select the name of the follower index in Index Management.
Open the tabs for detailed statistics about the follower index.
[%collapsible]
.API example
====
Use the <<ccr-get-follow-stats,get follower stats API>> to inspect replication
progress at the shard level. This API provides insight into the read and writes
managed by the follower shard. The API also reports read exceptions that can be
retried and fatal exceptions that require user intervention.
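For example, assuming a follower index named `follower_index`:
[source,console]
--------------------------------------------------
GET /follower_index/_ccr/stats
--------------------------------------------------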
====
[[ccr-pause-replication]]
==== Pause and resume replication
To pause and resume replication of the leader index, <<ccr-access-ccr,access Cross-Cluster Replication>> and choose the *Follower indices* tab.
Select the follower index you want to pause and choose *Manage > Pause Replication*. The follower index status changes to Paused.
To resume replication, select the follower index and choose
*Resume replication*.
[%collapsible]
.API example
====
You can pause replication with the
<<ccr-post-pause-follow,pause follower API>> and then later resume
replication with the <<ccr-post-resume-follow,resume follower API>>.
Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.
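For example, assuming a follower index named `follower_index`:
[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
[source,console]
--------------------------------------------------
POST /follower_index/_ccr/resume_follow
--------------------------------------------------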
====
[[ccr-recreate-follower-index]]
==== Recreate a follower index
When a document is updated or deleted, the underlying operation is retained in
the Lucene index for a period of time defined by the
<<ccr-index-soft-deletes-retention-period,`index.soft_deletes.retention_lease.period`>> parameter. You configure
this setting on the <<ccr-leader-requirements,leader index>>.
When a follower index starts, it acquires a retention lease from
the leader index. This lease informs the leader that it should not allow a soft
delete to be pruned until either the follower indicates that it has received
the operation, or until the lease expires.
If a follower index falls sufficiently behind a leader and cannot
replicate operations, {es} reports an `indices[].fatal_exception` error. To
resolve the issue, recreate the follower index. When the new follower index
starts, the <<ccr-remote-recovery, remote recovery>> process recopies the
Lucene segment files from the leader.
IMPORTANT: Recreating the follower index is a destructive action. All existing
Lucene segment files are deleted on the cluster containing the follower index.
To recreate a follower index,
<<ccr-access-ccr,access Cross-Cluster Replication>> and choose the
*Follower indices* tab.
[role="screenshot"]
image::images/ccr-follower-index.png["The Cross-Cluster Replication page in {kib}"]
Select the follower index and pause replication. When the follower index status
changes to Paused, reselect the follower index and choose to unfollow the
leader index.
The follower index will be converted to a standard index and will no longer
display on the Cross-Cluster Replication page.
In the side navigation, choose *Index Management*. Select the follower index
from the previous steps and close the follower index.
You can then <<ccr-getting-started-follower-index,recreate the follower index>>
to restart the replication process.
[%collapsible]
.Use the API
====
Use the <<ccr-post-pause-follow,pause follow API>> to pause the replication
process. Then, close the follower index and recreate it. For example:
[source,console]
----------------------------------------------------------------------
POST /follower_index/_ccr/pause_follow
POST /follower_index/_close
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "remote_cluster",
"leader_index" : "leader_index"
}
----------------------------------------------------------------------
====
[[ccr-terminate-replication]]
==== Terminate replication
You can unfollow a leader index to terminate replication and convert the
follower index to a standard index.
<<ccr-access-ccr,Access Cross-Cluster Replication>> and choose the
*Follower indices* tab.
Select the follower index and pause replication. When the follower index status
changes to Paused, reselect the follower index and choose to unfollow the
leader index.
The follower index will be converted to a standard index and will no longer
display on the Cross-Cluster Replication page.
You can then choose *Index Management*, select the follower index
from the previous steps, and close the follower index.
[%collapsible]
.Use the API
====
You can terminate replication with the
<<ccr-post-unfollow,unfollow API>>. This API converts a follower index
to a standard (non-follower) index.
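As a sketch of the full sequence (the follower index must be paused and closed
before it can be unfollowed):
[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

POST /follower_index/_ccr/unfollow
--------------------------------------------------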
====


@ -1,228 +0,0 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-overview]]
=== Overview
{ccr-cap} is done on an index-by-index basis. Replication is
configured at the index level. For each configured replication there is a
replication source index called the _leader index_ and a replication target
index called the _follower index_.
Replication is active-passive. This means that while the leader index
can directly be written into, the follower index can not directly receive
writes.
Replication is pull-based. This means that replication is driven by the
follower index. This simplifies state management on the leader index and means
that {ccr} does not interfere with indexing on the leader index.
In {ccr}, the cluster performing this pull is known as the _local cluster_. The
cluster being replicated is known as the _remote cluster_.
==== Prerequisites
* {ccr-cap} requires <<modules-remote-clusters, remote clusters>>.
* The {es} version of the local cluster must be **the same as or newer** than
the remote cluster. If newer, the versions must also be compatible as outlined
in the following matrix.
include::../modules/remote-clusters.asciidoc[tag=remote-cluster-compatibility-matrix]
==== Configuring replication
Replication can be configured in two ways:
* Manually creating specific follower indices (in {kib} or by using the
{ref}/ccr-put-follow.html[create follower API])
* Automatically creating follower indices from auto-follow patterns (in {kib} or
by using the {ref}/ccr-put-auto-follow-pattern.html[create auto-follow pattern API])
For more information about managing {ccr} in {kib}, see
{kibana-ref}/working-remote-clusters.html[Working with remote clusters].
NOTE: You must also <<ccr-requirements,configure the leader index>>.
When you initiate replication either manually or through an auto-follow pattern, the
follower index is created on the local cluster. Once the follower index is created,
the <<remote-recovery, remote recovery>> process copies all of the Lucene segment
files from the remote cluster to the local cluster.
By default, if you initiate following manually (by using {kib} or the create follower API),
the recovery process is asynchronous in relationship to the
{ref}/ccr-put-follow.html[create follower request]. The request returns before
the <<remote-recovery, remote recovery>> process completes. If you want to wait
for the process to complete, you can use the `wait_for_active_shards` parameter.
//////////////////////////
[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "remote_cluster",
"leader_index" : "leader_index"
}
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]
[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// TEARDOWN
//////////////////////////
==== The mechanics of replication
While replication is managed at the index level, replication is performed at the
shard level. When a follower index is created, it is automatically
configured to have an identical number of shards as the leader index. A follower
shard task in the follower index pulls from the corresponding leader shard in
the leader index by sending read requests for new operations. These read
requests can be served from any copy of the leader shard (primary or replicas).
For each read request sent by the follower shard task, if there are new
operations available on the leader shard, the leader shard responds with
operations limited by the read parameters that you established when you
configured the follower index. If there are no new operations available on the
leader shard, the leader shard waits up to a configured timeout for new
operations. If new operations occur within that timeout, the leader shard
immediately responds with those new operations. Otherwise, if the timeout
elapses, the leader shard replies that there are no new operations. The
follower shard task updates some statistics and immediately sends another read
request to the leader shard. This ensures that the network connections between
the remote cluster and the local cluster are continually being used so as to
avoid forceful termination by an external source (such as a firewall).
If a read request fails, the cause of the failure is inspected. If the
cause of the failure is deemed to be a failure that can be recovered from (for
example, a network failure), the follower shard task enters into a retry
loop. Otherwise, the follower shard task is paused and requires user
intervention before it can be resumed with the
{ref}/ccr-post-resume-follow.html[resume follower API].
When operations are received by the follower shard task, they are placed in a
write buffer. The follower shard task manages this write buffer and submits
bulk write requests from this write buffer to the follower shard. The write
buffer and these write requests are managed by the write parameters that you
established when you configured the follower index. The write buffer serves as
back-pressure against read requests. If the write buffer exceeds its configured
limits, no additional read requests are sent by the follower shard task. The
follower shard task resumes sending read requests when the write buffer no
longer exceeds its configured limits.
NOTE: The intricacies of how operations are replicated from the leader are
governed by settings that you can configure when you create the follower index
in {kib} or by using the {ref}/ccr-put-follow.html[create follower API].
Mapping updates applied to the leader index are automatically retrieved
as-needed by the follower index. It is not possible to manually modify the
mapping of a follower index.
Settings updates applied to the leader index that are needed by the follower
index are automatically retrieved as-needed by the follower index. Not all
settings updates are needed by the follower index. For example, changing the
number of replicas on the leader index is not replicated by the follower index.
Alias updates applied to the leader index are automatically retrieved by the
follower index. It is not possible to manually modify an alias of a follower
index.
NOTE: If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index will go through a cycle of
closing itself, applying the settings update, and then re-opening itself. The
follower index will be unavailable for reads and not replicating writes
during this cycle.
==== Inspecting the progress of replication
You can inspect the progress of replication at the shard level with the
{ref}/ccr-get-follow-stats.html[get follower stats API]. This API gives you
insight into the read and writes managed by the follower shard task. It also
reports read exceptions that can be retried and fatal exceptions that require
user intervention.
==== Pausing and resuming replication
You can pause replication with the
{ref}/ccr-post-pause-follow.html[pause follower API] and then later resume
replication with the {ref}/ccr-post-resume-follow.html[resume follower API].
Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.
==== Leader index retaining operations for replication
If the follower is unable to replicate operations from a leader for a period of
time, the following process can fail due to the leader lacking a complete history
of operations necessary for replication.
Operations replicated to the follower are identified using a sequence number
generated when the operation was initially performed. Lucene segment files are
occasionally merged in order to optimize searches and save space. When these
merges occur, it is possible for operations associated with deleted or updated
documents to be pruned during the merge. When the follower requests the sequence
number for a pruned operation, the process will fail due to the operation missing
on the leader.
This scenario is not possible in an append-only workflow. As documents are never
deleted or updated, the underlying operation will not be pruned.
Elasticsearch attempts to mitigate this potential issue for update workflows using
a Lucene feature called soft deletes. When a document is updated or deleted, the
underlying operation is retained in the Lucene index for a period of time. This
period of time is governed by the `index.soft_deletes.retention_lease.period`
setting which can be <<ccr-requirements,configured on the leader index>>.
When a follower initiates the index following, it acquires a retention lease from
the leader. This informs the leader that it should not allow a soft delete to be
pruned until either the follower indicates that it has received the operation or
the lease expires. It is valuable to have monitoring in place to detect a follower
replication issue prior to the lease expiring so that the problem can be remedied
before the follower falls fatally behind.
==== Remedying a follower that has fallen behind
If a follower falls sufficiently behind a leader that it can no longer replicate
operations, this can be detected in {kib} or by using the
{ref}/ccr-get-follow-stats.html[get follow stats API]. It will be reported as an
`indices[].fatal_exception`.
In order to restart the follower, you must pause the following process, close the
index, and create the follower index again. For example:
[source,console]
----------------------------------------------------------------------
POST /follower_index/_ccr/pause_follow
POST /follower_index/_close
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "remote_cluster",
"leader_index" : "leader_index"
}
----------------------------------------------------------------------
Re-creating the follower index is a destructive action. All of the existing Lucene
segment files are deleted on the follower cluster. The
<<remote-recovery, remote recovery>> process copies the Lucene segment
files from the leader again. After the follower index initializes, the
following process starts again.
==== Terminating replication
You can terminate replication with the
{ref}/ccr-post-unfollow.html[unfollow API]. This API converts a follower index
to a regular (non-follower) index.


@ -1,29 +0,0 @@
[role="xpack"]
[testenv="platinum"]
[[remote-recovery]]
=== Remote recovery
When you create a follower index, you cannot use it until it is fully initialized.
The _remote recovery_ process builds a new copy of a shard on a follower node by
copying data from the primary shard in the leader cluster. {es} uses this remote
recovery process to bootstrap a follower index using the data from the leader index.
This process provides the follower with a copy of the current state of the leader index,
even if a complete history of changes is not available on the leader due to Lucene
segment merging.
Remote recovery is a network intensive process that transfers all of the Lucene
segment files from the leader cluster to the follower cluster. The follower
requests that a recovery session be initiated on the primary shard in the leader
cluster. The follower then requests file chunks concurrently from the leader. By
default, the process concurrently requests `5` large `1mb` file chunks. This default
behavior is designed to support leader and follower clusters with high network latency
between them.
There are dynamic settings that you can use to rate-limit the transmitted data
and manage the resources consumed by remote recoveries. See
{ref}/ccr-settings.html[{ccr-cap} settings].
You can obtain information about an in-progress remote recovery by using the
{ref}/cat-recovery.html[recovery API] on the follower cluster. Remote recoveries
are implemented using the {ref}/modules-snapshots.html[snapshot and restore] infrastructure. This means that on-going remote recoveries are labelled as type
`snapshot` in the recovery API.


@ -1,43 +0,0 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-requirements]]
=== Requirements for leader indices
{ccr-cap} works by replaying the history of individual write
operations that were performed on the shards of the leader index. This means that the
history of these operations needs to be retained on the leader shards so that
they can be pulled by the follower shard tasks. The underlying mechanism used to
retain these operations is _soft deletes_. A soft delete occurs whenever an
existing document is deleted or updated. By retaining these soft deletes up to
configurable limits, the history of operations can be retained on the leader
shards and made available to the follower shard tasks as it replays the history
of operations.
Soft deletes must be enabled for indices that you want to use as leader
indices. Soft deletes are enabled by default on new indices created on
or after {es} 7.0.0.
IMPORTANT: This means that {ccr} cannot be used on existing indices. If you have
existing data that you want to replicate from another cluster, you must
{ref}/docs-reindex.html[reindex] your data into a new index with soft deletes
enabled.
[[ccr-overview-soft-deletes]]
==== Soft delete settings
`index.soft_deletes.enabled`::
Whether or not soft deletes are enabled on the index. Soft deletes can only be
configured at index creation and only on indices created on or after 6.5.0. The
default value is `true`.
`index.soft_deletes.retention_lease.period`::
The maximum period to retain a shard history retention lease before it is considered
expired. Shard history retention leases ensure that soft deletes are retained during
merges on the Lucene index. If a soft delete is merged away before it can be replicated
to a follower, the following process will fail due to incomplete history on the leader.
The default value is `12h`.
For more information about index settings, see {ref}/index-modules.html[Index modules].


@ -1,49 +1,67 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-upgrading]]
=== Upgrading clusters using {ccr}
++++
<titleabbrev>Upgrading clusters</titleabbrev>
++++
Clusters that are actively using {ccr} require a careful approach to upgrades.
The following conditions could cause index following to fail during rolling
upgrades:
* Clusters that have not yet been upgraded will reject new index settings or
mapping types that are replicated from an upgraded cluster.
* Nodes in a cluster that has not been upgraded will reject index files from a
node in an upgraded cluster when index following tries to fall back to
file-based recovery. This limitation is due to Lucene not being forward
compatible.
The approach to running a rolling upgrade on clusters where {ccr} is
enabled differs based on uni-directional and bi-directional index following.
[[ccr-uni-directional-upgrade]]
==== Uni-directional index following
In a uni-directional configuration, one cluster contains only
leader indices, and the other cluster contains only follower indices that
replicate the leader indices.
In this strategy, the cluster with follower indices should be upgraded
first and the cluster with leader indices should be upgraded last.
Upgrading the clusters in this order ensures that index following can continue
during the upgrade without downtime.
You can also use this strategy to upgrade a
<<ccr-chained-replication,replication chain>>. Start by upgrading clusters at
the end of the chain and work your way back to the cluster that contains the
leader indices.
For example, consider a configuration where Cluster A contains all leader
indices. Cluster B follows indices in Cluster A, and Cluster C follows indices
in Cluster B.
--
Cluster A
^--Cluster B
^--Cluster C
--
In this configuration, upgrade the clusters in the following order:
. Cluster C
. Cluster B
. Cluster A
[[ccr-bi-directional-upgrade]]
==== Bi-directional index following
In a bi-directional configuration, each cluster contains both leader and
follower indices.
When upgrading clusters in this configuration,
<<ccr-pause-replication,pause all index following>> and
<<ccr-auto-follow-pause,pause auto-follow patterns>> prior to
upgrading both clusters.
After upgrading both clusters, resume index following and resume replication
of auto-follow patterns.
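For example, a sketch using the {ccr} APIs directly (the pattern name
`my_auto_follow_pattern` is illustrative):
[source,console]
--------------------------------------------------
POST /_ccr/auto_follow/my_auto_follow_pattern/pause
--------------------------------------------------
After both clusters are upgraded:
[source,console]
--------------------------------------------------
POST /_ccr/auto_follow/my_auto_follow_pattern/resume
--------------------------------------------------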


@ -19,13 +19,37 @@ It is this process of analysis (both at index time and at search time)
that allows Elasticsearch to perform full text queries.
+
Also see <<glossary-text,text>> and <<glossary-term,term>>.
// end::analysis-def[]
--
[[glossary-api-key]] API key ::
// tag::api-key-def[]
A unique identifier that you can use for authentication when submitting {es} requests.
When TLS is enabled, all requests must be authenticated using either basic authentication
(user name and password) or an API key.
// end::api-key-def[]
[[glossary-auto-follow-pattern]] auto-follow pattern ::
// tag::auto-follow-pattern-def[]
An <<glossary-index-pattern,index pattern>> that automatically configures new indices as
<<glossary-follower-index,follower indices>> for <<glossary-ccr,{ccr}>>.
For more information, see {ref}/ccr-auto-follow.html[Managing auto follow patterns].
// end::auto-follow-pattern-def[]
[[glossary-cluster]] cluster ::
// tag::cluster-def[]
One or more <<glossary-node,nodes>> that share the
same cluster name. Each cluster has a single master node, which is
chosen automatically by the cluster and can be replaced if it fails.
// end::cluster-def[]
[[glossary-cold-phase]] cold phase ::
// tag::cold-phase-def[]
The third possible phase in the <<glossary-index-lifecycle,index lifecycle>>.
In the cold phase, an index is no longer updated and seldom queried.
The information still needs to be searchable, but it's okay if those queries are slower.
// end::cold-phase-def[]
[[glossary-component-template]] component template ::
// tag::component-template-def[]
@ -34,10 +58,11 @@ A building block for constructing <<indices-templates,index templates>> that spe
// end::component-template-def[]
[[glossary-ccr]] {ccr} (CCR)::
// tag::ccr-def[]
A feature that enables you to replicate indices in remote clusters to your
local cluster. For more information, see
{ref}/xpack-ccr.html[{ccr-cap}].
// end::ccr-def[]
[[glossary-ccs]] {ccs} (CCS)::
@ -57,6 +82,12 @@ See {ref}/data-streams.html[Data streams].
// end::data-stream-def[]
--
[[glossary-delete-phase]] delete phase ::
// tag::delete-phase-def[]
The last possible phase in the <<glossary-index-lifecycle,index lifecycle>>.
In the delete phase, an index is no longer needed and can safely be deleted.
// end::delete-phase-def[]
[[glossary-document]] document ::
A document is a JSON document which is stored in Elasticsearch. It is
@ -86,18 +117,33 @@ of data that can be stored in that field, eg `integer`, `string`,
how the value for a field should be analyzed.
[[glossary-filter]] filter ::
// tag::filter-def[]
A filter is a non-scoring <<glossary-query,query>>,
meaning that it does not score documents.
It is only concerned about answering the question - "Does this document match?".
The answer is always a simple, binary yes or no. This kind of query is said to be made
in a {ref}/query-filter-context.html[filter context],
hence it is called a filter. Filters are simple checks for set inclusion or exclusion.
In most cases, the goal of filtering is to reduce the number of documents that have to be examined.
// end::filter-def[]
[[glossary-flush]] flush ::
// tag::flush-def[]
Perform a Lucene commit to write index updates in the transaction log (translog) to disk.
Because a Lucene commit is a relatively expensive operation,
{es} records index and delete operations in the translog and
automatically flushes changes to disk in batches.
To recover from a crash, operations that have been acknowledged but not yet committed
can be replayed from the translog.
Before upgrading, you can explicitly call the {ref}/indices-flush.html[Flush] API
to ensure that all changes are committed to disk.
// end::flush-def[]
[[glossary-follower-index]] follower index ::
// tag::follower-index-def[]
The target index for <<glossary-ccr,{ccr}>>. A follower index exists
in a local cluster and replicates a <<glossary-leader-index,leader index>>.
// end::follower-index-def[]
[[glossary-force-merge]] force merge ::
// tag::force-merge-def[]
@ -107,7 +153,7 @@ and free up the space used by deleted documents.
// end::force-merge-def-short[]
You should not force merge indices that are actively being written to.
Merging is normally performed automatically, but you can use force merge after
<<glossary-rollover,rollover>> to reduce the shards in the old index to a single segment.
See the {ref}/indices-forcemerge.html[force merge API].
// end::force-merge-def[]
@ -123,6 +169,19 @@ before you are ready to archive or delete them.
See the {ref}/freeze-index-api.html[freeze API].
// end::freeze-def[]
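For example, an index that is no longer written to can be frozen with a single request (the index name is illustrative):

[source,console]
----
POST /my-index-000001/_freeze
----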
[[glossary-frozen-index]] frozen index ::
// tag::frozen-index-def[]
An index reduced to a low overhead state that still enables occasional searches.
Frozen indices use a memory-efficient shard implementation and throttle searches to conserve resources.
Searching a frozen index has lower overhead than re-opening a closed index to make it searchable.
// end::frozen-index-def[]
[[glossary-hot-phase]] hot phase ::
// tag::hot-phase-def[]
The first possible phase in the <<glossary-index-lifecycle,index lifecycle>>.
In the hot phase, an index is actively updated and queried.
// end::hot-phase-def[]
[[glossary-id]] id ::
The ID of a <<glossary-document,document>> identifies a document. The
then it will be auto-generated. (also see <<glossary-routing,routing>>)
[[glossary-index]] index ::
+
--
// tag::index-def[]
// tag::index-def-short[]
An optimized collection of JSON documents. Each document is a collection of fields,
the key-value pairs that contain your data.
// end::index-def-short[]
An index is like a _table_ in a relational database. It has a
<<glossary-mapping,mapping>> which contains a <<glossary-type,type>>,
See {ref}/indices-add-alias.html[Add index alias].
// end::index-alias-def[]
--
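As a sketch, a single `_aliases` request can atomically swap an alias from an old index to a new one (the index and alias names are illustrative):

[source,console]
----
POST /_aliases
{
  "actions": [
    { "remove": { "index": "logs-000001", "alias": "logs" } },
    { "add": { "index": "logs-000002", "alias": "logs" } }
  ]
}
----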
[[glossary-index-lifecycle]] index lifecycle ::
// tag::index-lifecycle-def[]
The four phases an index can transition through:
<<glossary-hot-phase,hot>>, <<glossary-warm-phase,warm>>,
<<glossary-cold-phase,cold>>, and <<glossary-delete-phase,delete>>.
For more information, see {ref}/ilm-policy-definition.html[Index lifecycle].
// end::index-lifecycle-def[]
[[glossary-index-lifecycle-policy]] index lifecycle policy ::
// tag::index-lifecycle-policy-def[]
Specifies how an index moves between phases in the index lifecycle and
what actions to perform during each phase.
// end::index-lifecycle-policy-def[]
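For example, a minimal policy might roll an index over in the hot phase and delete it after 90 days (the policy name and thresholds are illustrative):

[source,console]
----
PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
----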
[[glossary-index-pattern]] index pattern ::
// tag::index-pattern-def[]
A string that can contain the `*` wildcard to match multiple index names.
In most cases, the index parameter in an {es} request can be the name of a specific index,
a list of index names, or an index pattern.
For example, if you have the indices `datastream-000001`, `datastream-000002`, and `datastream-000003`,
to search across all three you could use the `datastream-*` index pattern.
// end::index-pattern-def[]
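Using the indices above, a single search request can target all three:

[source,console]
----
GET /datastream-*/_search
----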
[[glossary-index-template]] index template ::
+
--
Defines settings and mappings to apply to new indices that match a simple naming pattern, such as _logs-*_.
// end::index-template-def-short[]
An index template can also attach a lifecycle policy to the new index.
Index templates are used to automatically configure indices created during <<glossary-rollover,rollover>>.
// end::index-template-def[]
--
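As a sketch, a legacy index template that applies settings and attaches a lifecycle policy to matching indices might look like this (all names are illustrative):

[source,console]
----
PUT _template/logs_template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 1,
    "index.lifecycle.name": "my-policy",
    "index.lifecycle.rollover_alias": "logs"
  }
}
----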
[[glossary-leader-index]] leader index ::
// tag::leader-index-def[]
The source index for <<glossary-ccr,{ccr}>>. A leader index exists
on a remote cluster and is replicated to
<<glossary-follower-index,follower indices>>.
// end::leader-index-def[]
[[glossary-local-cluster]] local cluster ::
// tag::local-cluster-def[]
The cluster that pulls data from a <<glossary-remote-cluster,remote cluster>> in {ccs} or {ccr}.
// end::local-cluster-def[]
[[glossary-mapping]] mapping ::
A mapping is like a _schema definition_ in a relational database. Each
or upgrade {es} between incompatible versions.
// end::reindex-def[]
--
[[glossary-remote-cluster]] remote cluster ::
// tag::remote-cluster-def[]
A separate cluster, often in a different data center or locale, that contains indices that
can be replicated or searched by the <<glossary-local-cluster,local cluster>>.
The connection to a remote cluster is unidirectional.
// end::remote-cluster-def[]
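For example, a local cluster can register a remote cluster (here named `leader`, with an illustrative seed address) through the cluster settings API:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "leader": {
          "seeds": ["127.0.0.1:9300"]
        }
      }
    }
  }
}
----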
[[glossary-replica-shard]] replica shard ::
Each <<glossary-primary-shard,primary shard>> can have zero or more
See the {ref}/indices-rollover-index.html[rollover index API].
// end::rollover-def[]
--
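For example, a rollover request can create a new index when any of the stated conditions is met (the alias name and thresholds are illustrative):

[source,console]
----
POST /logs/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000000,
    "max_size": "5gb"
  }
}
----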
[[glossary-rollup]] rollup ::
// tag::rollup-def[]
Summarize high-granularity data into a more compressed format to
maintain access to historical data in a cost-effective way.
// end::rollup-def[]
[[glossary-rollup-index]] rollup index ::
// tag::rollup-index-def[]
A special type of index for storing historical data at reduced granularity.
Documents are summarized and indexed into a rollup index by a <<glossary-rollup-job,rollup job>>.
// end::rollup-index-def[]
[[glossary-rollup-job]] rollup job ::
// tag::rollup-job-def[]
A background task that runs continuously to summarize documents in an index and
index the summaries into a separate rollup index.
The job configuration controls what information is rolled up and how often.
// end::rollup-job-def[]
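As a sketch, a rollup job that summarizes hourly sensor metrics might be configured as follows (the job name, fields, and schedule are illustrative):

[source,console]
----
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "1h"
    }
  },
  "metrics": [
    { "field": "temperature", "metrics": ["min", "max", "avg"] }
  ]
}
----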
[[glossary-routing]] routing ::
When you index a document, it is stored on a single
See also <<glossary-term,term>> and <<glossary-analysis,analysis>>.
[[glossary-type]] type ::
A type used to represent the _type_ of document, e.g. an `email`, a `user`, or a `tweet`.
Types are deprecated and are in the process of being removed.
See {ref}/removal-of-types.html[Removal of mapping types].
// end::type-def[]
[[glossary-warm-phase]] warm phase ::
// tag::warm-phase-def[]
The second possible phase in the <<glossary-index-lifecycle,index lifecycle>>.
In the warm phase, an index is generally optimized for search and no longer updated.
// end::warm-phase-def[]


than the `index.number_of_shards` unless the `index.number_of_shards` value is also 1.
See <<routing-index-partition>> for more details about how this setting is used.
[[ccr-index-soft-deletes]]
// tag::ccr-index-soft-deletes-tag[]
`index.soft_deletes.enabled`::
deprecated:[7.6.0, Creating indices with soft-deletes disabled is deprecated and will be removed in future Elasticsearch versions.]
Indicates whether soft deletes are enabled on the index. Soft deletes can only
be configured at index creation and only on indices created on or after
{es} 6.5.0. Defaults to `true`.
// end::ccr-index-soft-deletes-tag[]
[[ccr-index-soft-deletes-retention-period]]
//tag::ccr-index-soft-deletes-retention-tag[]
`index.soft_deletes.retention_lease.period`::
The maximum period to retain a shard history retention lease before it is
considered expired. Shard history retention leases ensure that soft deletes are
retained during merges on the Lucene index. If a soft delete is merged away
before it can be replicated to a follower, the following process will fail due
to incomplete history on the leader. Defaults to `12h`.
//end::ccr-index-soft-deletes-retention-tag[]
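For example, a leader index could be created with a longer retention lease period to tolerate slower followers (the index name and value are illustrative):

[source,console]
----
PUT /leader-index
{
  "settings": {
    "index.soft_deletes.retention_lease.period": "24h"
  }
}
----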
[[load-fixed-bitset-filters-eagerly]] `index.load_fixed_bitset_filters_eagerly`::
Indicates whether <<query-filter-context, cached filters>> are pre-loaded for


{ccr-cap} will not function if soft deletes are disabled.
[discrete]
=== History retention settings
include::{es-ref-dir}/index-modules.asciidoc[tag=ccr-index-soft-deletes-tag]
include::{es-ref-dir}/index-modules.asciidoc[tag=ccr-index-soft-deletes-retention-tag]


The matrix below summarizes compatibility as described above.
// tag::remote-cluster-compatibility-matrix[]
[cols="^,^,^,^,^,^,^,^"]
|====
| 7+^h| Local cluster
h| Remote cluster | 5.0->5.5 | 5.6 | 6.0->6.6 | 6.7 | 6.8 | 7.0 | 7.1->7.x
| 5.0->5.5 | {yes-icon} | {yes-icon} | {no-icon} | {no-icon} | {no-icon} | {no-icon} | {no-icon}
| 5.6 | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon} | {no-icon}
| 6.0->6.6 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon} | {no-icon}
| 6.7 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {no-icon}
| 6.8 | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon}
| 7.0 | {no-icon} | {no-icon} | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon} | {yes-icon}
| 7.1->7.x | {no-icon} | {no-icon} | {no-icon} | {no-icon} | {yes-icon} | {yes-icon} | {yes-icon}
|====
// end::remote-cluster-compatibility-matrix[]


The following pages have moved or been deleted.
[role="exclude",id="ccr-remedy-follower-index"]
=== Remedying a follower that has fallen behind
See <<ccr-recreate-follower-index,Recreating a follower index>>.
[role="exclude",id="ccr-leader-not-replicating"]
=== Leader index retaining operations for replication
See <<ccr-leader-requirements,Requirements for leader indices>>.
[role="exclude",id="remote-recovery"]
=== Remote recovery process
See <<ccr-remote-recovery,Remote recovery process>>.
[role="exclude",id="ccr-requirements"]
=== Leader index requirements
See <<ccr-leader-requirements,Requirements for leader indices>>.
[role="exclude",id="ccr-overview"]
=== Cross-cluster replication overview
See <<xpack-ccr,Cross-cluster replication>>.
[role="exclude",id="indices-upgrade"]
=== Upgrade API


These {ccr} settings can be dynamically updated on a live cluster with the <<cluster-update-settings,cluster update settings API>>.
==== Remote recovery settings
The following setting can be used to rate-limit the data transmitted during
<<ccr-remote-recovery,remote recoveries>>:
`ccr.indices.recovery.max_bytes_per_sec` (<<cluster-update-settings,Dynamic>>)::
Limits the total inbound and outbound remote recovery traffic on each node.
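For example, the limit can be raised dynamically (the value shown is illustrative):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "ccr.indices.recovery.max_bytes_per_sec": "60mb"
  }
}
----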