The checkpoints in the assertion message stating that the follower
checkpoint is less than the leader checkpoint are backwards. This
commit fixes the message.
The shard follow task executor determines the range of missing translog
operations between the leader shard's global checkpoint and the sequence
number last processed by the current shard follow task.
The chunks coordinator then splits this range into smaller ranges if it
exceeds the configured maximum chunk size. If the entire range fits
within the maximum chunk size, the chunks coordinator has just one
chunk to coordinate.
Each chunk is added to a queue and is processed by the ChunkProcessor,
which reads the translog operations from the leader shard and then
indexes these operations into the follow shard. After that, a new chunk
is polled from the queue and the ChunkProcessor performs the same
actions, until there are no more chunks in the queue to process. At
that point the shard follow task executor determines a new range of
translog operations to process.
This change makes the chunk coordinator poll the chunk queue from
multiple threads concurrently, so that it can better keep up with a
higher indexing load on the leader side.
* Fixed a small issue where each batch would fetch / index the previous batch's last operation
* Made batch size a request param on the follow existing index api request.
This makes it easy to tune this param when running tests from scripts.
* Changed default batch size from 256 to 1024.
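A minimal, self-contained sketch of the chunking and concurrent polling described above; the class, record, and method names (ChunkCoordinatorSketch, ChunkRange, processChunk) are hypothetical and only illustrate the pattern, not the actual implementation.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the missing seqno range is split into chunks of at most
// maxChunkSize operations, and several workers drain the queue concurrently.
final class ChunkCoordinatorSketch {

    record ChunkRange(long fromSeqNo, long toSeqNo) {} // inclusive bounds

    private final Queue<ChunkRange> chunks = new ConcurrentLinkedQueue<>();
    private final int concurrency;
    private final long maxChunkSize;

    ChunkCoordinatorSketch(int concurrency, long maxChunkSize) {
        this.concurrency = concurrency;
        this.maxChunkSize = maxChunkSize;
    }

    void coordinate(long fromSeqNo, long toSeqNo) {
        // split [fromSeqNo, toSeqNo] into non-overlapping chunks
        for (long start = fromSeqNo; start <= toSeqNo; start += maxChunkSize) {
            chunks.add(new ChunkRange(start, Math.min(start + maxChunkSize - 1, toSeqNo)));
        }
        // multiple workers poll the queue at the same time
        ExecutorService workers = Executors.newFixedThreadPool(concurrency);
        for (int i = 0; i < concurrency; i++) {
            workers.submit(() -> {
                ChunkRange chunk;
                while ((chunk = chunks.poll()) != null) {
                    processChunk(chunk);
                }
            });
        }
        workers.shutdown();
    }

    private void processChunk(ChunkRange chunk) {
        // placeholder: read translog ops in this range from the leader, index into the follower
    }
}
```

Splitting with inclusive bounds (from, from + maxChunkSize - 1) is one way to avoid the overlap where a batch re-fetches the previous batch's last operation.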
I forgot to configure a mapping in the follow shard, which caused a
dynamic mapping update (due to type auto creation), but this update was
ignored. Subsequent searches in the follow index then failed because the
mapping was missing (the _id could not be fetched during the fetch
phase). We should investigate at a later stage how best to solve this,
but for now, to avoid confusion, just fail if a dynamic mapping update
happens in a follow shard.
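A small, hypothetical sketch of the intended behaviour; the requiresMappingUpdate flag is a stand-in for however the engine reports that a dynamic mapping update would be needed.

```java
// Hypothetical guard on the follower side when applying a leader operation:
// fail loudly instead of silently ignoring a dynamic mapping update.
final class MappingGuardSketch {

    static void ensureNoDynamicMappingUpdate(boolean requiresMappingUpdate, String followShard) {
        if (requiresMappingUpdate) {
            throw new IllegalStateException("dynamic mapping updates are not allowed on follow shard ["
                    + followShard + "]; create the follow index with the leader index mapping");
        }
    }
}
```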
This commit adds a bulk action for applying translog operations to
an index. This action is then used in the persistent task for CCR to
apply shard changes from a leader shard.
Relates #3147
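A hypothetical sketch of the data flow of such a bulk action: one request carries a batch of leader translog operations for a single follower shard, which applies each operation under the sequence number the leader already assigned. The types and method names are illustrative, not the plugin's actual classes.

```java
import java.util.List;

// Illustrative sketch of applying a batch of leader translog operations on the follower.
final class BulkApplySketch {

    record TranslogOp(long seqNo, String id, byte[] source) {}

    interface FollowerShard {
        // index/delete using the seqNo assigned by the leader, not a locally generated one
        void applyWithLeaderSeqNo(TranslogOp op);
    }

    static void apply(FollowerShard shard, List<TranslogOp> batch) {
        for (TranslogOp op : batch) {
            shard.applyWithLeaderSeqNo(op);
        }
    }
}
```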
This test was broken by an upstream change that no longer guarantees we
see the operations from the upstream translog in the order they appear
in that translog. As such, the assertions in this test were too strong
so this commit relaxes them.
Relates #3153
Operations from a leader shard will be indexed into the engine with the
origin set to primary. The problem here is that we then have primary
semantics in the engine such as assertions about sequence numbers being
unassigned, and we do not have correct semantics for out-of-order
delivery of operations (as we should on a following engine, whether or
not it is primary since the ordering is determined from the
leader). This commit handles this by always using the replica plan for
indexing into a following engine, whether or not the engine is for a
primary shard.
Relates #3000
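A self-contained sketch of the rule described above, with hypothetical types; it only captures the decision that a following engine always uses the replica (non-primary) indexing plan, regardless of whether the shard itself is a primary.

```java
// Hypothetical types; only the decision rule is taken from the description above:
// a following engine always plans indexing as a non-primary (replica), because the
// leader has already fixed the sequence numbers and ordering of the operations.
final class IndexingPlanSketch {

    enum Plan { AS_PRIMARY, AS_NON_PRIMARY }

    static Plan planFor(boolean followingEngine, boolean primaryShard) {
        if (followingEngine) {
            return Plan.AS_NON_PRIMARY; // replica plan even on a primary shard of the follow index
        }
        return primaryShard ? Plan.AS_PRIMARY : Plan.AS_NON_PRIMARY;
    }
}
```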
A following engine even for a primary shard needs to maintain order of
operations semantics as if it were behaving like a replica. That is,
rather than assuming that the order of operations presented to the
engine is the de facto order of operations as is the case for a leader
engine for a primary shard, a following engine must behave as all
replicas do: it resolves the order of operations based on sequence
numbers. This commit makes this the case for following
engines.
Relates #2931
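A simplified, hypothetical sketch of replica-style order resolution: an operation is applied only if its sequence number is higher than the last one seen for that document id, so out-of-order delivery cannot overwrite newer state with older state. The real engine resolves this through its internal machinery; this only illustrates the idea.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: per-document max seqNo tracking to decide whether an
// out-of-order operation is still relevant or already superseded.
final class SeqNoResolutionSketch {

    private final Map<String, Long> maxSeqNoPerDoc = new HashMap<>();

    synchronized boolean shouldApply(String docId, long seqNo) {
        long seen = maxSeqNoPerDoc.getOrDefault(docId, -1L); // -1 == nothing seen yet
        if (seqNo <= seen) {
            return false; // stale or duplicate delivery; a newer operation already won
        }
        maxSeqNoPerDoc.put(docId, seqNo);
        return true;
    }
}
```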
This commit is a first step towards a following engine
implementation. Future work will build on this by using this engine to
execute operations on a following engine from another engine (typically
a remote leader engine) that has already assigned sequence numbers to
such operations.
Relates #2776
This commit sets an index setting for the size of a translog generation
and increases the number of documents indexed to increase the chance of
multiple generations being present when testing getting operations
between two sequence numbers.
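For illustration, a sketch of the kind of index settings such a test could use, assuming the index.translog.generation_threshold_size setting (which controls when a new translog generation is rolled); the exact size is illustrative, not the value used by the test.

```java
import org.elasticsearch.common.settings.Settings;

// Index settings for the scenario described above: a deliberately small translog
// generation threshold so that indexing many documents spreads the operations
// over multiple translog generations.
final class TranslogGenerationTestSettingsSketch {

    static Settings smallTranslogGenerations() {
        return Settings.builder()
                // assumption: this setting controls the translog generation rollover size
                .put("index.translog.generation_threshold_size", "1kb")
                .build();
    }
}
```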
This commit fixes an off-by-one error in the shard changes action test
for getting operations between two sequence numbers. The off-by-one
error arises because sequence numbers are indexed from zero, so if N
documents are indexed then the maximum sequence number starting from
zero would be N - 1.
* xdcr: Add an internal api to read translog operations between two sequence numbers.
This api will be used later by the persistent task for the following index to pull data from the leader index.
The persistent task can fetch the global checkpoint from the shard stats for each primary shard of the leader index.
Based on the global checkpoint of the primary shards of the following index, the persistent task can send several
calls to the internal api added in this commit to replicate changes from the leader index to the follow index in a batched manner.
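A hypothetical sketch of the request/response shape such an internal api could have; the names are illustrative, and only the min/max sequence number contract comes from the description above.

```java
import java.util.List;

// Illustrative shape of the shard-changes call: ask one leader shard for the
// translog operations whose sequence numbers fall in [minSeqNo, maxSeqNo].
final class ShardChangesSketch {

    record Request(String leaderIndex, int shardId, long minSeqNo, long maxSeqNo) {}

    record Response(List<byte[]> serializedTranslogOperations) {}
}
```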
This commit utilizes the pluggable engine factory feature in core to
introduce a pluggable engine factory for XDCR. For now this is only a
skeleton implementation to prove out the pluggable engine factory
concept. Future work will implement a genuine following engine for XDCR.
Relates #2655
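A skeleton sketch under the assumption that core exposes the factory through an EngineFactory interface with a newReadWriteEngine(EngineConfig) method; at this stage the factory would simply build the stock InternalEngine, with a genuine following engine substituted later.

```java
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.engine.EngineConfig;
import org.elasticsearch.index.engine.EngineFactory;
import org.elasticsearch.index.engine.InternalEngine;

// Skeleton only: plug into the engine factory extension point and return the
// stock engine for now; a real following engine would replace this later.
public final class FollowingEngineFactory implements EngineFactory {

    @Override
    public Engine newReadWriteEngine(EngineConfig config) {
        return new InternalEngine(config);
    }
}
```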
This commit introduces the container class for CCR functionality. Future
work will expose more specific CCR functionality to the X-Pack plugin
through this class.
Relates #2704