Keep track of shard follow stats inside the shard follow node task instead of the persistent task status.
By maintaining the shard follow stats inside its node task, stats updates are quicker as
no cluster state update is required. The stats are now transient, meaning that if the task
is going to run on a different node then the stats are gone too. Currently only the processed
global checkpoint is being tracked, and this is restored when a shard follow node task
starts via the indices stats api (the reason for the first change in this change). For other stats
that we may add in the future (like fetch_time, see: https://gist.github.com/s1monw/dba13daf8493bf48431b72365e110717)
it is ok to start from zero in case a shard follow task moves to another node.
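As a rough illustration of what "transient stats on the node task" means, here is a self-contained sketch; all class and method names are hypothetical, not the actual CCR code:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch of transient shard-follow stats held on the node task
 * rather than in the persistent task status.
 */
class ShardFollowNodeTaskSketch {

    // Transient: lives only as long as this task runs on this node.
    private final AtomicLong processedGlobalCheckpoint = new AtomicLong(-1);

    // Called once on task start; the real code would fetch this value from
    // the follow shard via the indices stats api.
    void restore(long globalCheckpointFromIndicesStats) {
        processedGlobalCheckpoint.set(globalCheckpointFromIndicesStats);
    }

    // Called on every processed batch; no cluster state update is involved,
    // so this is cheap and can happen frequently.
    void onBatchProcessed(long maxSeqNoInBatch) {
        processedGlobalCheckpoint.accumulateAndGet(maxSeqNoInBatch, Math::max);
    }

    long processedGlobalCheckpoint() {
        return processedGlobalCheckpoint.get();
    }
}
```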
This limit is based on the estimated number of bytes of the translog
operations that fall between the minimum and maximum request sequence number.
If this limit is met then the shard follow task executor will make sure
that a subsequent shard changes request is performed to fetch the
remaining translog operations.
This limit is needed in order to protect against returning too many
translog operations in a single shard changes response.
Relates to #2436
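A minimal sketch of the byte-based limit described above, under the assumption of a simple in-memory list of operations with precomputed size estimates (`Operation`, `readOperations` and every other name here are illustrative, not the real shard changes API):

```java
import java.util.ArrayList;
import java.util.List;

/** One translog operation with a precomputed size estimate in bytes. */
record Operation(long seqNo, int estimatedSizeInBytes) {}

class ShardChangesResponseSketch {

    /**
     * Collect operations between the minimum and maximum requested sequence
     * number, but stop once the byte limit is met. The caller is expected to
     * issue a follow-up request starting at the first sequence number that
     * was not returned.
     */
    static List<Operation> readOperations(List<Operation> translog,
                                          long minSeqNo, long maxSeqNo,
                                          long maxBytes) {
        List<Operation> result = new ArrayList<>();
        long seenBytes = 0;
        for (Operation op : translog) {
            if (op.seqNo() < minSeqNo || op.seqNo() > maxSeqNo) {
                continue;
            }
            result.add(op);
            seenBytes += op.estimatedSizeInBytes();
            if (seenBytes >= maxBytes) {
                break; // limit met: remaining ops go into the next request
            }
        }
        return result;
    }
}
```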
We check for the existence of both the leader and the follower index, then properly
report to the caller. However, we do not return after reporting a failure. This
causes the caller to receive two exceptions: an IllegalArgumentException, then a
NullPointerException. This commit makes sure to stop the action after reporting
a failure.
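The fix boils down to the usual listener pattern of returning right after `onFailure`. A hedged sketch with a simplified stand-in listener interface (not the actual ActionListener or action classes):

```java
// Simplified listener, standing in for org.elasticsearch.action.ActionListener.
interface Listener<T> {
    void onResponse(T response);
    void onFailure(Exception e);
}

class FollowIndexActionSketch {

    void start(String leaderIndex, String followIndex, Listener<Void> listener) {
        if (!indexExists(leaderIndex)) {
            listener.onFailure(new IllegalArgumentException("leader index [" + leaderIndex + "] does not exist"));
            return; // the missing return is what caused the follow-up NullPointerException
        }
        if (!indexExists(followIndex)) {
            listener.onFailure(new IllegalArgumentException("follow index [" + followIndex + "] does not exist"));
            return;
        }
        // ... continue with the action only if both indices exist ...
        listener.onResponse(null);
    }

    private boolean indexExists(String index) {
        return true; // placeholder for a cluster state lookup
    }
}
```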
Looks like we need to split out the tests of core classes into core,
while the index-lifecycle ones stay in index-lifecycle.
I believe I got everything, although I may have missed at least one thing.
Checked status with:
$ ./gradlew :x-pack-elasticsearch:plugin:index-lifecycle:check -Dtests.seed=39838421912001B4
$ ./gradlew :x-pack-elasticsearch:plugin:core:check -Dtests.seed=39838421912001B4
Other things done in this PR:
- removal of a few unused variables/thrown exceptions/imports
- fix TimeseriesLifecycleTypeTests: an all-null AllocateAction was being created
- fix AllocateActionTests: whoops, -Dtests.seed=39838421912001B4 resulted in two `null`s
  and an empty map, which resulted in a test failure.
Specifically this change makes it optional to:
* Specify `includes`, `excludes` and `requires` maps in the allocate action, as long as at least one of the options is specified and is not an empty map
* Specify an `after` parameter on a phase. If no `after` value is specified, `TimeValue.ZERO` is used and the phase will be moved to as soon as the previous phase reports `ACTIONS COMPLETED`. `after` is always non-null when we are serialising the Phase.
* Specify a `type` for a LifecyclePolicy. If no `type` is specified, `TimeSeriesLifecycleType.INSTANCE` is used since this is currently the only production `type`. `type` is always non-null when we are serialising the LifecyclePolicy. (The defaulting rules are sketched below.)
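A small sketch of those defaulting rules, using `java.time.Duration` in place of the actual TimeValue class and a hypothetical `"time_series"` string for the type name:

```java
import java.time.Duration;

class PhaseDefaultsSketch {

    static Duration effectiveAfter(Duration parsedAfter) {
        // No `after` given: the phase starts as soon as the previous phase
        // reports ACTIONS COMPLETED.
        return parsedAfter != null ? parsedAfter : Duration.ZERO;
    }

    static String effectiveType(String parsedType) {
        // No `type` given: fall back to the only production type.
        // "time_series" is a hypothetical stand-in for TimeSeriesLifecycleType.INSTANCE.
        return parsedType != null ? parsedType : "time_series";
    }
}
```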
This commit enables the run task for ccr by specifying that the ccr
project not be evaluated until after core is evaluated. This is
important since ccr is alphabetically before core and thus Gradle
evaluates it first.
Relates #3665
This PR adds a new setting called `index.lifecycle.date` that
the ShrinkAction will be responsible for populating in the newly created index.
This way, we can continue to know when we should be executing the next phase
relative to the original index creation date, and not that of the shrunken index.
A node can stop being the master node whilst it is running, e.g. if it can’t access `minimumMasterNodes` number of master eligible nodes. Because of this we need logic in `IndexLifecycleService` that cancels the scheduled job if the node is no longer master and re-adds the job if the node becomes master again.
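A rough sketch of that cancel/re-add logic using a plain `ScheduledExecutorService`; the real `IndexLifecycleService` hooks into cluster state updates instead, so everything below is illustrative:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/**
 * Schedules a job only while this node is master: cancel it when mastership
 * is lost, re-add it when mastership is regained.
 */
class MasterScopedSchedulerSketch {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> job;

    // Would be called from a cluster state listener in the real service.
    synchronized void onClusterStateChanged(boolean localNodeIsMaster, Duration pollInterval, Runnable task) {
        if (localNodeIsMaster && job == null) {
            job = scheduler.scheduleAtFixedRate(task, 0, pollInterval.toMillis(), TimeUnit.MILLISECONDS);
        } else if (!localNodeIsMaster && job != null) {
            job.cancel(false);
            job = null;
        }
    }
}
```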
This does the following in sequential service polls (sketched after this list):
1. sets the index to read-only and runs shrink with a modified `index.lifecycle.name` setting set to `null`.
2. checks to see if shrink is complete; if it is, sets the target index's `index.lifecycle.*` settings to the original index's values.
3. if not complete, just waits until the next iteration.
4. if operating on the shrunken index, deletes the old index and adds its name as an alias to the shrunken index.
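A minimal sketch of these sequential polls as a tiny state machine (hypothetical names and shape, not the actual ShrinkAction code):

```java
class ShrinkPollSketch {

    enum Step { SET_READ_ONLY_AND_SHRINK, WAIT_FOR_SHRINK, SWAP_ALIAS_AND_DELETE, DONE }

    private Step step = Step.SET_READ_ONLY_AND_SHRINK;

    void onPoll() {
        switch (step) {
            case SET_READ_ONLY_AND_SHRINK -> {
                // 1. set the index read-only, start the shrink with
                //    index.lifecycle.name nulled out on the target
                step = Step.WAIT_FOR_SHRINK;
            }
            case WAIT_FOR_SHRINK -> {
                if (shrinkComplete()) {
                    // 2. copy the original index.lifecycle.* settings onto the target
                    step = Step.SWAP_ALIAS_AND_DELETE;
                } // 3. otherwise do nothing and wait for the next poll
            }
            case SWAP_ALIAS_AND_DELETE -> {
                // 4. delete the old index, add its name as an alias on the shrunken index
                step = Step.DONE;
            }
            case DONE -> { }
        }
    }

    private boolean shrinkComplete() {
        return true; // placeholder for a recovery/health check
    }
}
```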
* Adds Allocate lifecycle action
* Addresses review comments
Still need to make a change in core for the FilterAllocationDecider to make the execute logic simpler
* Addresses more review comments
* Adds randomMap method to AllocateActionTests
* Addresses further review comments
The checkpoints are backwards in the assertion message stating that the
follower checkpoint is less than the leader checkpoint. This commit fixes
this message.
* Improves handling of exceptions in Index Lifecycle
This change improves a few different aspects:
* If an exception occurs executing the lifecycle of one index, it is caught and logged, and other indices are still processed (see the sketch after this list)
* If the lifecycle policy specified in the settings does not exist an error is logged
* Fixes the exception when the delete action is run which occurs because Phase attempts to update the phase and action settings for the deleted index. A `LifecycleAction.indexSurvives()` method is introduced which defaults to `true` but can be overridden to indicate whether the index survives following completion of the action.
* Adds test
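A sketch of that per-index error isolation, using `java.util.logging` and hypothetical names in place of the real runner:

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

class LifecycleRunnerSketch {

    private static final Logger logger = Logger.getLogger("index-lifecycle");

    /** One misbehaving index no longer aborts the whole run. */
    void triggerPolicies(List<String> indices) {
        for (String index : indices) {
            try {
                runLifecycle(index);
            } catch (Exception e) {
                logger.log(Level.SEVERE, "lifecycle execution failed for index [" + index + "]", e);
                // carry on: the remaining indices are still processed
            }
        }
    }

    private void runLifecycle(String index) {
        // placeholder for executing the current phase/action of the index
    }
}
```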
* Fix InternalIndexLifecycleContext to update state in memory
The internal and the mock index-lifecycle-context implementations differed
in that the InternalIndexLifecycleContext assumed no one would be using it after
it mutated state. This is not the case. We assume that the current context is updated after
`#setAction` is called so that the listener can then appropriately use the newly modified
cluster state. Since idxMeta was not being updated, any call to `context.getAction` was stale,
returning either null or the previous action, not the new action that was set by `#setAction`.
Same goes for `setPhase`.
This PR should fix this so that the Mock and Internal implementations are more in line.
The shard follow task executor determines the range of missing translog operations
between the leader shard's global checkpoint and the last sequence number known
to have been processed by the current shard follow task.
The chunk coordinator can then split this range up into smaller ranges
if the requested range is above the configured max chunk size; if the entire
range is smaller than the max chunk size, the chunk coordinator has just one
chunk to coordinate.
Each chunk is added to a queue and is processed by the ChunkProcessor,
which reads the translog ops from the leader shard and then indexes
these translog ops into the follow shard. After that a new chunk is polled
from the queue and the ChunkProcessor performs the same actions, until
there are no more chunks in the queue to process. After that the shard
follow task executor will determine a new range of translog operations
to process.
This change makes the chunk coordinator start polling from the chunk
queue with multiple threads at the same time, to better handle a higher
indexing load on the leader side.
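To make the mechanism concrete, here is a minimal, self-contained sketch of a queue-based chunk coordinator with several workers polling concurrently; the class and method names are illustrative, not the actual CCR code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/** A [from, to] range of sequence numbers, inclusive. */
record Chunk(long fromSeqNo, long toSeqNo) {}

class ChunkCoordinatorSketch {

    private final BlockingQueue<Chunk> chunks = new LinkedBlockingQueue<>();

    void coordinate(long fromSeqNo, long toSeqNo, long maxChunkSize, int concurrentProcessors) {
        // chunk the requested range up into ranges of at most maxChunkSize ops
        for (long from = fromSeqNo; from <= toSeqNo; from += maxChunkSize) {
            chunks.add(new Chunk(from, Math.min(from + maxChunkSize - 1, toSeqNo)));
        }
        // multiple workers poll the shared queue at the same time
        ExecutorService workers = Executors.newFixedThreadPool(concurrentProcessors);
        for (int i = 0; i < concurrentProcessors; i++) {
            workers.execute(() -> {
                Chunk chunk;
                while ((chunk = chunks.poll()) != null) {
                    process(chunk); // read ops from the leader, index into the follower
                }
            });
        }
        workers.shutdown();
    }

    private void process(Chunk chunk) {
        // placeholder for the ChunkProcessor fetch-and-index round trip
    }
}
```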
`IndexLifecycleInitialisationIT.testMasterFailover()` intermittently failed because the 10 second timeout for checking that the index had been deleted was sometimes not long enough with the poll interval set to 3 seconds. This change sets the poll interval to 1 second for the test so that the lifecycle is more responsive. This also means the default value for the poll interval can be safely changed without affecting the test.
This change moves the Action classes and referenced data model classes to the new :x-pack-elasticsearch:plugin:core project in preparation for splitting the x-pack features into their own gradle modules.
Note that the TransportAction classes had to be promoted to their own class files (rather than being inner classes of their Action) so they can remain in the plugin project (and will later be moved to the `index-lifecycle` project when it is created).
To clean up the parsing of the LifecyclePolicy this change moves the LifecycleType to its own class so it can be created in the normal parsing of LifecyclePolicy rather than having to parse to an intermediary object first. The LifecycleType is an interface which can be implemented for different lifecycle types. These types should be singletons and are registered with the NamedXContentRegistry and NamedWriteableRegistry only, so they are available when reading from a stream or parsing.
Removes the poll interval from the IndexLifecycleMetadata and introduces it in
the form of a configurable cluster setting. Changes to this poll interval setting
will be reflected in the Lifecycle Scheduler.
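A sketch of how a dynamically updatable poll interval can drive a scheduler; the real plugin registers an update consumer on the cluster setting, which the plain `reschedule` method below merely stands in for:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class PollIntervalSketch {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable pollJob;
    private ScheduledFuture<?> scheduled;

    PollIntervalSketch(Runnable pollJob, Duration initialInterval) {
        this.pollJob = pollJob;
        reschedule(initialInterval);
    }

    // Invoked when the setting changes: the scheduler picks up the new
    // interval by cancelling and re-registering the job.
    synchronized void reschedule(Duration interval) {
        if (scheduled != null) {
            scheduled.cancel(false);
        }
        scheduled = scheduler.scheduleAtFixedRate(
                pollJob, interval.toMillis(), interval.toMillis(), TimeUnit.MILLISECONDS);
    }
}
```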
This includes changing NOCOMMIT comments to NORELEASE comments so the build passes with them. We have tasks in GH for all these NORELEASE comments so they should be caught before merging to master.
This action will rollover an index when executed if the provided conditions are met.
Users may specify the maximum age, the maximum index size in bytes, or the maximum number of documents as conditions for rollover.
When the action executes it first checks the local cluster state to find out if the alias exists on the index. If the alias does not exist, then the index was either rolled over by a previous run or something else has rolled over the index, so the action can be marked as completed. If the index still has the alias set, the action will make a rollover index request using the Client. When that request returns and the listener is called, the action will only be marked as complete if the response indicates the index was rolled over. If the index was not rolled over (because the conditions are not yet met), the action is not marked as complete and will be re-evaluated on the next call to execute.
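A condensed sketch of that execute flow; the class shape and names are illustrative, and the real action works asynchronously via listeners rather than the synchronous calls shown here:

```java
class RolloverActionSketch {

    boolean complete = false;

    void execute(String index, String alias) {
        if (!aliasPointsAt(alias, index)) {
            // Already rolled over, by a previous run or by someone else.
            complete = true;
            return;
        }
        // Issue the rollover request; in the real action this goes through the Client.
        boolean rolledOver = rollover(alias);
        if (rolledOver) {
            complete = true;
        }
        // Otherwise the conditions were not met: stay incomplete and
        // re-evaluate on the next call to execute.
    }

    private boolean aliasPointsAt(String alias, String index) {
        return true; // placeholder for a local cluster state lookup
    }

    private boolean rollover(String alias) {
        return false; // placeholder for the rollover index request/response
    }
}
```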
* Fixed a small issue where each batch would fetch / index the previous batch's last operation
* Made batch size a request param on the follow existing index api request.
This makes it easy to tune this param when running tests from scripts.
* Changed default batch size from 256 to 1024.
I forgot to configure a mapping in the follow shard, which caused
a dynamic mapping update (due to type auto creation), but this was ignored.
Subsequent searches in the follow index then failed due to a missing mapping.
(The _id couldn't be fetched during the fetch phase, because the mapping was missing.)
We should at a later stage investigate how to best solve this, but for
now, to avoid confusion, just fail if a dynamic mapping update happens in a
follow shard.
This commit adds a bulk action for applying translog operations in bulk to
an index. This action is then used in the persistent task for CCR to
apply shard changes from a leader shard.
Relates #3147
This test was broken by an upstream change that no longer guarantees we
see the operations from the upstream translog in the order they appear
in that translog. As such, the assertions in this test were too strong
so this commit relaxes them.
Relates #3153
This means that the result of the action can now be async, and we can then implement moving immediately to the next action if the current one is complete.
The client and index metadata have now been abstracted away from the Lifecycle classes behind IndexLifecycleContext. This allow us to test the state machine without having to worry about how the state is persisted and read. It also makes the classes much easier to read and reason about.
The lifecycles are stored as custom metadata objects in the cluster state. This change also cleans up the parsing of the lifecycle state so that it can be parsed properly
Operations from a leader shard will be indexed into the engine with the
origin set to primary. The problem here is that we then have primary
semantics in the engine such as assertions about sequence numbers being
unassigned, and we do not have correct semantics for out-of-order
delivery of operations (as we should on a following engine, whether or
not it is primary since the ordering is determined from the
leader). This commit handles this by always using the replica plan for
indexing into a following engine, whether or not the engine is for a
primary shard.
Relates #3000
A following engine even for a primary shard needs to maintain order of
operations semantics as if it were behaving like a replica. That is,
rather than assuming that the order of operations presented to the
engine is the de facto order of operations as is the case for a leader
engine for a primary shard, a following engine must behave like all
replicas behave which is that they resolve order of operations based on
sequence numbers. This commit causes this to be the case for following
engines.
Relates #2931
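A toy illustration of sequence-number-based resolution of out-of-order operations, assuming an in-memory map in place of a real Lucene-backed engine:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * An operation only wins if its sequence number is higher than what is
 * already indexed for that document, which is how replicas (and thus a
 * following engine) resolve order of operations.
 */
class SeqNoResolutionSketch {

    record Doc(long seqNo, String source) {}

    private final Map<String, Doc> docs = new HashMap<>();

    void index(String id, long seqNo, String source) {
        Doc current = docs.get(id);
        if (current == null || seqNo > current.seqNo()) {
            docs.put(id, new Doc(seqNo, source));
        }
        // else: out-of-order delivery of an older operation; drop it
    }

    String get(String id) {
        Doc doc = docs.get(id);
        return doc == null ? null : doc.source();
    }
}
```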
This test verifies that we have sufficient failover code so that
a newly elected master re-registers old schedules and fires them off.
All times are relative to the index creation date.
This commit is a first step towards a following engine
implementation. Future work will build on this by using this engine to
execute operations on a following engine from another engine (typically
a remote leader engine) that has already assigned sequence numbers to
such operations.
Relates #2776
This commit sets an index setting for the size of a translog generation
and increases the number of documents indexed to increase the chance of
multiple generations being present when testing getting operations
between two sequence numbers.
This commit fixes an off-by-one error in the shard changes action test
for getting operations between two sequence numbers. The off-by-one
error arises because sequence numbers are indexed from zero, so if N
documents are indexed then the maximum sequence number starting from
zero would be N - 1.
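In other words, a worked version of the arithmetic:

```java
class SeqNoArithmeticSketch {
    public static void main(String[] args) {
        int numDocs = 100;
        // Sequence numbers are assigned from zero, so after indexing N
        // documents the maximum sequence number is N - 1, not N.
        long maxSeqNo = numDocs - 1; // 99
        // And an inclusive range [from, to] contains to - from + 1 operations.
        long from = 10, to = 42;
        long expectedOps = to - from + 1; // 33
        System.out.println(maxSeqNo + " " + expectedOps);
    }
}
```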
* xdcr: Add an internal api to read translog operations between a sequence number range.
This api will be used later by the persistent task for the following index to pull data from the leader index.
The persistent task can fetch the global checkpoint from the shard stats for each primary shard of the leader index.
Based on the global checkpoint of the primary shards of the following index, the persistent task can send several
calls to the internal api added in this commit to replicate changes from the leader index to the follow index in a batched manner.
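A hedged sketch of the batched pull loop this enables; the method names below stand in for the real shard stats and shard changes calls:

```java
import java.util.List;

class FollowShardPullerSketch {

    void pull(long processedGlobalCheckpoint, long leaderGlobalCheckpoint, int batchSize) {
        long from = processedGlobalCheckpoint + 1;
        while (from <= leaderGlobalCheckpoint) {
            long to = Math.min(from + batchSize - 1, leaderGlobalCheckpoint);
            // read ops [from, to] via the internal shard changes api
            List<Object> ops = readTranslogOperations(from, to);
            applyToFollower(ops);
            from = to + 1;
        }
    }

    private List<Object> readTranslogOperations(long fromSeqNo, long toSeqNo) {
        return List.of(); // placeholder for the leader-side read
    }

    private void applyToFollower(List<Object> ops) {
        // placeholder for indexing the ops into the follow shard
    }
}
```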
Feature consists of a shell of a persistent task which will later be used to inspect the index settings and apply curator-like changes to the index (move from hot to warm, rollover, shrink, etc.)
This commit utilizes the pluggable engine factory feature in core to
introduce a pluggable engine factory for XDCR. For now this is only a
skeleton implementation to prove out the pluggable engine factory
concept. Future work will implement a genuine following engine for XDCR.
Relates #2655
This commit introduces the container class for CCR functionality. Future
work will expose more specific CCR functionality to the X-Pack plugin
through this class.
Relates #2704