The following stats are now being tracked:
1) The total number of times that auto following a leader index succeeded.
2) The total number of times that auto following a leader index failed.
3) The total number of times that fetching a remote cluster state failed.
4) The most recent 256 auto follow failures, per auto followed leader index
(e.g. a create_and_follow api call fails) or per cluster alias
(e.g. fetching the remote cluster state fails).
Each auto follow run now produces a result that is used to update
the stats tracked in AutoFollowCoordinator.
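As a rough sketch (class and member names here are illustrative assumptions, not the actual AutoFollowCoordinator code), such a stats holder could look like this:
```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stats holder; the real AutoFollowCoordinator keeps equivalent counters.
class AutoFollowStats {
    private long numberOfSuccessfulFollowIndices;
    private long numberOfFailedFollowIndices;
    private long numberOfFailedRemoteClusterStateRequests;

    // Bounded map of the most recent failures, keyed by leader index or cluster alias.
    private final Map<String, Exception> recentAutoFollowErrors =
        new LinkedHashMap<String, Exception>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Exception> eldest) {
                return size() > 256; // keep only the most recent 256 failures
            }
        };

    synchronized void onFollowSuccess() {
        numberOfSuccessfulFollowIndices++;
    }

    synchronized void onFollowFailure(String leaderIndex, Exception e) {
        numberOfFailedFollowIndices++;
        recentAutoFollowErrors.put(leaderIndex, e);
    }

    synchronized void onRemoteClusterStateFailure(String clusterAlias, Exception e) {
        numberOfFailedRemoteClusterStateRequests++;
        recentAutoFollowErrors.put(clusterAlias, e);
    }
}
```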
Relates to #33007
* MINOR: Drop Redundant Ctx. Check in ScriptService
* This check is completely redundant: the expression script
engine will throw anyway (and with a similar message) for
those contexts that it cannot compile. Moreover, the update context
is not the only context that is unsupported by the expression engine
at this point, so handling the update context separately here makes
no sense.
* Implement xpack.monitoring.elasticsearch.collection.enabled setting (see the sketch after this list)
* Fixing line lengths
* Updating constructor calls in test
* Removing unused import
* Fixing line lengths in test classes
* Make monitoringService.isElasticsearchCollectionEnabled() return true for tests
* Remove wrong expectation
* Adding unit tests for new flag to be false
* Fixing line wrapping/indentation for better readability
* Adding docs
* Fixing logic in ClusterStatsCollector::shouldCollect
* Rebasing with master and resolving conflicts
* Simplifying implementation by gating scheduling
* Doc fixes / improvements
* Making methods package private
* Fixing wording
* Fixing method access
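For illustration only, the sketch referenced above: a flag like xpack.monitoring.elasticsearch.collection.enabled could be declared with Elasticsearch's `Setting` API roughly as follows; the default value and the chosen properties are assumptions, not necessarily what the real monitoring setting uses:
```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

// Hypothetical declaration; the real setting lives in the monitoring plugin.
public final class MonitoringSettingsSketch {
    public static final Setting<Boolean> ELASTICSEARCH_COLLECTION_ENABLED =
        Setting.boolSetting(
            "xpack.monitoring.elasticsearch.collection.enabled",
            true,                 // assumed default: collection stays on unless disabled
            Property.Dynamic,     // assumed: can be toggled at runtime
            Property.NodeScope);
}
```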
This PR removes fields that are not actually used by the Monitoring UI. This will greatly simplify the eventual migration to using Metricbeat for monitoring Elasticsearch (see https://github.com/elastic/beats/pull/8260#discussion_r215885868 for more context and discussion around removing these fields from ES collection).
This commit changes the sanity check which ensures the test task was
properly replaced with randomized testing to be a per-project check,
instead of a global one. The previous global check assumed all test
tasks within the root project and below should use randomized testing,
but that is not the case for a multi-project build in which only one
project is an elasticsearch plugin. While the new check is not able to
emit all of the failed replacements in one error message, the efficacy
of the check remains.
* [CCR] Do not unnecessarily wrap the fetch exception in an Elasticsearch exception and
properly map the fetch_exceptions.exception field as an object.
The extra caused_by level is not necessary here:
```
"fetch_exceptions": [
{
"from_seq_no": 1,
"retries": 106,
"exception": {
"type": "exception",
"reason": "[index1] IndexNotFoundException[no such index]",
"caused_by": {
"type": "index_not_found_exception",
"reason": "no such index",
"index_uuid": "_na_",
"index": "index1"
}
}
}
],
```
When a leader index is created, it may not have a mapping yet.
Currently, if you follow such an index, the shard follow tasks fail with
a NoSuchElementException, because they expect a single mapping.
This commit fixes that by allowing a leader index to not yet have
a mapping.
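A minimal sketch of the defensive pattern in plain Java (the helper below is hypothetical, not the actual shard follow task code):
```java
import java.util.Collection;

class MappingSyncSketch {
    // Hypothetical helper: returns the leader's single mapping, or null when the
    // leader index has no mapping yet (e.g. a freshly created index).
    static String selectLeaderMapping(Collection<String> leaderMappings) {
        if (leaderMappings.isEmpty()) {
            // The old code path effectively did leaderMappings.iterator().next(),
            // which throws NoSuchElementException on an empty collection.
            return null;
        }
        return leaderMappings.iterator().next();
    }
}
```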
We can't rely on the leaf reader ordinal in a wrapped reader since
it might not correspond to the ordinal in the SegmentInfos for its
SegmentCommitInfo.
Relates to #32844
Closes #33689
Closes #33755
* Same fix idea as in #10666a4 to prevent background
threads trying to reconnect after the tests are done from
throwing `ExecutionCancelledException` and breaking the test
* Closes #30714
Allows skipping shard balancing when the cluster_concurrent_rebalance threshold is already reached, which cuts down the time spent in the rebalance method of BalancedShardsAllocator.
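Reduced to a hedged sketch (names are illustrative, not the actual BalancedShardsAllocator code), the early exit boils down to:
```java
class RebalanceSketch {
    // If the number of shards already relocating meets the
    // cluster_concurrent_rebalance threshold, skip the expensive balancing pass.
    static boolean shouldAttemptRebalance(int relocatingShards, int clusterConcurrentRebalance) {
        return relocatingShards < clusterConcurrentRebalance;
    }
}
```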
This commit adds the Create Rollup Job API to the high level REST
client. It supersedes #32703 and adds dedicated request/response
objects so that it does not depend on server side components.
Related #29827
Currently `IndexMetadata#getCustomData(...)` wraps the custom metadata
in an unmodifiable map, but in case there is no entry for the specified
key an NPE is thrown by Collections.unmodifiableMap(...). This is not
ideal in case callers would like to throw an exception with a specific message
(like in the case of ccr, to indicate that the follow index was not created
by the create_and_follow api and is therefore incompatible as a follow index).
I think making `DiffableStringMap` itself immutable is better than just wrapping
custom metadata with `Collections.unmodifiableMap(...)` in all methods that access it.
Also removed the `equals()`, `hashCode()` and `toString()` methods of
`DiffableStringMap`, because `AbstractMap` already implements these methods.
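A simplified sketch of the resulting accessor behavior (field and class names are assumptions): with the map values already immutable, the getter can return the entry directly and let callers raise their own, more descriptive error when the key is missing.
```java
import java.util.Map;

class IndexMetaDataSketch {
    // Values are assumed to be immutable maps (e.g. an immutable DiffableStringMap),
    // so no defensive Collections.unmodifiableMap(...) wrapping is needed here.
    private final Map<String, Map<String, String>> customData;

    IndexMetaDataSketch(Map<String, Map<String, String>> customData) {
        this.customData = customData;
    }

    // Returns null when there is no entry for the key, so a caller such as CCR can
    // report a specific message (e.g. "not created by the create_and_follow api").
    Map<String, String> getCustomData(String key) {
        return customData.get(key);
    }
}
```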
This commit switches the joda time backcompat in scripting to use
augmentation over ZonedDateTime. The augmentation methods provide
compatibility with the missing methods between joda's DateTime and
java's ZonedDateTime. Due to getDayOfWeek returning an enum in the java
API, ZonedDateTime is wrapped so that the method can return an int as
joda time does. The java time api version is renamed to
getDayOfWeekEnum, which will be kept through 7.x for compatibility while
users switch back to getDayOfWeek once joda compatibility is removed.
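Conceptually the wrapper looks roughly like this (the class name is hypothetical); both joda and `DayOfWeek.getValue()` use Monday = 1 through Sunday = 7, so the int conversion is a direct mapping:
```java
import java.time.DayOfWeek;
import java.time.ZonedDateTime;

class JodaCompatZonedDateTime {
    private final ZonedDateTime zdt;

    JodaCompatZonedDateTime(ZonedDateTime zdt) {
        this.zdt = zdt;
    }

    // Joda-style accessor: returns an int (Monday = 1 ... Sunday = 7).
    public int getDayOfWeek() {
        return zdt.getDayOfWeek().getValue();
    }

    // The java.time enum flavour remains available under a different name
    // during the transition period.
    public DayOfWeek getDayOfWeekEnum() {
        return zdt.getDayOfWeek();
    }
}
```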
This commit adds a guard around the rare case that no documents in the
10 iterations actually have any values, which would make the warning check
incorrect.
Closes #32779
Rather than scheduling pings to the leader index when we are caught up
to the leader, this commit introduces long polling for changes. We will
fire off a request to the leader which, if we are already caught up, will
enter a poll on the leader side to listen for global checkpoint
changes. These polls will time out after a default of one minute, but the
timeout can also be specified when creating the following task. We use these
timeouts to keep statistics up to date, to avoid exaggerating the time
since the last fetch, and to avoid pipes being broken.
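An illustrative follower-side fetch loop under the long-polling model (all names below are hypothetical; the real shard changes request/response machinery is considerably more involved):
```java
// Sketch of the follower's fetch loop under the long-polling model.
final class ChangesPollerSketch {

    static final class ChangesResponse {
        final long maxSeqNo;
        final boolean empty;

        ChangesResponse(long maxSeqNo, boolean empty) {
            this.maxSeqNo = maxSeqNo;
            this.empty = empty;
        }
    }

    interface Leader {
        // Blocks on the leader until the global checkpoint advances past fromSeqNo
        // or the poll timeout elapses, whichever comes first.
        ChangesResponse fetchChanges(long fromSeqNo, long pollTimeoutMillis);
    }

    static void followLoop(Leader leader, long pollTimeoutMillis) {
        long fromSeqNo = 0;
        while (Thread.currentThread().isInterrupted() == false) {
            ChangesResponse response = leader.fetchChanges(fromSeqNo, pollTimeoutMillis);
            if (response.empty == false) {
                fromSeqNo = response.maxSeqNo + 1; // apply the operations, then advance
            }
            // A timed-out (empty) poll simply loops again; it still refreshes follower
            // statistics so "time since last fetch" is not exaggerated.
        }
    }
}
```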
When executing CCR REST tests, it is going to be expected, after global
checkpoint polling goes in, that shard changes tasks can still be pending
at the end of the test. One way to deal with this is to set a low
timeout on these polls, but that would mean we are not executing our
REST tests with our default production settings and would instead be
using an unrealistically low timeout. Alternatively, since we expect these
tasks to be there, we can simply not count them against the test. That is
what this commit does.
This commit moves these REST tests (possibly temporarily) to a
sub-project of ccr. We do this (again, possibly temporarily) to keep
them within the ccr sub-project, as there are changes within 6.x that
prevent these from being in the top-level project (the cluster formation
tasks are trying to install x-pack-ccr into the
integ-test-zip). Therefore, we isolate these for now until we can
understand why there are differences between 6.x and master.
With the introduction of immutable workers into our CI, we now have a
problem where we would have to download dependencies on every single
build. To this end our infrastructure team has introduced the
possibility to download dependencies one time and then bake these into
the base immutable worker image. For this, we introduce our version of
this special script which runs a task that downloads all dependencies of
all configurations. With this script, our infrastructure team will be
rebuilding these images on an at-least daily basis. This helps us avoid
having to download the dependencies for every single build.
This commit is a cleanup of the assertions in global checkpoint
listeners, simplifying them and adding some messages to them in case the
assertions trip.
When developing ccr it is not ideal if tests are spread across multiple modules.
Even though the classes these tests exercise are in the core module, it is easier
if these tests live in the ccr module, in order to avoid running the test task
in the core module, which results in running many non-ccr tests.
This way when developing ccr we can run locally:
./gradlew x-pack:plugin:core:precommit x-pack:plugin:ccr:check
before pushing to PR branches and be confident that the PR build passes,
without running x-pack:plugin:core:check task.
Ensure that the SSLConfigurationReloaderTests can run with JDK 11
by pinning the HttpClient TLS version to TLS1.2. This is necessary
because even if the MockWebServer is set to use TLS1.2, we don't
set its enabled protocols, so if it receives a TLS1.3 request (which
is the default behavior for HttpClient in JDK 11), it will use TLS1.3
and the original issue will manifest again.
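For illustration, pinning an Apache HttpClient to TLSv1.2 can be done roughly as below; the SSLContext setup is elided and this is a sketch, not necessarily the test's exact code:
```java
import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

class Tls12ClientSketch {
    static CloseableHttpClient buildClient(SSLContext sslContext) {
        SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(
            sslContext,
            new String[] { "TLSv1.2" },   // restrict the client to TLS 1.2
            null,                          // default cipher suites
            SSLConnectionSocketFactory.getDefaultHostnameVerifier());
        return HttpClients.custom().setSSLSocketFactory(socketFactory).build();
    }
}
```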
Relates #33127
Resolves #32124
When we add a global checkpoint listener, it also carries along with
it a value that it thinks is the current global checkpoint. This value
can be above the actual global checkpoint on a shard if the listener
knows the global checkpoint from another shard copy (e.g., the primary),
and the current shard copy is lagging behind. Today we notify the
listener whenever the global checkpoint advances, regardless of whether it goes
above the current global checkpoint known to the listener. This commit
reworks this implementation. Rather than thinking of the value
associated with the listener as the current global checkpoint known to
the listener, we think of it as the value that the listener is waiting
for the global checkpoint to advance to (inclusive). Now instead of
notifying all waiting listeners when the global checkpoint advances, we
only notify those that are waiting for a value not larger than the
actual global checkpoint that we advanced to.
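A condensed sketch of the new notification rule, with the listener bookkeeping simplified to a map from listener to the value it waits for (the real implementation differs in structure):
```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.function.LongConsumer;

class GlobalCheckpointListenersSketch {
    // Each listener is associated with the global checkpoint value it is waiting for.
    private final Map<LongConsumer, Long> listeners = new HashMap<>();

    synchronized void add(long waitingForGlobalCheckpoint, LongConsumer listener) {
        listeners.put(listener, waitingForGlobalCheckpoint);
    }

    // Called when the shard's global checkpoint advances to updatedGlobalCheckpoint.
    synchronized void globalCheckpointUpdated(long updatedGlobalCheckpoint) {
        Iterator<Map.Entry<LongConsumer, Long>> it = listeners.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<LongConsumer, Long> entry = it.next();
            if (updatedGlobalCheckpoint >= entry.getValue()) { // waited-for value reached (inclusive)
                it.remove();
                entry.getKey().accept(updatedGlobalCheckpoint);
            }
        }
    }
}
```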
This is something that we were already doing when sorting by field, which is
now also done when sorting by score. As-is this change will speed up top-k
`term` queries. This could work for `match_all` queries as well when we
implement the `setMinCompetitiveScore` API on their Scorer.
Changes the format of log events in the audit logfile.
It also changes the filename suffix from `_access` to `_audit`.
The new entry format is consistent with Elastic Common Schema.
Entries are formatted as JSON with no nested objects and field
names have a dotted syntax. Moreover, log entries themselves
are not separated by commas and there is exactly one entry per line.
In addition, entry fields are ordered, unlike in a typical JSON doc,
so that a human reader does not have to strain over jumbled
fields from one line to the next; the order is defined in the log4j2
properties file.
The implementation utilizes log4j2's `StringMapMessage`.
This means that the application builds the log event as a map
and the log4j logic (the appender's layout) handles the format
internally. The layout, such as the set of printed fields and their
order, can be changed at runtime without restarting the node.
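For illustration, building such a flat log event with log4j2's `StringMapMessage` could look roughly like this; the field names below are made up and not the audit trail's actual schema:
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

class AuditLogSketch {
    private static final Logger LOGGER = LogManager.getLogger("audit");

    void logEvent(String eventType, String user, String origin) {
        // The appender's layout decides which fields are printed and in what order,
        // so the application only supplies a flat key/value map.
        LOGGER.info(new StringMapMessage()
            .with("event.type", eventType)
            .with("user.name", user)
            .with("origin.address", origin));
    }
}
```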