If the connection between clusters is lost or the leader cluster
is offline, CCR shard-follow tasks can stop with "no seed node
left". CCR should retry on this error.
We switched from the Oracle JDK to AdoptOpenJDK to rely on the notarization
provided by AdoptOpenJDK on macOS. However, that notarization still had
issues, and we currently do our own notarization of the entire
distribution, including the JDK. The recent bump to JDK 15 has revealed
OpenJDK to be lax in maintaining support for older systems. Since the
notarization is no longer an issue, this PR moves the bundled JDK back
to Oracle, in order to continue supporting those older systems affected
by AdoptOpenJDK 15.
relates #62709
The copy constructors previously used were hard to read, and the exact state changes
were not obvious at all.
Refactored those into a number of named constructors instead, added additional assertions,
and moved the snapshot abort logic into `SnapshotsInProgress` (a sketch of the pattern follows).
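A hedged sketch of the pattern (names are illustrative, not the actual `SnapshotsInProgress` API):

```java
// Illustrative sketch: a named constructor states the intended state change,
// where a copy constructor taking flags would not.
final class SnapshotEntry {
    final String snapshot;
    final boolean aborted;

    private SnapshotEntry(String snapshot, boolean aborted) {
        this.snapshot = snapshot;
        this.aborted = aborted;
    }

    static SnapshotEntry started(String snapshot) {
        return new SnapshotEntry(snapshot, false);
    }

    // The reader sees exactly which state transition happens here.
    SnapshotEntry abort() {
        assert aborted == false : "cannot abort twice";
        return new SnapshotEntry(snapshot, true);
    }
}
```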
Since `match` (for regex matching) is not currently in use, remove it for
now.
Close #63263
(cherry picked from commit 6abd531cf457f3c5686f59709647bed3276e3c6b)
Make EQL case-sensitive by default and adapt some of the string functions.
Remove the case-sensitive option from the Between string function.
Use the `case_insensitive` option in term and wildcard queries (see the sketch below).
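A minimal sketch of the option from the Java side (hedged: the `caseInsensitive` builder method is assumed from the 7.10-era query builder API):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.TermQueryBuilder;

final class CaseInsensitiveQueryExample {
    // Hedged sketch: a term query that matches regardless of case, using the
    // case_insensitive option mentioned above.
    static TermQueryBuilder userTerm(String value) {
        return QueryBuilders.termQuery("user.keyword", value).caseInsensitive(true);
    }
}
```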
(cherry picked from commit 7550e0664c8c2f1f13519036c759b1e76345551f)
In #63242 we changed how we build `nextRoundingValue` to, well, be
correct. But the old `org.elasticsearch.common.rounding.Rounding`
implementation didn't get the fix. Which is fine, because that method
on that implementation doesn't receive any use outside of
tests. In fact, it is entirely removed in master. Anyway, now that the
two implementations produce different values we really can't go around
asserting that they produce the same values, now can we? Well, we were!
This skips that assertion if we know `nextRoundingValue` is implemented
differently.
Closes #63256
* Setting `script.painless.regex.enabled` has a new option,
`use-factor`, which is now the default. It enables regular
expressions but limits the complexity of the regular
expressions.
In addition to `use-factor`, the setting can be `true`, as
before, which enables regular expressions without limiting them.
`false` totally disables regular expressions, which was the
old default.
* New setting `script.painless.regex.limit-factor`. This limits
regular expression complexity by limiting the number of characters
a regular expression can consider based on input length.
The default is `6`, so a regular expression can consider
`6 * input length` characters. With input
`foobarbaz` (length `9`), for example, the regular expression
can consider `54` (`6 * 9`) characters.
This reduces the impact of exponential backtracking in Java's
regular expression engine (see the sketch after this list).
* Add the `@inject_constant` annotation to the whitelist.
This annotation signals that a compiler setting will
be injected at the beginning of a whitelisted method.
The format is `argnum=settingname`:
`1=foo_setting 2=bar_setting`.
Argument numbers must start at one and must be sequential.
* Augment
`Pattern.split(CharSequence)`,
`Pattern.split(CharSequence, int)`,
`Pattern.splitAsStream(CharSequence)`, and
`Pattern.matcher(CharSequence)`
to take the value of `script.painless.regex.limit-factor` as an
injected parameter, limiting as explained above when this
setting is in use.
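A minimal sketch of one possible limiting mechanism (illustrative only; the actual Painless implementation may differ):

```java
// Illustrative sketch: a CharSequence wrapper that throws once the regex
// engine has looked at more than limitFactor * input.length() characters,
// cutting off exponential backtracking.
final class LimitedCharSequence implements CharSequence {
    private final CharSequence wrapped;
    private final int limit;
    private int reads;

    LimitedCharSequence(CharSequence wrapped, int limitFactor) {
        this.wrapped = wrapped;
        this.limit = limitFactor * wrapped.length();
    }

    @Override
    public char charAt(int index) {
        if (++reads > limit) {
            throw new IllegalStateException("regex considered too many characters");
        }
        return wrapped.charAt(index);
    }

    @Override
    public int length() {
        return wrapped.length();
    }

    @Override
    public CharSequence subSequence(int start, int end) {
        return wrapped.subSequence(start, end);
    }
}
```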
Fixes: #49873
Backport of: 93f29a4
This change makes Location a final member of IRNode, as opposed to one that can possibly change. This
ensures that all IR nodes have a Location for error information upon creation that cannot be updated,
so each node can be traced back to where it originally came from.
We only ever use this with `XContentParser`, so there is no need to make inlining
worse by forcing the lambda and hence a dynamic callsite here.
=> Extracted the exception-formatting code path, which is likely very cold,
to a separate method and removed the lambda usage in hot loops by simplifying
the signature here (sketched below).
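Roughly the shape of the change, with hypothetical names (a sketch, not the actual parsing code):

```java
final class ColdPathExample {
    // Sketch of the pattern described above: keep the hot method small and
    // lambda-free, and move the allocating, rarely-taken exception-formatting
    // work into a separate cold method.
    static int parsePositiveInt(String token) {
        int value = Integer.parseInt(token);
        if (value < 0) {
            throw invalidValue(token); // cold path, extracted
        }
        return value;
    }

    // Building the message allocates; keeping it out of the hot method keeps
    // that method small enough for the JIT to inline at call sites.
    private static IllegalArgumentException invalidValue(String token) {
        return new IllegalArgumentException("expected a positive integer but got [" + token + "]");
    }
}
```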
Small refactoring to shorten the diff with the clone logic in #61839:
* Since clones will create a different kind of shard state update that
isn't the same request sent by the snapshot shards service (and cannot be
the same request because we have no `ShardId`), base the shard state updates
on a different class that can be extended to be general enough to accommodate
shard clones as well.
* Make the update executor a singleton (we can't make it an inline lambda, as that
would break CS update batching because the executor is used as a map key, but
this change still makes it crystal clear that there's no internal state in the
executor; see the sketch below).
* Make shard state update responses a singleton (we can't use TransportResponse.Empty
because we need an action response, but this still makes it clear that there's no
actual response content here).
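A self-contained illustration of the map-key point in the second bullet (generic names, not the actual `ClusterStateTaskExecutor` API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Pending tasks are grouped for batching by executor instance, so the
// executor must be a shared singleton. A fresh lambda per submission would
// be a distinct map key every time and would silently disable batching.
final class BatchingByExecutorKey {
    static final BiConsumer<String, List<String>> EXECUTOR =
        (state, tasks) -> System.out.println("applying " + tasks.size() + " tasks to " + state);

    public static void main(String[] args) {
        Map<BiConsumer<String, List<String>>, List<String>> pending = new HashMap<>();
        pending.computeIfAbsent(EXECUTOR, k -> new ArrayList<>()).add("shard-update-1");
        pending.computeIfAbsent(EXECUTOR, k -> new ArrayList<>()).add("shard-update-2");
        System.out.println(pending.size()); // 1: both tasks batch under the same key
    }
}
```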
* Just some obvious drying up of these super complex tests.
* Mainly just shortening the diff of #61839 here by moving test utilities
to the abstract test case.
Also, making use of the now available functionality to simplify existing tests
and improve logging in them.
* Add PGSync as a new community supported tool (#62788)
* Removing errant space in Kafka link.
Co-authored-by: Tolu Aina <7848930+toluaina@users.noreply.github.com>
In the latest version of the GCS SDK the `404` exception is wrapped
in an `IOException`, so it no longer passes through the unwrapping added in
the previous fix #63168.
We can't handle `IOException` differently here now that GCS uses it
for `404`s, so I adjusted the exception unwrapping accordingly.
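A hedged sketch of the adjusted unwrapping (assuming the SDK's `StorageException` and its `getCode()`; the real repository code may differ):

```java
import com.google.cloud.storage.StorageException;

// Walk the whole cause chain instead of special-casing IOException, because
// the SDK may now wrap the StorageException (e.g. for a 404) in an IOException.
final class GcsErrorUnwrapExample {
    static boolean isNotFound(Throwable t) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            if (cause instanceof StorageException && ((StorageException) cause).getCode() == 404) {
                return true;
            }
        }
        return false;
    }
}
```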
Closes #63091
"interval" style roundings were implementing `nextRoundingValue` in a
fairly inconsistent way - it'd produce a value, but sometimes that
value would be the same as the previous rounding value. This makes it
consistently the next value that `rounding` would make.
A few of us were talking about ways to speed up the `date_histogram`
using the index for the timestamp rather than the doc values. To do that
we'd have to pre-compute all of the "round down" points in the index. It
turns out that *just* precomputing those values speeds up rounding
fairly significantly:
```
Benchmark (count) (interval) (range) (zone) Mode Cnt Score Error Units
before 10000000 calendar month 2000-10-28 to 2000-10-31 UTC avgt 10 96461080.982 ± 616373.011 ns/op
before 10000000 calendar month 2000-10-28 to 2000-10-31 America/New_York avgt 10 130598950.850 ± 1249189.867 ns/op
after 10000000 calendar month 2000-10-28 to 2000-10-31 UTC avgt 10 52311775.080 ± 107171.092 ns/op
after 10000000 calendar month 2000-10-28 to 2000-10-31 America/New_York avgt 10 54800134.968 ± 373844.796 ns/op
```
That's a 46% speed up when there isn't a time zone and a 58% speed up
when there is.
This doesn't work for every time zone; specifically, time zones that have
two midnights in a single day due to daylight saving time would produce
wonky results, so they don't get the optimization.
Second, this requires a few expensive computations up front to make the
transition array. And if the transition array is too large then we give
up and use the original mechanism, throwing away all of the work we did
to build the array. This seems appropriate for most usages of `round`,
but this change uses it for *all* usages of `round`. That seems ok for
now, but it might be worth investigating in a follow up.
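To make the idea concrete, here is a minimal sketch of the array-based rounding (illustrative names, not the actual `Rounding` implementation):

```java
import java.util.Arrays;

// Once the round-down points for the requested range are precomputed,
// rounding a timestamp is a binary search instead of repeated time-zone
// arithmetic.
final class ArrayRoundingSketch {
    private final long[] roundDownPoints; // sorted round-down points covering the range

    ArrayRoundingSketch(long[] roundDownPoints) {
        this.roundDownPoints = roundDownPoints;
    }

    long round(long utcMillis) {
        assert utcMillis >= roundDownPoints[0] : "timestamp below the precomputed range";
        int idx = Arrays.binarySearch(roundDownPoints, utcMillis);
        if (idx < 0) {
            idx = -2 - idx; // insertion point minus one: greatest point <= utcMillis
        }
        return roundDownPoints[idx];
    }
}
```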
I ran a macrobenchmark as well, which showed an 11% performance
improvement. *BUT* the benchmark wasn't tuned for my desktop, so it
overwhelmed it and might have produced "funny" results. I think it is
pretty clear that this is an improvement, but know that the measurement is
weird:
```
Before:
| Min Throughput | hourly_agg | 0.11 | ops/s |
| Median Throughput | hourly_agg | 0.11 | ops/s |
| Max Throughput | hourly_agg | 0.11 | ops/s |
| 50th percentile latency | hourly_agg | 650623 | ms |
| 90th percentile latency | hourly_agg | 821478 | ms |
| 99th percentile latency | hourly_agg | 859780 | ms |
| 100th percentile latency | hourly_agg | 864030 | ms |
| 50th percentile service time | hourly_agg | 9268.71 | ms |
| 90th percentile service time | hourly_agg | 9380 | ms |
| 99th percentile service time | hourly_agg | 9626.88 | ms |
|100th percentile service time | hourly_agg | 9884.27 | ms |
| error rate | hourly_agg | 0 | % |
After:
| Min Throughput | hourly_agg | 0.12 | ops/s |
| Median Throughput | hourly_agg | 0.12 | ops/s |
| Max Throughput | hourly_agg | 0.12 | ops/s |
| 50th percentile latency | hourly_agg | 519254 | ms |
| 90th percentile latency | hourly_agg | 653099 | ms |
| 99th percentile latency | hourly_agg | 683276 | ms |
| 100th percentile latency | hourly_agg | 686611 | ms |
| 50th percentile service time | hourly_agg | 8371.41 | ms |
| 90th percentile service time | hourly_agg | 8407.02 | ms |
| 99th percentile service time | hourly_agg | 8536.64 | ms |
|100th percentile service time | hourly_agg | 8538.54 | ms |
| error rate | hourly_agg | 0 | % |
```
Referencing a project instance during task execution is discouraged by
Gradle and should be avoided, e.g. it is incompatible with Gradle's
incubating configuration cache. Instead, there are services available to handle
archive and filesystem operations in task actions (sketched below).
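For example, a task can use the injected `FileSystemOperations` service instead of a `Project` reference (a hedged sketch; the task name and path are illustrative):

```java
import javax.inject.Inject;
import org.gradle.api.DefaultTask;
import org.gradle.api.file.FileSystemOperations;
import org.gradle.api.tasks.TaskAction;

// The injected FileSystemOperations service replaces project.delete(...) in
// the task action, so no Project instance is touched at execution time.
public abstract class CleanScratchDirTask extends DefaultTask {
    @Inject
    protected abstract FileSystemOperations getFileSystemOperations();

    @TaskAction
    public void clean() {
        getFileSystemOperations().delete(spec -> spec.delete("build/scratch"));
    }
}
```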
Brings us one step closer to #57918
This adds the new field `feature_importance_baseline` and allows it to optionally be included in the model's metadata.
Related to: https://github.com/elastic/ml-cpp/pull/1522
Add a unit test to verify that an identifier surrounded with backquotes
is not valid syntax for an eventType value, as eventType is
semantically a string literal and not a field identifier.
Follows: #63169
(cherry picked from commit ff12c1340b3890ac52251f31259fa9a719d9eacc)
As long as `bestEffortConsistency` is `true`, the value of `latestKnownRepoGen`
can be updated as a result of reads. We can only assert that `latestKnownRepoGen`
and cluster state move in lock-step if `bestEffortConsistency` was `false` before
updating the metadata generation as well as after.
Closes #62877
By default, ignore unavailable indices (same as ES) and verify that
allowNoIndices is set to false, since at least one index is required
for validating the query (see the sketch below).
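A hedged sketch of those options (assuming the `IndicesOptions.fromOptions` factory; the validation wiring itself may differ):

```java
import org.elasticsearch.action.support.IndicesOptions;

final class ValidationOptionsExample {
    // Ignore unavailable indices, but fail when no index matches at all
    // (allowNoIndices = false), since at least one index is required to
    // validate the query.
    static IndicesOptions validationIndicesOptions() {
        return IndicesOptions.fromOptions(
            true,   // ignoreUnavailable
            false,  // allowNoIndices
            true,   // expandToOpenIndices
            false   // expandToClosedIndices
        );
    }
}
```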
Fix #62986
(cherry picked from commit fd75ac27223cd1b699b8d9c311dc401a39f9e0c8)
Do not filter by tiebreaker while searching sequence matches as
it's not monotonic and thus can filter out valid data.
Fix #62781
(cherry picked from commit 4d62198df70f3b70f8b6e7730e888057652c18a8)
Point-in-Time queries cannot be run against individual indices, only
against all of them. Thus all PIT queries move their index from the
request level into a filter, so this condition is fulfilled while keeping
the query scoped accordingly.
Fix #63132
(cherry picked from commit c8eb4f724d5dcc0fcc172c6219ecfbc1dc1fbbae)
There is a small race when processing the cluster state that is used to
establish a newly elected leader as master of the cluster: it can pick the term
in its master state update task from a different (newer) election. This trips
an assertion in `Coordinator.publish(...)` where we claim that the term on the
state allows us to uniquely identify the pre-state, but this isn't so. There are no
bad consequences of this race since such a publication fails later on anyway.
This PR fixes things so that the assertion holds true, improving the handling
of terms during cluster state processing by associating each master state
update task that is used to establish a newly elected leader with the correct
corresponding term from its election. It also explicitly handles the case where
the pre-state that is used as base state has already superseded the current
state. As a nice side effect, join batching now only happens based on the same
term.
Closes #61437
The iteration over `timeoutClusterStateListeners` starts while the CS applier
thread is still running. This can lead to entries being added to it that never
get their listener resolved on shutdown, thus leaking that listener, as observed
in a stuck test in #62863.
Since `listener.onClose()` is idempotent, we can just call it if we run into a stopped service
on the CS thread to avoid the race with certainty (because the iteration in `doStop` starts after
the stopped state has been set); see the sketch below.
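A self-contained sketch of that pattern (illustrative names, not the actual `ClusterApplierService` code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// onClose() is idempotent, so calling it both on registration and from stop()
// is safe and closes the race window described above.
final class TimeoutListenerRegistry {
    private final AtomicBoolean stopped = new AtomicBoolean();
    private final Set<Runnable> listeners = ConcurrentHashMap.newKeySet();

    void register(Runnable onClose) {
        listeners.add(onClose);
        if (stopped.get()) {
            // stop() may have iterated before we added ourselves; since onClose
            // is idempotent, we can just invoke it again here.
            onClose.run();
        }
    }

    void stop() {
        stopped.set(true); // set before iterating, mirroring doStop()
        listeners.forEach(Runnable::run);
    }
}
```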
Closes #62863
For runtime fields, we will want to do all search-time interaction with
a field definition via a MappedFieldType, rather than a FieldMapper, to
avoid interfering with the logic of document parsing. Currently, fetching
values for runtime scripts and building top hits responses both need to
call a method on FieldMapper. This commit moves this method to
MappedFieldType, incidentally simplifying the current call sites and freeing
us up to implement runtime fields as pure MappedFieldType objects.
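A deliberately simplified sketch of the resulting shape (not the real Elasticsearch signatures):

```java
import java.util.List;
import java.util.Map;

// Value fetching lives on the field type, so a runtime field can be a pure
// MappedFieldType with no backing FieldMapper.
interface ValueFetcher {
    List<Object> fetchValues(Map<String, Object> source);
}

abstract class MappedFieldTypeSketch {
    abstract String name();

    // Previously a FieldMapper method; moving it here keeps document-parsing
    // concerns out of search-time value fetching.
    abstract ValueFetcher valueFetcher(String format);
}
```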
This commit adds a tier preference when mounting a searchable
snapshot. This sets a preference that a searchable snapshot is mounted
to a node with the cold role if one exists, then the warm role, then the
hot role, assuming that no other allocation rules are in place. This
means that by default, searchable snapshots are mounted to a node with
the cold role.
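A hedged sketch of how that preference could be expressed as index settings (the setting key is the data-tier allocation filter; the exact wiring inside the mount API may differ):

```java
import org.elasticsearch.common.settings.Settings;

final class TierPreferenceExample {
    // The cold -> warm -> hot preference described above, as an allocation
    // setting on the mounted index.
    static Settings coldFirstTierPreference() {
        return Settings.builder()
            .put("index.routing.allocation.include._tier_preference", "data_cold,data_warm,data_hot")
            .build();
    }
}
```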
Note that depending on how we implement frozen functionality of
searchable snapshots (not pre-cached/not fully-cached), we might need to
adjust this to prefer frozen if mounting a not pre-cached/fully-cached
searchable snapshot versus mounting a pre-cached/fully-cached searchable
snapshot. This is a later concern since neither this nor the frozen role
are implemented currently.
We need to complete the search before closing the iterator, which
internally closes the point in time; otherwise, the search will fail
with a missing context error.
Closes #62451
Backport #63182 to 7.x branch.
The `randomEnrichPolicy(...)` helper method stores the policy and creates the source indices.
If a source index already exists because it was created for a random policy created earlier,
then creating the source index fails, but that failure is ignored and the test continues. However, if the policy
has a match field that doesn't exist in the previous random policy, then the mapping is never updated
and the put policy API fails because the match field can't be found.
This PR fixes that by executing a put mapping call in the event that the source index already exists.
Closes #63126
* [ML] optimize delete expired snapshots (#63134)
When deleting expired snapshots, we do an individual delete action per snapshot per job.
We should instead gather the expired snapshots and delete them in a single call.
This commit achieves this, and a side effect is that there is less audit log spam on nightly cleanup.
Closes https://github.com/elastic/elasticsearch/issues/62875