For get trained models, the `include_model_definition` parameter is now deprecated.
This commit emits a deprecation warning if that parameter is used and suggests that the caller use the replacement.
Backport #62825 to 7.x branch.
Today if a data stream is auto created, but an index with the same name as the
first backing index already exists, then internally that error is ignored,
with the result that later in the execution of the bulk request, the
bulk item fails because the data stream hasn't been auto created.
This situation can only occur if an index is created with the same name
as what will be the first backing index of a data stream, prior to the
creation of the data stream.
Co-authored-by: Dan Hermann <danhermann@users.noreply.github.com>
A few of us were talking about ways to speed up the `date_histogram`
using the index for the timestamp rather than the doc values. To do that
we'd have to pre-compute all of the "round down" points in the index. It
turns out that *just* precomputing those values speeds up rounding
fairly significantly:
```
Benchmark (count) (interval) (range) (zone) Mode Cnt Score Error Units
before 10000000 calendar month 2000-10-28 to 2000-10-31 UTC avgt 10 96461080.982 ± 616373.011 ns/op
before 10000000 calendar month 2000-10-28 to 2000-10-31 America/New_York avgt 10 130598950.850 ± 1249189.867 ns/op
after 10000000 calendar month 2000-10-28 to 2000-10-31 UTC avgt 10 52311775.080 ± 107171.092 ns/op
after 10000000 calendar month 2000-10-28 to 2000-10-31 America/New_York avgt 10 54800134.968 ± 373844.796 ns/op
```
That's a 46% speed up when there isn't a time zone and a 58% speed up
when there is.
This doesn't work for every time zone; specifically, zones that have two
midnights in a single day due to daylight savings time would produce wonky
results, so they don't get the optimization.
Second, this requires a few expensive computations up front to build the
transition array. And if the transition array is too large then we give
up and use the original mechanism, throwing away all of the work we did
to build the array. This seems appropriate for most usages of `round`,
but this change uses it for *all* usages of `round`. That seems ok for
now, but it might be worth investigating in a follow up.
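A minimal sketch of the idea, using hypothetical names rather than the actual `Rounding` code: precompute the "round down" points once, then answer each `round` call with a binary search instead of repeated date math.

```java
import java.util.Arrays;

/**
 * Illustration only: precompute every "round down" point covering the
 * expected range, then rounding a value is just a binary search for the
 * greatest precomputed point that is less than or equal to it.
 */
class ArrayRounding {
    private final long[] transitions; // ascending "round down" points, epoch millis

    ArrayRounding(long[] transitions) {
        this.transitions = transitions;
    }

    long round(long utcMillis) {
        int idx = Arrays.binarySearch(transitions, utcMillis);
        if (idx < 0) {
            idx = -2 - idx; // insertion point minus one: nearest point <= utcMillis
        }
        return transitions[idx];
    }
}
```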
I ran a macrobenchmark as well which showed an 11% performance
improvement. *BUT* the benchmark wasn't tuned for my desktop so it
overwhelmed it and might have produced "funny" results. I think it is
pretty clear that this is an improvement, but be aware that the
measurement is weird:
```
Before:
| Min Throughput | hourly_agg | 0.11 | ops/s |
| Median Throughput | hourly_agg | 0.11 | ops/s |
| Max Throughput | hourly_agg | 0.11 | ops/s |
| 50th percentile latency | hourly_agg | 650623 | ms |
| 90th percentile latency | hourly_agg | 821478 | ms |
| 99th percentile latency | hourly_agg | 859780 | ms |
| 100th percentile latency | hourly_agg | 864030 | ms |
| 50th percentile service time | hourly_agg | 9268.71 | ms |
| 90th percentile service time | hourly_agg | 9380 | ms |
| 99th percentile service time | hourly_agg | 9626.88 | ms |
| 100th percentile service time | hourly_agg | 9884.27 | ms |
| error rate | hourly_agg | 0 | % |
After:
| Min Throughput | hourly_agg | 0.12 | ops/s |
| Median Throughput | hourly_agg | 0.12 | ops/s |
| Max Throughput | hourly_agg | 0.12 | ops/s |
| 50th percentile latency | hourly_agg | 519254 | ms |
| 90th percentile latency | hourly_agg | 653099 | ms |
| 99th percentile latency | hourly_agg | 683276 | ms |
| 100th percentile latency | hourly_agg | 686611 | ms |
| 50th percentile service time | hourly_agg | 8371.41 | ms |
| 90th percentile service time | hourly_agg | 8407.02 | ms |
| 99th percentile service time | hourly_agg | 8536.64 | ms |
| 100th percentile service time | hourly_agg | 8538.54 | ms |
| error rate | hourly_agg | 0 | % |
```
If `track_total_hits=true` is used, the exact number of hits is returned, i.e. the value is effectively limitless rather than being capped at the default of 10,000.
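For illustration, a minimal sketch of requesting exact hit counts via the Java high level REST client; the index name is made up, and only the `track_total_hits` behaviour itself comes from this change:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Ask for an exact total hit count instead of the default lower bound of 10,000.
SearchSourceBuilder source = new SearchSourceBuilder()
    .query(QueryBuilders.matchAllQuery())
    .trackTotalHits(true); // exact count, no 10,000 cap
SearchRequest request = new SearchRequest("my-index").source(source);
```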
Co-authored-by: AndyHunt66 <andrew.hunt@elastic.co>
The `migrate` action will now configure the
`index.routing.allocation.include._tier_preference` setting to the corresponding
tiers. For the HOT phase it will configure `data_hot`, for the WARM phase it will
configure `data_warm,data_hot`, and for the COLD phase
`data_cold,data_warm,data_hot`.
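For illustration, a hedged sketch of what the resulting setting looks like for an index entering the WARM phase; the setting key comes from this change, while the builder usage is just the standard `Settings` API:

```java
import org.elasticsearch.common.settings.Settings;

// Illustration only: the tier preference the migrate action would
// configure for an index entering the WARM phase.
Settings warmTierPreference = Settings.builder()
    .put("index.routing.allocation.include._tier_preference", "data_warm,data_hot")
    .build();
```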
(cherry picked from commit 9dbf0e6f0c267e40c5bcfb568bb2254da103ae40)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Replace common Like and RLike queries that match all characters with
IsNotNull (exists) queries
Fixes #62585
(cherry picked from commit 4c23fad0468a9edd7325b06c6a96f7af37625dbf)
Closes #62466. Since we're still seeing occasional failures when
checking the GID of all files in the Docker image due to Elasticsearch
running in the background, instead run a new container without ES running
at all.
In case of more than 500 transforms, get and stats return paged results which can be requested using
page parameters. For >500 transforms, the count wasn't parsed out of the server response but taken from
the size of the list of transforms.
The change also adds client/server HLRC tests and fixes a wrong type for count in get.
Fixes #56245
We use the bundled jdk for unit, integ and packaging tests. Since
upgrading to jdk 15, centos-6 and oracle enterprise linux 6 have failed
due to versions of glibc no longer supported by the jdk. This commit
adds detection of the old glibc versions to gradle, and utilizes that
when deciding which jdk to use for tests.
relates #62709, closes #62635
The name `FieldFetcher` fits better with the 'fetch' terminology we use
elsewhere, for example `FetchFieldsPhase` and `ValueFetcher`.
This PR also moves the construction of the fetcher off the context and onto
`FetchFieldsPhase`, which feels like a more natural place for it, and fixes a
TODO in javadocs.
This test checks to see if the index has been created before version 6.4, in which
case index prefixes are unavailable and so it expects to see a span multi-term
wrapper. However, the production code doesn't bother with checking for versions,
because if the field in question is configured with index_prefixes then it knows that
it must have been created post 6.4 (you can't merge in a new index_prefixes
configuration).
This commit alters the test to remove the random version checks, as we know we
will always have a prefix field available in this scenario.
Fixes #58199
Backport #62766 to 7.x branch.
The bulk api caches the resolved concrete indices when resolving the user provided
index name into the actual index name. The validation that prevents write ops other
than create from being executed against a data stream was only performed if the result
wasn't cached; when the resolution was cached, the validation never occurred.
The validation would thus be skipped for all bulk items targeting a data stream after a create
operation for that same data stream. This commit ensures that the validation is always
performed for all bulk items (whether the concrete index resolution has been cached or
not cached).
Closes #62762
This adds a method to `Grok` that matches against a section of a
utf-8 byte array, identified by an offset and length:
```
Map<String, Object> captures(byte[] utf8Bytes, int offset, int length)
```
This'll be useful for the grok-flavored runtime fields because they
want to match against utf-8 encoded strings stored in a big array. And
joni already supports this.
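A rough usage sketch, assuming an already constructed `Grok` instance; only the offset-based `captures` call is from this change, everything else is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.elasticsearch.grok.Grok;

// Illustration: match only the second line of a larger utf-8 buffer,
// without first copying the slice into its own String or byte[].
static Map<String, Object> secondLineCaptures(Grok grok) {
    byte[] utf8 = "GET 200\nPOST 503\n".getBytes(StandardCharsets.UTF_8);
    int offset = "GET 200\n".length(); // where the second line starts
    int length = "POST 503".length();  // how many bytes to match
    return grok.captures(utf8, offset, length);
}
```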
When state persistence was first implemented for data frame analytics
we had the assumption that state would always fit in a single document.
However this is not the case any more.
This commit adds handling of state that spreads over multiple documents.
Backport of #62564
This fixes reindexing progress in the scenario where a DFA job that had not finished
reindexing is resumed (either because the user called stop and start or because the
job was reassigned in the middle of reindexing). Before the fix, reindexing progress
stayed at the value it had reached before the restart until it surpassed that value.
When we resume a data frame analytics job we want to preserve reindexing progress
and reset all other phases, except when reindexing was not completed.
In that case we delete the destination index and start reindexing
from scratch, so we need to reset reindexing progress too.
Backport of #62772
To better align the plugin naming with other mapper plugins under x-pack (e.g.
mapper-flattened) this PR changes the plugin name and the containing directory
to "mapper-version"
This change adds support for the recently introduced case insensitivity flag for
wildcard and prefix queries. Since version field values are encoded differently we
need to adapt our own AutomatonQuery variation to add both cases if case insensitivity
is turned on.
Most of our field types have the same implementation for their `existsQuery` method which relies on doc_values if present, otherwise it queries norms if available or uses a term query against the _field_names meta field. This standard implementation is repeated in many different mappers.
There are field types that only query doc_values, because they always have them, and field types that always query _field_names, because they never have norms nor doc_values. We could apply the same standard logic to all of these field types as `MappedFieldType` has the knowledge about what data structures are available.
This commit introduces a standard implementation that does the right thing depending on the data structure that is available. With that only field types that require a different behaviour need to override the existsQuery method.
At the same time, this no longer forces subclasses to override `existsQuery`, which could be forgotten when needed. To address this we introduced a new test method in `MapperTestCase` that verifies the `existsQuery` being generated and its consistency with the available data structures.
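A minimal sketch of that standard logic; the method shape here is hypothetical, but the three data structures it falls back through are the ones described above:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.NormsFieldExistsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Pick the cheapest available data structure that can answer "does this field exist?".
static Query existsQuery(String field, boolean hasDocValues, boolean hasNorms) {
    if (hasDocValues) {
        return new DocValuesFieldExistsQuery(field);
    } else if (hasNorms) {
        return new NormsFieldExistsQuery(field);
    } else {
        // fall back to a term query against the _field_names metadata field
        return new TermQuery(new Term("_field_names", field));
    }
}
```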
* Make for each processor resistant to field modification (#62791)
This change provides a consistent view of the field that the foreach processor is iterating over. That prevents it from going into an infinite loop and putting great pressure on the cluster (see the sketch below).
Closes #62790
* fix compilation
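A hedged sketch of the idea, not the actual `ForEachProcessor` code: iterate over a copy of the values, so a wrapped processor that appends to the original field can no longer extend the loop indefinitely.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration: iterate over a snapshot of the field's values so that
// modifications made by the wrapped processor cannot feed the loop.
List<Object> values = new ArrayList<>(List.of("a", "b"));
List<Object> snapshot = new ArrayList<>(values); // defensive copy
for (Object value : snapshot) {
    values.add(value + "-copy"); // simulates a processor mutating the field
}
// The loop still ran exactly twice, over "a" and "b".
```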
This extracts the configuration for extracting values from a groked
string when building the grok expression to do two things:
1. Create a method exposing that configuration on `Grok` itself which
will be used for the `grok` flavored runtime fields.
2. Marginally speed up extracting grok values by skipping a little
string manipulation.
Eclipse was confused for two reasons:
1. `:x-pack:plugin` depended on itself.
2. `ql`, `sql`, and `eql` couldn't see some methods.
I fixed problem 1 by only adding the "depends on itself" configuration
outside of eclipse. I fixed problem 2 by making a `test` sub-project in
`ql` that contains test utilities and depending on those where possible.
This commit adds a dedicated threadpool for system index write
operations. The dedicated resources for system index writes serve as
a means to ensure that user activity does not block important system
operations from occurring, such as the management of users and roles.
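For illustration, a hedged sketch of dispatching a write to a dedicated pool; the pool name `system_write` is an assumption, and none of this code is taken from the change itself:

```java
import java.util.concurrent.ExecutorService;
import org.elasticsearch.threadpool.ThreadPool;

// Illustration only: submit a system index write to its own executor so it
// cannot be starved by a saturated generic write pool.
static void submitSystemWrite(ThreadPool threadPool, Runnable write) {
    ExecutorService executor = threadPool.executor("system_write"); // assumed pool name
    executor.execute(write);
}
```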
Backport of #61655
Java 15 requires at least glibc 2.14, but we support older Linux OSs that ship with older versions. Rather than continue to ship Java 14, which is now EOL and therefore unsupported, ES will detect this situation and print a helpful message, instead of the cryptic error that would otherwise be printed. Users on older OSs will have to set JAVA_HOME instead of using the bundled JVM.
This doesn't affect v8.0.0 because these older Linux OSs will not be supported, and all the supported ones have glibc 2.14.
* [ML] changing to not use global bulk indexing parameters in conjunction with add(object) calls (#62694)
* [ML] changing to not use global bulk indexing parameters in conjunction with add(object) calls
Global parameters, other than the global index, are ignored for internal callers in certain cases.
This happens if the internal caller adds requests via the following methods:
```
- BulkRequest#add(IndexRequest)
- BulkRequest#add(UpdateRequest)
- BulkRequest#add(DocWriteRequest)
- BulkRequest#add(DocWriteRequest[])
```
It is better to specifically set the desired parameters on the requests before they are added
to the bulk request object.
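A hedged sketch of the difference; the index and pipeline names are made up, and the APIs used are the standard `BulkRequest`/`IndexRequest` ones rather than anything added by this change:

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

// Instead of relying on a global parameter on the bulk request ...
BulkRequest global = new BulkRequest().pipeline("my-pipeline"); // may be ignored for internal callers

// ... set the parameter on each request before adding it:
IndexRequest doc = new IndexRequest("my-index")
    .id("1")
    .setPipeline("my-pipeline") // explicit per-request pipeline
    .source("{\"field\":\"value\"}", XContentType.JSON);
BulkRequest explicit = new BulkRequest().add(doc);
```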
This commit addresses this issue for the ML plugin
* unmuting test
Since `=` is rarely used and is undocumented, we remove its support for
equality comparisons, keeping `==` as the only option. `=` is now only
used for assignments like in `maxspan=10m`.
Closes: #62650
(cherry picked from commit ad5ae4d887b5c2feca2d0e874d7bdf738e3fd54e)
This reworks the code around grok's built-in patterns to name things
more like the rest of the code. It's not a big deal, but I'm just more
used to having `public static final` constants in SHOUTING_SNAKE_CASE.
Closes #61660. When ordering shards for recovery, ensure system index shards are
ordered first so that their recovery will be started first.
Note that I rewrote PriorityComparatorTests to use IndexMetadata instead of its
local IndexMeta POJO.
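A rough sketch of the ordering intent, not the actual PriorityComparator code; `IndexMetadata#isSystem` is assumed to be available on this branch, and the existing priority/creation-date ordering would still apply after it:

```java
import java.util.Comparator;
import org.elasticsearch.cluster.metadata.IndexMetadata;

// Illustration only: system index shards sort first; the usual
// priority / creation-date ordering is applied afterwards.
Comparator<IndexMetadata> systemFirst =
    Comparator.comparing((IndexMetadata meta) -> meta.isSystem() == false);
```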