Running any randomized testing task within Elasticsearch currently fails
if a project has zero tests. This was supposed to be overrideable, but
it was always set to 'fail', and the system property to override was
passed down to the test runner, but never read there. This commit
changes the `ifNoTests` setting passed to the randomized runner so that it
is read from a system property, continuing to default to 'fail'.
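A minimal sketch of the idea; the property key `tests.ifNoTests` is illustrative,
not necessarily the key the build actually uses:
```java
// Hedged sketch: honor the system property override instead of hard-coding 'fail'.
// The property name "tests.ifNoTests" is illustrative only.
public class IfNoTestsSetting {
    static String resolveIfNoTests() {
        return System.getProperty("tests.ifNoTests", "fail"); // still defaults to 'fail'
    }

    public static void main(String[] args) {
        System.out.println("ifNoTests=" + resolveIfNoTests());
    }
}
```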
This properly registers the `XPackFeatureSetUsage` for Logstash and
tests it by invoking the Usage API in a Monitoring QA test.
Without this registration, the test consistently fails.
Original commit: elastic/x-pack-elasticsearch@2e8f2376fd
This switches the underlying byte output representation used by default in
`XContentBuilder` from `BytesStreamOutput` to a `ByteArrayOutputStream` (an
`OutputStream` can still be specified manually).
This is groundwork to allow us to decouple `XContent*` from the rest of the ES
core code so that it may be factored into a separate jar.
Since `BytesStreamOutput` was not using the recycling instance of `BigArrays`,
this should not affect the circuit breaking capabilities elsewhere in the
system.
Relates to #28504
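For illustration, a sketch of building JSON content over a plain
`ByteArrayOutputStream`, assuming the `XContentFactory.jsonBuilder(OutputStream)`
overload (exact signatures vary by version):
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Hedged sketch: the builder writes into a plain ByteArrayOutputStream rather than
// a BytesStreamOutput, so no BigArrays instance is involved.
class BuilderOverByteArray {
    static byte[] buildExample() throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (XContentBuilder builder = XContentFactory.jsonBuilder(out)) {
            builder.startObject().field("field", "value").endObject();
        }
        return out.toByteArray();
    }
}
```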
* Factor UnknownNamedObjectException into its own class
This moves the inner class `UnknownNamedObjectException` from
`NamedXContentRegistry` into a top-level class. This is so that
`NamedXContentRegistry` doesn't have to depend on StreamInput and StreamOutput.
Relates to #28504
This is related to #27260. The transport-nio plugin needs socket
permissions to operate as a transport. This commit gives it these
permissions in the policy file.
We had a Usage class before, but weren't registering it with XPack.
It would be nice to add more usage info in the future (like the running
jobs on each node), but it is unclear how best to do that since we'd need
to filter through the list of allocated tasks.
Original commit: elastic/x-pack-elasticsearch@5207d2758b
Up to now a job update that reduces the model memory limit
was not allowed. However, there could definitely be cases
where reducing the limit is necessary and reasonable.
This commit makes it possible to decrease the limit as long
as it does not go below the current memory usage. We obtain
the latter from the model size stats.
The conditions under which updating the model_memory_limit
is not allowed are now (see the sketch after this list):
- the job is open
- the new value is less than the latest model_size_stats.model_bytes
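A minimal sketch of that validation; the method and parameter names are
illustrative, not the actual X-Pack ML code:
```java
// Hedged sketch: reject the update if the job is open or if the new limit would
// fall below the memory usage reported by the latest model size stats.
class ModelMemoryLimitValidator {
    static void validateUpdate(boolean jobIsOpen, long latestModelBytes, long newLimitBytes) {
        if (jobIsOpen) {
            throw new IllegalArgumentException(
                "model_memory_limit cannot be updated while the job is open");
        }
        if (newLimitBytes < latestModelBytes) {
            throw new IllegalArgumentException(
                "model_memory_limit cannot be decreased below the current usage ["
                    + latestModelBytes + " bytes] reported by model_size_stats");
        }
    }
}
```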
relates elastic/x-pack-elasticsearch#2461
Original commit: elastic/x-pack-elasticsearch@5b35923590
Fixed bugs that were exposed by this:
* Duplicate query leaves were not detected in a multi-level boolean query.
* Tracking fields for numeric range queries did not work properly.
* The sorting that was used to find the less restrictive clauses in a
disjunction query did not work either.
This reverts commit f057fc294a.
The rescorer does not re-sort the collapsed values inside the top docs
during rescoring. For this reason, the Lucene rescorer is not compatible
with collapsing.
Relates #27243
This commit fixes the test progress logging so that it does not produce an NPE
when no tests are run. The onQuit method is always called, but onStart
would not be called if no tests match the test patterns.
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily they are both
encoded exactly the same way, so no backwards-compatibility layer is needed.
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/28927
Original commit: elastic/x-pack-elasticsearch@f1c0928490
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily they are both
encoded exactly the same way, so no backwards-compatibility layer is needed.
Relates to #28504
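For reference, a hedged sketch of the generic enum serialization, assuming the
`StreamOutput#writeEnum` / `StreamInput#readEnum` helpers and 6.x package names:
```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentType;

// Hedged sketch: the enum goes over the wire encoded the same way the old
// XContentType#writeTo did, per the commit message above.
class XContentTypeWireFormat {
    static void write(StreamOutput out, XContentType type) throws IOException {
        out.writeEnum(type); // replaces XContentType#writeTo
    }

    static XContentType read(StreamInput in) throws IOException {
        return in.readEnum(XContentType.class); // replaces XContentType#readFrom
    }
}
```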
* Remove BytesRef usage from XContentParser and its subclasses
This removes all the BytesRef usage from XContentParser in favor of directly
returning a CharBuffer (this was originally what was returned; it was just
immediately wrapped in a BytesRef).
Relates to #28504
* Rename method after Ryan's feedback
This commit fixes the Javadocs for the class o.e.x.r.j.RollupIndexer as
these Javadocs were referring to instance methods on the class
incorrectly (using a this prefix).
Original commit: elastic/x-pack-elasticsearch@fdcc7338f9
The original example resulted in a 400 error because the example date used `-` separators instead of the `.` separators expected by the format.
```
failed to parse date field [2001-01-01] with format [YYYY.MM.dd]
```
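A `java.time` analogue of the mismatch (hedged; the docs use the Joda-style
pattern `YYYY.MM.dd`, for which `yyyy.MM.dd` is the closest `java.time` equivalent):
```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Illustration only: a '-'-separated value cannot be parsed with a '.'-separated pattern.
public class DateFormatMismatch {
    public static void main(String[] args) {
        DateTimeFormatter dotSeparated = DateTimeFormatter.ofPattern("yyyy.MM.dd");
        try {
            LocalDate.parse("2001-01-01", dotSeparated);
        } catch (DateTimeParseException e) {
            System.out.println("parse failed, as in the 400 above: " + e.getMessage());
        }
        System.out.println(LocalDate.parse("2001.01.01", dotSeparated)); // consistent separators parse fine
    }
}
```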
Adds a usage example of the JLH score used in the significant terms aggregation.
All other methods for calculating the significance score already have such an example.
Closes #28513
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` instances in the enclosing (or a new) try-with-resources block.
This ensures the `BytesReference.streamInput()` is closed.
Relates to elastic/x-pack-elasticsearch#28504
Original commit: elastic/x-pack-elasticsearch@7546e3b4d4
* Wrap stream passed to createParser in try-with-resources
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` instances in the enclosing (or a new) try-with-resources block.
This ensures the `BytesReference.streamInput()` is closed.
Relates to #28504
* Use try-with-resources instead of closing in a finally block
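A hedged sketch of the pattern, assuming a `createParser` overload that takes the
registry and an `InputStream` (exact signatures vary across versions):
```java
import java.io.IOException;
import java.io.InputStream;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

// Hedged sketch: both the stream returned by streamInput() and the parser built on
// top of it are closed when the try block exits.
class ParseWithTryWithResources {
    static void parse(BytesReference bytes, NamedXContentRegistry registry) throws IOException {
        try (InputStream stream = bytes.streamInput();
             XContentParser parser = XContentType.JSON.xContent().createParser(registry, stream)) {
            // ... consume the parser here ...
        }
    }
}
```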
This change ensures that we ignore terms removed from the analysis rather than returning a match_no_docs query for the part
that contains the stop word. For instance, a query like "the AND fox" should ignore "the" if it is considered a stop word,
instead of adding a match_no_docs query.
This change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such a case, if `analyze_wildcard` is true and `the`
is considered a stop word, this part of the query was rewritten into a match_no_docs query. Since it is a prefix query, this change forces the prefix query
on `the` even if it is removed from the analysis.
Fixes #28855
Fixes #28856
Pruning tombstones is quite expensive since we have to walk through all
deletes in the live version map and acquire a lock on every value, even when
it is impossible to prune it. This change adds a pre-check of whether a delete
is old enough to prune; if not, it skips acquiring the lock.
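A minimal sketch of the pre-check, with illustrative names rather than the
actual engine internals:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch: only acquire the per-entry lock for tombstones that already look
// old enough to prune; young tombstones are skipped without locking.
class TombstonePruner {
    static final class Tombstone {
        final long timestampMillis;
        final ReentrantLock lock = new ReentrantLock();
        Tombstone(long timestampMillis) { this.timestampMillis = timestampMillis; }
    }

    final Map<String, Tombstone> tombstones = new ConcurrentHashMap<>();

    void prune(long maxTimestampToPrune) {
        for (Map.Entry<String, Tombstone> entry : tombstones.entrySet()) {
            Tombstone delete = entry.getValue();
            if (delete.timestampMillis > maxTimestampToPrune) {
                continue; // too recent to prune: skip without taking the lock
            }
            delete.lock.lock(); // only lock entries that passed the cheap pre-check
            try {
                // re-check under the lock, then remove if still prunable
                if (delete.timestampMillis <= maxTimestampToPrune) {
                    tombstones.remove(entry.getKey(), delete);
                }
            } finally {
                delete.lock.unlock();
            }
        }
    }
}
```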
This moves the `payload = null;` statement above the asynchronous
HTTP call. This helps to avoid a race condition with `doClose`,
which asserts that the payload is `null`.
This race is only realistic during shutdown, because the thread can respond
immediately, before the payload is nullified and made freeable; it is not a
realistic scenario in any other situation.
Original commit: elastic/x-pack-elasticsearch@eb3a6ff118
Remove functions without a backing matrix agg
MatrixAgg works across multiple fields and exposing it directly in SQL
does not work. Instead, isolated functions are exposed which get folded
and optimized into one matrix agg per field. Thus not all matrix
functions can be exposed in SQL, at least at this time.
Instead of depending on the plugin directly, depend on the plugin client
jar (matrix-agg-client)
Remove outdated test
Original commit: elastic/x-pack-elasticsearch@ec9b31bf59
The type of BUFFER_LENGTH needs to be an integer (not NULL), and unsigned
should indicate the opposite of signed.
Change isSigned from Object to primitive
Since all the consumers of isSigned expect a primitive, an Object just causes trouble by being null.
Update description table
Original commit: elastic/x-pack-elasticsearch@8e1960bdb5
Use AggregatorTestCase's `newIndexSearcher()` instead. Lucene's
version can randomly wrap the IndexReader with things we can't handle,
like ParallelCompositeReader.
Original commit: elastic/x-pack-elasticsearch@b4c0e9a601
The test job helper randomizes the index name with 1-10 characters,
which can cause the randomized index names to overlap and show fewer
caps than the test expects.
The solution is to just use index names "0"-"24" to ensure none
of the names overlap, and thus the caps don't overlap.
Original commit: elastic/x-pack-elasticsearch@74a6d13213
The arrangement of the final latch meant the latch could count down and
the test could end before the await() triggered, which caused the
thread to be interrupted and fail. The whole arrangement was incorrect
anyhow.
We need to await the latch before sending the search response as before,
but move the final atomicBoolean to the second time the persistent
task status is updated, which is a signal that we are done
and can end the test.
If these tests continue to be flaky, we should probably just remove them.
The headers are tested elsewhere and not required to be tested in this
context.
Original commit: elastic/x-pack-elasticsearch@0cf5603972
Increase the default limit of `index.highlight.max_analyzed_offset` to 1M from the previous 10K.
Enhance the error message shown when the limit is exceeded to include the field name, index name and doc_id.
Relates to https://github.com/elastic/kibana/issues/16764
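The per-index setting can still be tuned explicitly; a hedged sketch using the
Java `Settings` builder (the value shown is arbitrary):
```java
import org.elasticsearch.common.settings.Settings;

// Hedged sketch: override the 1M-character default for a single index.
class HighlightOffsetSetting {
    static Settings customLimit() {
        return Settings.builder()
            .put("index.highlight.max_analyzed_offset", 2_000_000)
            .build();
    }
}
```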
* Clarifies how the query_string query splits the textual part to build a query
Whitespace is no longer considered an operator in 6.x, but the documentation is not clear about it.
This commit changes the example in the documentation and adds a note regarding whitespace and operators.
Closes #28719
It is only a comment, but it can confuse those reading the code.
Used 6.0 as an arbitrary elasticsearch.version value since it is a version that required Java 8.