This allows us to remove another dependency in the decoupling of the XContent
code. Rather than move this class over or decouple it, it can simply be removed.
Relates tangentially to #28504
These classes are used in only two places and can be replaced by the JDK's
`CharArrayReader` and `CharArrayWriter`. The JVM can also perform lock biasing
and elision, as well as escape analysis, to optimize away non-contended locks,
rendering the custom lock-free implementations unnecessary.
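As a quick illustration of the JDK replacements (nothing Elasticsearch-specific here; the snippet only uses `java.io`):

```java
import java.io.CharArrayReader;
import java.io.CharArrayWriter;
import java.io.IOException;

// Round-trip through the JDK classes that take over; their methods are synchronized,
// but for non-contended use the JVM can elide those locks as described above.
public class CharArrayRoundTrip {
    public static void main(String[] args) throws IOException {
        CharArrayWriter writer = new CharArrayWriter();
        writer.write("hello xcontent");
        try (CharArrayReader reader = new CharArrayReader(writer.toCharArray())) {
            char[] buffer = new char[64];
            int read = reader.read(buffer);
            System.out.println(new String(buffer, 0, read));
        }
    }
}
```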
Ingest has been failing to load pipelines from the cluster state into the in-memory
representation when those pipelines are no longer valid. One example of this is a
pipeline with a script processor: if a cluster starts up with scripting disabled,
such pipelines will not be loaded. Even though GETting a pipeline worked, indexing
operations claimed that the pipeline did not exist. This is because the GET reads
from the cluster state while indexing consults an in-memory data structure.
Now, two things happen:
1. Exceptions are suppressed until after the other, valid pipelines are loaded.
2. Failed pipelines are replaced with a placeholder pipeline.
If the pipeline execution service encounters the stubbed pipeline, it is known that
something went wrong at pipeline creation time and that an exception was reported to
the user at some point during start-up (see the sketch below).
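A minimal, self-contained sketch of the placeholder idea; the interface and class below are simplified stand-ins for illustration, not the actual ingest classes:

```java
import java.util.Map;

// Simplified model of a pipeline that failed to load at start-up.
public class PlaceholderPipelineSketch {
    interface Pipeline {
        void execute(Map<String, Object> document) throws Exception;
    }

    static final class PlaceholderPipeline implements Pipeline {
        private final String id;
        private final Exception creationFailure;

        PlaceholderPipeline(String id, Exception creationFailure) {
            this.id = id;
            this.creationFailure = creationFailure;
        }

        @Override
        public void execute(Map<String, Object> document) {
            // Any attempt to index through a pipeline that failed to load surfaces the original error.
            throw new IllegalStateException("pipeline [" + id + "] could not be loaded", creationFailure);
        }
    }
}
```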
Closes #28269.
Running any randomized testing task within Elasticsearch currently fails
if a project has zero tests. This was supposed to be overridable, but the
setting was always 'fail': the system property meant to override it was
passed down to the test runner but never read there. This commit makes
the ifNoTests setting passed to the randomized runner be read from system
properties, continuing to default to 'fail'.
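For illustration, reading an override from a system property with a default of 'fail' looks like the following; the property name used here is an assumption, not necessarily the exact one the build passes down:

```java
public class IfNoTestsSetting {
    public static void main(String[] args) {
        // Falls back to 'fail' when the build does not pass the property down.
        String ifNoTests = System.getProperty("tests.ifNoTests", "fail");
        System.out.println("ifNoTests = " + ifNoTests);
    }
}
```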
This switches the underlying byte output representation used by default in
`XContentBuilder` from `BytesStreamOutput` to a `ByteArrayOutputStream` (an
`OutputStream` can still be specified manually).
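A minimal sketch of supplying the output stream manually, assuming the `XContentFactory.jsonBuilder(OutputStream)` overload and the 6.x `org.elasticsearch.common.xcontent` package:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class ManualOutputStreamBuilder {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // The builder writes into the caller-supplied stream instead of an internal buffer.
        try (XContentBuilder builder = XContentFactory.jsonBuilder(out)) {
            builder.startObject().field("field", "value").endObject();
        }
        System.out.println(out.toString("UTF-8"));   // {"field":"value"}
    }
}
```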
This is groundwork to allow us to decouple `XContent*` from the rest of the ES
core code so that it may be factored into a separate jar.
Since `BytesStreamOutput` was not using the recycling instance of `BigArrays`,
this should not affect the circuit breaking capabilities elsewhere in the
system.
Relates to #28504
* Factor UnknownNamedObjectException into its own class
This moves the inner class `UnknownNamedObjectException` from
`NamedXContentRegistry` into a top-level class. This is so that
`NamedXContentRegistry` doesn't have to depend on `StreamInput` and `StreamOutput`.
Relates to #28504
This is related to #27260. The transport-nio plugin needs socket
permissions to operate as a transport. This commit gives it these
permissions in the policy file.
Fixed bugs that were exposed by this change:
* Duplicate query leaves were not detected in a multi-level boolean query.
* Tracking fields for numeric range queries did not work properly.
* The sorting used to find the least restrictive clauses in a disjunction
query did not work either.
This reverts commit f057fc294a.
The rescorer does not re-sort the collapsed values inside the top docs
during rescoring. For this reason the Lucene rescorer is not compatible
with collapsing.
Relates #27243
This commit fixes the test progress logging to not produce an NPE when
there are no tests run. The onQuit method is always called, but onStart
would not be called if no tests match the test patterns.
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily, both are encoded in
exactly the same way, so no backwards-compatibility layer is needed.
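The claim about identical encoding can be pictured with a self-contained sketch; the model below assumes both paths reduce to writing the enum's ordinal, and none of the names are the real Elasticsearch stream classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Simplified model: both the old type-specific write and a generic enum write reduce
// to serializing the ordinal, so the bytes on the wire are identical.
public class EnumEncodingSketch {
    enum FakeXContentType { JSON, SMILE, YAML, CBOR }

    static byte[] writeSpecific(FakeXContentType type) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(type.ordinal()); // stands in for the old writeTo
        return bytes.toByteArray();
    }

    static <E extends Enum<E>> byte[] writeGeneric(E value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(value.ordinal()); // stands in for the generic writeEnum
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(Arrays.equals(
            writeSpecific(FakeXContentType.SMILE), writeGeneric(FakeXContentType.SMILE))); // true
    }
}
```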
Relates to #28504
* Remove BytesRef usage from XContentParser and its subclasses
This removes all the BytesRef usage from XContentParser in favor of directly
returning a CharBuffer (this was originally what was returned; it was just
immediately wrapped in a BytesRef).
Relates to #28504
* Rename method after Ryan's feedback
The original example resulted in a 400 error because the example date is `-` separated instead of the `.` separation used by the default format:
```
failed to parse date field [2001-01-01] with format [YYYY.MM.dd]
```
Adds a usage example of the JLH score used in the significant terms aggregation.
All other methods for calculating the significance score have such an example.
Closes #28513
* Wrap stream passed to createParser in try-with-resources
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` instances in the enclosing (or a new) try-with-resources block.
This ensures the stream returned by `BytesReference.streamInput()` is closed.
Relates to #28504
* Use try-with-resources instead of closing in a finally block
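A self-contained illustration of the pattern; in the real change the resource is the stream from `BytesReference.streamInput()` handed to `createParser`, but a plain `InputStream` stands in here:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// The point is that try-with-resources closes the stream even when the consumer throws.
public class TryWithResourcesSketch {
    static int consume(InputStream stream) throws IOException {
        int count = 0;
        while (stream.read() != -1) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "{\"field\":\"value\"}".getBytes(StandardCharsets.UTF_8);
        try (InputStream stream = new ByteArrayInputStream(payload)) {
            System.out.println(consume(stream) + " bytes consumed");
        } // stream.close() runs here, success or failure
    }
}
```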
This change ensures that we ignore terms removed during analysis rather than returning a match_no_docs query for the part
that contains the stop word. For instance, a query like "the AND fox" should ignore "the" if it is considered a stop word, instead of
adding a match_no_docs query.
This change also fixes the analysis of prefix terms that start with a stop word (e.g. `the*`). In such a case, if `analyze_wildcard` is true and `the`
is considered a stop word, this part of the query was rewritten into a match_no_docs query. Since it's a prefix query, this change forces the prefix query
on `the` even if it is removed by the analysis.
Fixes #28855, fixes #28856
Pruning tombstones is quite expensive since we have to walk through all
deletes in the live version map and acquire a lock on every value, even when
it is impossible to prune it. This change adds a pre-check on whether a delete
is old enough; if it is not, we skip acquiring the lock.
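A self-contained sketch of the pre-check; class and field names are illustrative, not the actual live version map internals:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Age check first, lock second: only genuinely prunable entries pay for the lock.
public class TombstonePruner {
    static final class Tombstone {
        final long timestampMillis;
        final ReentrantLock lock = new ReentrantLock();

        Tombstone(long timestampMillis) {
            this.timestampMillis = timestampMillis;
        }
    }

    static void prune(Map<String, Tombstone> tombstones, long nowMillis, long gcDeletesMillis) {
        Iterator<Map.Entry<String, Tombstone>> it = tombstones.entrySet().iterator();
        while (it.hasNext()) {
            Tombstone tombstone = it.next().getValue();
            // Cheap pre-check: deletes that are too recent can never be pruned, so skip the lock.
            if (nowMillis - tombstone.timestampMillis <= gcDeletesMillis) {
                continue;
            }
            tombstone.lock.lock();
            try {
                it.remove();
            } finally {
                tombstone.lock.unlock();
            }
        }
    }
}
```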
Increase the default limit of `index.highlight.max_analyzed_offset` to 1M from the previous 10K.
Enhance the error message emitted when the limit is exceeded to include the field name, index name, and doc_id.
Relates to https://github.com/elastic/kibana/issues/16764
* Clarifies how the query_string query splits the textual part to build a query
Whitespace is no longer considered an operator in 6.x, but the documentation is not clear about this.
This commit changes the example in the documentation and adds a note regarding whitespace and operators.
Closes #28719
It is only a comment, but it can confuse those reading the code.
Used 6.0 as an arbitrary elasticsearch.version value since it is a version that required Java 8.
Values for the network.host setting can often contain a colon which is a
character that is considered special by YAML (these arise in IPv6
addresses and some of the special tags like ":ipv4"). As such, these
values need to be quoted or a YAML parser will be unhappy with
them. This commit adds a note to the docs regarding this.
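For example, in `elasticsearch.yml` (the addresses below are illustrative):

```yaml
# Unquoted, the colons in these values would be misinterpreted by the YAML parser.
network.host: "::1"
# network.host: "_eth0:ipv4_"
```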
When virtual lock is not possible because JNA is unavailable, we log a
warning message. Yet, this log message refers to mlockall rather than
virtual lock, presumably because of a copy/paste error. This commit
fixes this issue.
This commit specifies that the working directory of the destroy task for
destroying test VMs is the root of the build. This is necessary because, if
the build was run from a sub-directory, the Vagrant command would otherwise
not be able to locate the Vagrantfile for the VMs in question.
Today we do not destroy Vagrant boxes before tests. This is because
constantly reprovisioning these boxes is time-consuming. Yet, not
destroying these boxes can lead to state being left around that impacts
subsequent test runs. To address this, we now always destroy these boxes
before tests and provide a flag to set if this is not desired while
iterating locally.
This commit replaces `org.apache.logging.log4j.util.Supplier` with
`java.util.function.Supplier` in non-logging code. These usages are not
incorrect, but rather accidental; I think our intention was to use the JDK's
Supplier in these places.
The rest test gradle plugin already uses the zip distribution by default, so
specifying it explicitly is not necessary. These are leftovers from before zip
was the default for rest tests.
* Reject regex search if regex string is too long (#28344)
* Add docs
* Introduce the index-level setting `index.max_regex_length`
to control the maximum length of the regular expression (see the sketch below)
Closes #28344
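A hedged sketch of the guard; only the setting name comes from this change, while the method, message, and limit below are illustrative:

```java
// Rejects a regular expression whose source string is longer than the configured maximum.
public class RegexLengthGuard {
    static void checkRegexLength(String regex, int maxRegexLength) {
        if (regex.length() > maxRegexLength) {
            throw new IllegalArgumentException(
                "regex of length [" + regex.length() + "] exceeds the limit of ["
                    + maxRegexLength + "] set by [index.max_regex_length]");
        }
    }

    public static void main(String[] args) {
        checkRegexLength("foo.*bar", 1000);                                      // fine
        checkRegexLength(new String(new char[2000]).replace('\0', 'a'), 1000);   // throws
    }
}
```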