`BaseRandomBinaryDocValuesRangeQueryTestCase.testRandomBig` should only run with
nightly tests. It doesn't make sense to make it part of every test run.
`UUIDTests` had a slow test for compression, which I made a bit faster by
decreasing the number of indexed docs.
This commit fixes the formatting of the values in the composite
aggregation response. `date` fields should return timestamps as longs
when used in a `terms` source and `ip` fields should always be formatted as strings.
This commit also fixes the parsing of the `after` key for these field types.
Finally, this commit disables the index optimization for the `ip` field and any source that provides a `missing` value.
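For illustration, a minimal sketch of a composite `terms` source over a `date` and an `ip` field using the 6.x Java API (the field names here are assumptions):

```java
import java.util.Arrays;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;

// hypothetical fields: after this fix the composite keys come back as a long
// timestamp for the date source and as a string for the ip source
CompositeAggregationBuilder composite = new CompositeAggregationBuilder("my_buckets",
    Arrays.asList(
        new TermsValuesSourceBuilder("date").field("timestamp"),
        new TermsValuesSourceBuilder("ip").field("client_ip")));
```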
This commit simplifies the invocations to
Translog#closeOnTragicEvent. This method already catches all possible
exceptions and suppresses the non-AlreadyClosedExceptions into the
exception that triggered the invocation. Therefore, there is no need for
callers to do this same logic (which would never execute).
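The shape of the simplification at call sites, roughly (a sketch, not the exact ES code):

```java
// before: callers duplicated the suppression logic, which could never execute
try {
    closeOnTragicEvent(ex);
} catch (final Exception inner) {
    ex.addSuppressed(inner);
}

// after: the method already catches everything and suppresses
// non-AlreadyClosedExceptions into ex, so a plain call suffices
closeOnTragicEvent(ex);
```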
* Move Streams.copy into elasticsearch-core and make a multi-release jar
This moves the method `Streams.copy(InputStream in, OutputStream out)` into the
`elasticsearch-core` project (inside the `o.e.core.internal.io` package). It
also makes this class into a multi-release class where the Java 9 equivalent
uses `InputStream#transferTo`.
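A minimal sketch of the two variants (the buffer size in the fallback is an assumption):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Java 8 sources: manual buffer loop
public static long copy(final InputStream in, final OutputStream out) throws IOException {
    final byte[] buffer = new byte[8192];
    long total = 0;
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
        total += read;
    }
    return total;
}

// Java 9 sources, packaged under META-INF/versions/9 in the multi-release jar:
//     public static long copy(InputStream in, OutputStream out) throws IOException {
//         return in.transferTo(out);
//     }
```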
This is a followup from
https://github.com/elastic/elasticsearch/pull/29300#discussion_r178147495
* Move ObjectParser into the x-content lib
This moves `ObjectParser`, `AbstractObjectParser`, and
`ConstructingObjectParser` into the libs/x-content dependency. This decoupling
allows them to be used for parsing for projects that don't want to depend on the
entire Elasticsearch jar.
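As a quick illustration of the API that moves (a hypothetical value class; import paths as of the 6.x line):

```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentParser;

public class Point {
    static final ObjectParser<Point, Void> PARSER = new ObjectParser<>("point", Point::new);
    static {
        PARSER.declareInt(Point::setX, new ParseField("x"));
        PARSER.declareInt(Point::setY, new ParseField("y"));
    }

    private int x;
    private int y;

    void setX(int x) { this.x = x; }
    void setY(int y) { this.y = y; }

    static Point fromXContent(XContentParser parser) {
        return PARSER.apply(parser, null);
    }
}
```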
Relates to #28504
* Move Tuple into elasticsearch-core
This allows us to use Tuple from other projects that don't want to rely on the
entire Elasticsearch jar.
I have also added very simple tests, since there were none.
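Usage is trivial; a sketch:

```java
import org.elasticsearch.common.collect.Tuple;

Tuple<String, Integer> pair = Tuple.tuple("shards", 5);
String key = pair.v1();  // "shards"
int value = pair.v2();   // 5
```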
Relates tangentially to #28504
Today we close the translog writer tragically if we experience any I/O
exception on a write. These tragic closes lead to us closing the
translog and failing the engine. Yet, there is one case that is missed
which is when we touch the write channel during a read (checking if
reading from the writer would put us past what has been flushed). This
commit addresses this by closing the writer tragically if we encounter
an I/O exception on the write channel while reading. This becomes
interesting when we consider that this method is invoked from the engine
through the translog as part of getting a document from the
translog. This means we have to consider closing the translog here as
well which will cascade up into us finally failing the engine.
Note that there is no semantic change to, for example, primary/replica
resync and recovery. These actions will take a snapshot of the translog
which syncs the translog to disk. If an I/O exception occurs during the
sync we already close the writer tragically and once we have synced we
do not ever read past the position that was synced while taking the
snapshot.
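Roughly, the read path now looks like this (names are illustrative, not the exact ES methods):

```java
// an I/O failure while touching the writer's channel during a read now fails
// the writer tragically instead of only surfacing the exception to the caller
try {
    return writer.read(location);
} catch (final IOException e) {
    writer.closeWithTragicEvent(e);
    throw e;
}
```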
* Fixes query_string query equals timezone check
This change fixes a bug where two `QueryStringQueryBuilder`s were found
to be equal if they had the same timezone set, even if the query strings
in the builders were different.
Closes #29403
* Adds mutate function to QueryStringQueryBuilderTests
* iter
Fixes instances of
- Equals methods without type check
- Equals methods where a field of `this` was compared to the same
field of `this` instead of the corresponding field of the `that` object
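Both patterns are easy to miss; here is a sketch of an equals method with the two fixes applied (a hypothetical class, not the actual ES builders):

```java
import java.util.Objects;

@Override
public boolean equals(Object obj) {
    if (this == obj) {
        return true;
    }
    if (obj == null || getClass() != obj.getClass()) {  // the previously missing type check
        return false;
    }
    MyQueryBuilder that = (MyQueryBuilder) obj;
    return Objects.equals(this.queryString, that.queryString)  // was this.x vs this.x
        && Objects.equals(this.timeZone, that.timeZone);
}
```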
Since #26542 the NodeVersionAllocationDecider tries to explain its NO decisions
as follows:
... may not support codecs or postings formats for a newer Lucene version
However, this message often appears during a rolling upgrade, and experience
has shown that it seems to cause more confusion and worry than it needs to.
This change fixes that by removing the explanation again, reducing the message
to a statement of fact about the respective nodes' versions.
Additionally, the same wording was used for version incompatibilities when
allocating a primary (vs its previous location) and a replica (vs its primary).
This change separates these two cases so they can have separate, clearer
wording.
Fixes #29228
This change fixes the handling of the `quote_field_suffix` option on `query_string`
query. The expansion was not applied to default fields query.
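For example, a sketch with the Java API (the `.exact` suffix is an assumption):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.QueryStringQueryBuilder;

// with no explicit fields the query runs against the default fields; the quoted
// phrase should now be expanded against "<field>.exact" there as well
QueryStringQueryBuilder query = QueryBuilders.queryStringQuery("\"quick brown fox\"")
    .quoteFieldSuffix(".exact");
```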
Closes #29324
`action.master.force_local` was only ever used internally and never documented. It was one of those settings that were
automatically added to a tribe node, to make sure that cluster state read operations would work locally rather than failing when trying to forward the request to the master (as the tribe node never had a master).
Given that we recently removed the tribe node, we can also remove this setting.
Today when you input a byte size setting that is out of bounds for the
setting, you get an error message that indicates the maximum value of
the setting. The problem is that because we use ByteSizeValue#toString, we
end up with a representation of the value that does not really tell you
what the bound is. For example, if the bound is 2^31 - 1 bytes, the
output would be 1.9gb, which does not really tell you what the limit is,
as there are many byte size values that we format to the same 1.9gb with
ByteSizeValue#toString. We have a method ByteSizeValue#getStringRep that uses
the input units of the value as the output units for the string
representation, so we lose nothing by using it to report the
bound. This commit does exactly that.
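For example (a sketch, assuming a bound of 2^31 - 1 bytes):

```java
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

ByteSizeValue bound = new ByteSizeValue(Integer.MAX_VALUE, ByteSizeUnit.BYTES);
bound.toString();      // "1.9gb"        -- lossy: many values print the same way
bound.getStringRep();  // "2147483647b"  -- exact: reuses the input units
```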
Before doing any kind of validation on a new mapping, we should first do the multi-type validation in
order to provide better error messages. For #29313, this means that the exception message will be
Rejecting mapping update to [range_index_new] as the final mapping would have more than 1 type:
[_doc, mytype]
instead of
[expected_attendees] is defined as an object in mapping [mytype] but this name is already used for
a field in other types
Today we report thread pool info using a common object. This means that
we use a shared set of terminology that is not consistent with the
terminology used to configure thread pools. This holds in particular
for the minimum and maximum number of threads in the thread pool, where
we use the following terminology:

    thread pool info | fixed | scaling
    min              | size  | core
    max              | size  | max
A previous change addressed this for the nodes info API. This commit
changes the display of thread pool info in the cat thread pool API too
to be dependent on the type of the thread pool so that we can align the
terminology in the output of thread pool info with the terminology used
to configure a thread pool.
Today we rely on `IndexWriter#hasDeletions` to check if an index
contains "update" operations. However, this check considers both deletes
and updates. This commit replaces that check by tracking and checking
Lucene operations explicitly, which gives us stronger assertions.
Correctly set up the classpath/dependencies and fix the checkstyle task, which was partly broken because of the delayed setup of the Java 9 source sets. This also cleans up the packaging of META-INF and prepares for the forbiddenapis 2.6 upgrade.
relates #29292
This improves the way similarities are plugged in in order to:
- reject the classic similarity on 7.x indices and emit a deprecation
warning otherwise
- reject unknown parameters on 7.x indices and emit a deprecation
warning otherwise
Even though this breaks the plugin API, I'd like to backport to 7.x so
that users can get deprecation warnings when they are doing something
that will become unsupported in the future.
Closes #23208
Closes #29035
This moves the `Nullable` annotation into the elasticsearch-core project, so it
may be used without relying entirely on the server jar. This will allow us to
decouple more pieces to make them smaller.
In addition, there were two different `Nullable` annotations, these have all
been moved to the ES version rather than the inject version.
We have seen exceptions bubble up to the uncaught exception handler. Checking the blocks can
lead, for example, to an IndexNotFoundException when the indices are resolved. In order to make
TransportMasterNodeAction more resilient against such expected exceptions, this change
wraps the execution of doStart() in a try/catch and informs the listener in case of failure.
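The shape of the change, roughly (local names are assumptions):

```java
// surface expected exceptions to the listener instead of letting them
// escape to the uncaught exception handler
try {
    doStart(clusterState);
} catch (final Exception e) {
    listener.onFailure(e);
}
```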
DiskThresholdDecider currently assumes that the source index of a resize operation (e.g. shrink)
is available, and throws an IndexNotFoundException otherwise, thereby breaking any kind of shard
allocation. This can be quite harmful if the source index is deleted during a shrink, or if the source
index is unavailable during state recovery.
While this behavior has been partly fixed in 6.1 and above (due to #26931), it relies on the order in
which AllocationDeciders are executed (i.e. that ResizeAllocationDecider returns NO, ensuring that
DiskThresholdDecider does not run, something that for example does not hold for the allocation
explain API).
This change adds a more complete fix, and also solves the situation for 5.6.
* Pass script level params into scripted metric aggs (#28819)
Now params that are passed at the script level and at the aggregation level
are merged and can both be used in the aggregation scripts. If there are
any conflicts, aggregation level params will win. This may be followed
by another change detecting that case and throwing an exception to
disallow such conflicts.
* Disallow duplicate parameter names between scripted agg and script (#28819)
If a scripted metric aggregation has aggregation params and script params
which have the same name, throw an IllegalArgumentException when merging
the parameter lists.
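A sketch of the merge with the duplicate-name check (names are illustrative, not the ES internals):

```java
import java.util.HashMap;
import java.util.Map;

static Map<String, Object> mergeParams(Map<String, Object> aggParams, Map<String, Object> scriptParams) {
    final Map<String, Object> merged = new HashMap<>(aggParams);
    for (Map.Entry<String, Object> entry : scriptParams.entrySet()) {
        if (merged.containsKey(entry.getKey())) {
            throw new IllegalArgumentException("Parameter name [" + entry.getKey()
                + "] occurs in both aggregation and script parameters");
        }
        merged.put(entry.getKey(), entry.getValue());
    }
    return merged;
}
```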
I am not sure why we have this leniency for HTTP max content length; it
has been there since the beginning
(5ac51ee93f) with no explanation of its
source. That said, our philosophy today is different than the philosophy
of the past where Elasticsearch would be quite lenient in its handling
of settings and today we aim for predictability for both users and
us. This commit removes leniency in the parsing of
http.max_content_length.
* Begin moving XContent to a separate lib/artifact
This commit moves a large portion of the XContent code from the `server` project
to the `libs/xcontent` project. For the pieces that have been moved, some
helpers have been duplicated to allow them to be decoupled from ES helper
classes. In addition, `Booleans` and `CheckedFunction` have been moved to the
`elasticsearch-core` project.
This decoupling is a move so that we can eventually make things like the
high-level REST client not rely on the entire ES jar, only the parts it needs.
There are some pieces that are still not decoupled, in particular some of the
XContent tests still remain in the server project; this is because they test a
large portion of the pluggable xcontent pieces through
`XContentElasticsearchException`. They may be decoupled in future work.
Additionally, there may be more pieces that we want to move to the xcontent lib
in the future that are not part of this PR; this is a starting point.
Relates to #28504
Fix a couple of minor things in the InternalEngine:
* Rename loadOrGenerateHistoryUUID to reflect that it always generates a UUID
* Move .acquire() call next to the associated try {} block.
Removes a set of assertions in the test framework that verified that
Streamable objects could be serialized and deserialized across different
versions. When this was discussed the consensus was that this approach
has not caught many bugs in a long time and that serialization testing of
objects was best left to their respective unit and integration tests.
This commit also removes a transport interceptor that was used in
ESIntegTestCase tests to make these assertions about objects coming in
or off the wire.
The parsing code for the script query currently silently skips any tokens
it does not know about within its parsing loop. The only token it does
not catch is an array, which means that passing multiple scripts in via an
array will cause only the last script to be parsed, silently dropping
the others. This commit adds validation that arrays are not seen while
parsing.
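The added check, in rough form (a sketch; the exact message is an assumption):

```java
// inside the parsing loop: reject arrays outright instead of silently
// keeping only the last script
if (token == XContentParser.Token.START_ARRAY) {
    throw new ParsingException(parser.getTokenLocation(),
        "[script] query does not support an array of scripts");
}
```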
Since #29260, unsafe commits must be trimmed before opening an engine.
This makes the engine constructor follow Lucene standard semantics and
use the last commit. However, we haven't fully applied this change in some
tests.
Relates #29260
As a follow-up to #28245, this PR removes the logic for selecting the
right start commit from the Engine constructor in favor of explicitly
trimming them in the Store, before the engine is opened. This makes the
constructor in engine follow standard Lucene semantics and use the last
commit.
Relates #28245
Relates #29156
Due to special treatment for the 0xFFFFFF... value in GeoHashUtils'
encodeLatLon method, the geohash for lat 90, lon 180 is incorrectly
encoded as `"000000000000"` instead of `"zzzzzzzzzzzz"`. This commit
removes the special treatment and fixes the issue.
Closes #22163
When deleting a snapshot, it is not necessary to load and parse the
global metadata of the snapshot to delete. Now that indices are stored in
the snapshot metadata file, we have all the information needed to resolve the
shard files to delete.
This commit removes the readSnapshotMetaData() method that was used to
load both global and index metadata files. Test coverage should be
enough as SharedClusterSnapshotRestoreIT already contains several
deletion tests.
Related to #28934
Today we have a few problems with how we handle bad requests:
- handling requests with bad encoding
- handling requests with invalid value for filter_path/pretty/human
- handling requests with a garbage Content-Type header
There are two problems:
- in every case, we give an empty response to the client
- in most cases, we leak the byte buffer backing the request!
These problems are caused by a broader problem: poor handling of
failures while preparing the request for handling, or while preparing
the channel to write to when the response is ready. This commit
addresses these issues by taking a unified
approach to all of them that ensures that:
- we respond to the client with the exception that blew us up
- we do not leak the byte buffer backing the request
We historically removed reading from the transaction log to get consistent
results from _GET calls. There was also the motivation that the read-modify-update
principle we apply should not be hidden from the user. We still agree on the fact
that we should not hide these aspects, but the impact on updates is quite significant,
especially if the same document is updated before it's written to disk and made searchable.
This change adds back the ability to read from the transaction log, but only for update calls.
Calls to the _GET API will always do a refresh if necessary to return consistent results, i.e.
if stored fields or doc values fields are requested.
Closes #26802
When the `BulkProcessor` is used with the high-level REST client, a scheduler is internally created to schedule tasks. Such a scheduler is not exposed to users and needs to be closed once the `BulkProcessor` is closed. There are two ways to close the `BulkProcessor`, though: one is the ordinary `close` method and the other is `awaitClose`. The former closes the scheduler while the latter doesn't, leaving threads lingering.
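For example, a client relying on `awaitClose` (a sketch; `client` and `listener` wiring omitted):

```java
import java.util.concurrent.TimeUnit;
import org.elasticsearch.action.bulk.BulkProcessor;

BulkProcessor processor = BulkProcessor.builder(client::bulkAsync, listener).build();
// ... add requests ...
// waits for in-flight bulks; with this fix it also shuts down the internal scheduler
boolean terminated = processor.awaitClose(30L, TimeUnit.SECONDS);
```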
DocumentParser: The checks for Text and Keyword were masked by the
earlier check for String, which they are child classes of. As String
field types are no longer supported, this check can be removed.
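The general hazard, as a self-contained sketch (hypothetical types, not the actual mapper classes):

```java
class StringType {}
class TextType extends StringType {}
class KeywordType extends StringType {}

static String describe(StringType fieldType) {
    if (fieldType instanceof StringType) {          // always matches first
        return "string";
    } else if (fieldType instanceof TextType) {     // dead code: masked by the branch above
        return "text";
    } else if (fieldType instanceof KeywordType) {  // dead code
        return "keyword";
    }
    return "unknown";
}
```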
Restoring a snapshot, or getting the status of finished
snapshots, currently always loads the global state metadata
file from the repository even if it is not required. This
slows down the restore process (or the listing of statuses)
and can also be an issue if the global state cannot be
deserialized (because it has unknown customs, for example).
This commit splits the Repository.getSnapshotMetadata()
method into two distinct methods, getGlobalMetadata()
and getIndexMetadata(), that are now called only when needed.
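In rough outline, the split looks like this (method names from the commit message; the signatures are assumptions):

```java
// load global state only when a restore actually needs it
MetaData getGlobalMetadata(SnapshotId snapshotId);

// load per-index metadata on demand
IndexMetaData getIndexMetadata(SnapshotId snapshotId, IndexId index);
```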