I am not sure why we have this leniency for the HTTP max content length
setting; it has been there since the beginning
(5ac51ee93f) with no explanation of its
source. That said, our philosophy today is different from the philosophy
of the past: where Elasticsearch used to be quite lenient in its handling
of settings, today we aim for predictability for both users and
us. This commit removes the leniency in the parsing of
http.max_content_length.
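For context, a bounded byte-size setting along these lines makes out-of-range values fail at parse time instead of being silently replaced. This is only a sketch: the constant name and the exact bounds are assumptions, not necessarily what the commit ships.

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

// Sketch: a strict, bounded setting; out-of-range values throw at parse time.
public static final Setting<ByteSizeValue> SETTING_HTTP_MAX_CONTENT_LENGTH =
    Setting.byteSizeSetting(
        "http.max_content_length",
        new ByteSizeValue(100, ByteSizeUnit.MB),  // default (assumed)
        new ByteSizeValue(0),                     // min bound (assumed)
        new ByteSizeValue(Integer.MAX_VALUE),     // max bound (assumed)
        Property.NodeScope);
```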
* Begin moving XContent to a separate lib/artifact
This commit moves a large portion of the XContent code from the `server` project
to the `libs/xcontent` project. For the pieces that have been moved, some
helpers have been duplicated to allow them to be decoupled from ES helper
classes. In addition, `Booleans` and `CheckedFunction` have been moved to the
`elasticsearch-core` project.
This decoupling is a step toward eventually making things like the
high-level REST client not rely on the entire ES jar, only the parts it needs.
There are some pieces that are still not decoupled; in particular, some of the
XContent tests still remain in the server project because they test a
large portion of the pluggable xcontent pieces through
`XContentElasticsearchException`. They may be decoupled in future work.
Additionally, there may be more pieces that we want to move to the xcontent lib
in the future that are not part of this PR; this is a starting point.
Relates to #28504
* Add test matrix axis files for periodic java testing
* Add properties file defining java versions to use
* We have no openjdk8
* Remove openjdk
Oracle Java and OpenJDK basically only differ in license, so we don't
need to test both.
Fix a couple of minor things in the InternalEngine:
* Rename loadOrGenerateHistoryUUID to reflect that it always generates a UUID
* Move .acquire() call next to the associated try {} block.
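On the second point, the intended shape is roughly the following. This is an illustrative sketch with hypothetical names, not the actual engine code:

```java
// acquire() sits immediately before the try, so the finally block
// visibly pairs with exactly one acquisition.
Releasable ref = resource.acquire();  // hypothetical resource
try {
    doWork(ref);
} finally {
    ref.close();
}
```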
Today this part of the documentation just says that Geo queries are not 100%
accurate, but in fact we can be more precise about which kinds of queries see
which kinds of error. This commit clarifies this point.
GeoJSON did not originally enforce a specific ordering of vertices in
a polygon, but it now does. We occasionally get reports of Elasticsearch
rejecting apparently-valid GeoJSON because of badly oriented polygons, and it's
helpful to be able to point at this bit of the documentation when responding.
Removes a set of assertions in the test framework that verified that
Streamable objects could be serialized and deserialized across different
versions. When this was discussed the consensus was that this approach
has not caught many bugs in a long time and that serialization testing of
objects was best left to their respective unit and integration tests.
This commit also removes a transport interceptor that was used in
ESIntegTestCase tests to make these assertions about objects going on
or coming off the wire.
The parsing code for the script query currently silently skips past any
tokens it does not know about within its parsing loop. The only token it
does not catch is an array, which means passing multiple scripts in via an
array will cause only the last script to be parsed and used, silently
dropping the others. This commit adds validation that arrays are not seen
while parsing.
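A sketch of the added validation; the surrounding loop is simplified, but `XContentParser.Token` and `ParsingException` are the types such parsers use:

```java
import org.elasticsearch.common.ParsingException;
import org.elasticsearch.common.xcontent.XContentParser;

// Simplified parsing loop: an array token is now an explicit error instead
// of being silently skipped (which kept only the last script).
XContentParser.Token token;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
    if (token == XContentParser.Token.START_ARRAY) {
        throw new ParsingException(parser.getTokenLocation(),
            "[script] query does not support an array of scripts");
    }
    // ... handle field names, objects, and values as before ...
}
```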
We use a latch when sending requests during tests so that we do not hang
forever waiting for replies on those requests. This commit increases the
timeout on that latch to 30 seconds because sometimes 10 seconds is just
not enough.
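In other words, something like this minimal sketch, where the request plumbing is hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

CountDownLatch latch = new CountDownLatch(1);
sendRequest(request, response -> latch.countDown());  // hypothetical callback
// 30 seconds instead of 10: fail the test rather than hang forever.
assertTrue("timed out waiting for a reply", latch.await(30, TimeUnit.SECONDS));
```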
This commit changes the sysprop for overriding the branch bwc builds use
to be branch specific. There are 3 different bwc branches built, but all
of them currently read the exact same sysprop. For example, with this change
and current branches, you can now specify `-Dtests.bwc.refspec.6.x=my_6x`
and it will build only next-minor-snapshot with that branch, while
next-bugfix-snapshot will continue to use 5.6.
Since #29260, unsafe commits must be trimmed before opening an engine.
This makes the engine constructor follow Lucene standard semantics and
use the last commit. However, we haven't fully applied this change in some
tests.
Relates #29260
As a follow-up to #28245, this PR removes the logic for selecting the
right start commit from the Engine constructor in favor of explicitly
trimming them in the Store before the engine is opened. This makes the
engine constructor follow standard Lucene semantics and use the last
commit.
Relates #28245
Relates #29156
Due to special treatment of the 0xFFFFFF... value in GeoHashUtils'
encodeLatLon method, the geohash for lat 90, lon 180 is incorrectly
encoded as `"000000000000"` instead of `"zzzzzzzzzzzz"`. This commit
removes the special treatment and fixes the issue.
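A regression-test sketch of the expected behavior, assuming a stringEncode(lon, lat, level) helper:

```java
// The north-east corner of the globe must encode to the all-'z' geohash
// rather than wrapping around to all zeros.
assertEquals("zzzzzzzzzzzz", GeoHashUtils.stringEncode(180.0, 90.0, 12));
```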
Closes #22163
When deleting a snapshot, it is not necessary to load and parse the
global metadata of the snapshot to delete. Now that indices are stored in
the snapshot metadata file, we have all the information needed to resolve
the shard files to delete.
This commit removes the readSnapshotMetaData() method that was used to
load both global and index metadata files. Test coverage should be
enough as SharedClusterSnapshotRestoreIT already contains several
deletion tests.
Related to #28934
The sysprop repos.mavenLocal may be used to add the local .m2 maven
repository for testing snapshots of locally built dependencies.
Unfortunately this has to be checked in two different places (they cannot
be shared, due to buildSrc being built essentially as a separate
project), and the casing of the string sysprop lookups did not align.
This commit fixes BuildPlugin's checking of repos.mavenLocal to use the
correct casing (camelCase, to match the gradle dsl element).
Today we have a few problems with how we handle bad requests:
- handling requests with bad encoding
- handling requests with an invalid value for filter_path/pretty/human
- handling requests with a garbage Content-Type header
In each of these cases there are two problems:
- we give an empty response to the client
- in most cases, we leak the byte buffer backing the request!
These problems are caused by a broader one: poor handling of failures while
preparing the request for handling, or the channel to write the response to
when it is ready. This commit addresses these issues by taking a unified
approach to all of them (sketched below) that ensures that:
- we respond to the client with the exception that blew us up
- we do not leak the byte buffer backing the request
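Roughly, the unified shape is the following; the helper names here are hypothetical, not the actual transport code:

```java
try {
    // building the rest request may throw on bad encoding, bad parameter
    // values, or a garbage Content-Type header
    RestRequest restRequest = buildRestRequest(httpRequest);  // hypothetical
    dispatcher.dispatchRequest(restRequest, channel);
} catch (Exception e) {
    // respond with the exception that blew us up, not an empty response
    sendErrorResponse(channel, e);                            // hypothetical
    // and release the byte buffer backing the request so it does not leak
    httpRequest.release();
}
```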
We historically removed reading from the transaction log to get consistent
results from _GET calls. There was also the motivation that the
read-modify-update principle we apply should not be hidden from the user. We
still agree that we should not hide these aspects, but the impact on updates
is quite significant, especially if the same document is updated before it's
written to disk and made searchable.
This change adds back the ability to read from the transaction log, but only
for update calls. Calls to the _GET API will always do a refresh if necessary
to return consistent results, i.e. if stored fields or DocValues fields are
requested.
Closes #26802
When the `BulkProcessor` is used with the high-level REST client, a scheduler is created internally to schedule tasks. This scheduler is not exposed to users and needs to be closed once the `BulkProcessor` is closed. There are two ways to close the `BulkProcessor` though: one is the ordinary `close` method and the other is `awaitClose`. The former closes the scheduler while the latter doesn't, leaving threads lingering.
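A minimal usage sketch, following the builder shape from the 6.x high-level REST client docs (`listener` is assumed to be a `BulkProcessor.Listener` defined elsewhere); both shutdown paths should terminate the internal scheduler:

```java
BulkProcessor processor = BulkProcessor.builder(client::bulkAsync, listener).build();
processor.add(new IndexRequest("index", "doc").source("field", "value"));
// awaitClose must stop the internal scheduler too, or its threads linger.
boolean terminated = processor.awaitClose(30, TimeUnit.SECONDS);
```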
We previously had a property to specify the location of the REST test
spec files, but it was removed in an earlier refactoring while its
documentation was left behind. This commit removes the last remaining
vestige of this parameter.
DocumentParser: the checks for Text and Keyword were masked by the
earlier check for String, of which they are both child classes. As String
field types are no longer supported, this check can be removed.
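The masking problem, in sketch form with hypothetical type names, is the classic instanceof-ordering issue:

```java
if (fieldType instanceof StringFieldType) {          // matches Text and Keyword
    // ... handled here first ...                    // too, as both extend it
} else if (fieldType instanceof TextFieldType) {     // dead code: never reached
    // ...
} else if (fieldType instanceof KeywordFieldType) {  // dead code: never reached
    // ...
}
```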
Restoring a snapshot, or getting the status of finished
snapshots, currently always loads the global state metadata
file from the repository even if it is not required. This
slows down the restore process (or the listing of statuses)
and can also be an issue if the global state cannot be
deserialized (because it has unknown customs, for example).
This commit splits the Repository.getSnapshotMetadata()
method into two distinct methods, getGlobalMetadata()
and getIndexMetadata(), that are now called only when
needed (see the sketch below).
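A sketch of the split, using the method names from this commit; the parameter types are assumptions:

```java
// Callers now load only the piece of snapshot metadata they actually need.
MetaData getGlobalMetadata(SnapshotId snapshotId);
IndexMetaData getIndexMetadata(SnapshotId snapshotId, IndexId indexId);
```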
When a module or plugin registers that it has a client JAR, we copy
artifacts like the Javadoc and sources JARs as the JARs for the client
as well (with -client added to the name). I previously had to disable
the Javadoc task on JDK 10 due to a bug in bin/javadoc. After JDK 10
went GA without a fix for this bug, I added a workaround to fix the
Javadoc task on JDK 10. However, I made a mistake re-enabling the
previously skipped Javadoc tasks and missed the one that copies the
Javadoc JAR for client JARs. This commit fixes that issue.
* Decouple NamedXContentRegistry from ElasticsearchException
This commit decouples `NamedXContentRegistry` from using either
`ElasticsearchException`, `ParsingException`, or `UnknownNamedObjectException`.
This will allow us to move NamedXContentRegistry to its own lib as part of the
xcontent extraction work.
Relates to #28504
HttpInfo is passed the maxContentLength as a parameter, but this value should
never be negative. This commit fixes the test to pass only a positive random
value.
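For instance, using the test framework's random helpers; the constructor arguments here are assumptions:

```java
// A strictly positive random length keeps the input within the contract.
long maxContentLength = randomLongBetween(1, Long.MAX_VALUE);
HttpInfo httpInfo = new HttpInfo(boundTransportAddress, maxContentLength);
```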
* Remove all dependencies from XContentBuilder
This commit removes all of the non-JDK dependencies from XContentBuilder, with
the exception of `CollectionUtils.ensureNoSelfReferences`. It adds a third
extension point around dealing with time-based fields and formatters to work
around the Joda dependency.
This decoupling allows us to be able to move XContentBuilder to a separate lib
so it can be available for things like the high level rest client.
Relates to #28504
In 5.2 `ignore_unmapped` was added to `inner_hits` in order to ignore invalid mappings.
This value was automatically set to the value defined in the parent query (`nested`, `has_child`, `has_parent`), but the refactoring of parent/child in 5.6 unintentionally removed this behavior.
This commit restores this behavior and also makes sure that we always automatically enforce this value when the query builder is used directly (previously this was only done during XContent deserialization).
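For example, when building the query directly (a usage sketch against the query builders):

```java
NestedQueryBuilder query = QueryBuilders
    .nestedQuery("comments", QueryBuilders.matchAllQuery(), ScoreMode.Avg)
    .ignoreUnmapped(true)                 // now propagated to the inner hits
    .innerHit(new InnerHitBuilder());
```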
Closes #29071
The default timeout (e.g. 10 seconds) may not be enough for CI to
re-allocate shards after the partition is healed. This commit increases
the timeout to 30 seconds and enables logging in order to have more
detailed information in case this test fails again.
Closes #29060
This was the plan from day one, but due to a silly bug nodes were immediately retried after they were marked as dead for the first time. From the second time on, the expected backoff was applied.
In #testPruneOnlyDeletesAtMostLocalCheckpoint, we create a new engine
but mistakenly use the same translog directory as the existing engine.
This prevents translog files from being cleaned up when closing the engines.
ERROR 0.12s J2 | InternalEngineTests.testPruneOnlyDeletesAtMostLocalCheckpoint <<< FAILURES!
> Throwable #1: java.io.IOException: could not remove the following files (in the order of attempts):
> translog-primary-060/translog-2.tlog: java.io.IOException: access denied:
This commit makes sure to use a separate directory for each engine in
this test.
When processing an append-only operation, the primary knows that the
operation can only conflict with another instance of the same operation.
This is true as the id was freshly generated. However, this property doesn't hold
for replicas. As soon as an auto-generated ID was indexed into the
primary, it can be exposed to a search and users can issue a follow-up
operation on it. In extremely rare cases, the follow-up operation can
arrive and be processed on a replica before the original append-only
request. In this case we can't simply proceed with the append-only
request and blindly add it to the index without consulting the version
map.
The following scenario can cause a difference between the primary and a
replica.
1. Primary indexes an auto-gen-id doc. (id=X, v=1, s#=20)
2. A refresh cycle happens on primary
3. The new doc is picked up and modified, say by a delete-by-query
request; the primary gets a delete doc (id=X, v=2, s#=30)
4. The delete doc is processed first on the replica (id=X, v=2, s#=30)
5. The indexing operation arrives on the replica; since it's an auto-gen-id
request and the retry marker is lower, we put it into Lucene without
any check. The replica now has a doc the primary doesn't have.
To deal with a potential conflict between an append-only operation and a
normal operation on replicas, we need to rely on sequence numbers. This
commit maintains the max seq# of non-append-only operations on the replica,
and applies the append-only optimization only if the operation's seq# is
higher than the seq# of all non-append-only operations.
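In sketch form, with names inferred from the commit message and the engine's locking elided:

```java
// Track the highest seq# of any non-append-only operation seen on the replica.
private volatile long maxSeqNoOfNonAppendOnlyOperations = SequenceNumbers.NO_OPS_PERFORMED;

// Called for every non-append-only operation (deletes, normal indexing, retries);
// simplified here — the real engine would update this under its per-ID lock.
private void advanceMaxSeqNoOfNonAppendOnly(long seqNo) {
    maxSeqNoOfNonAppendOnlyOperations = Math.max(maxSeqNoOfNonAppendOnlyOperations, seqNo);
}

// The optimized path (no version-map lookup) is safe only above the watermark.
private boolean mayOptimizeAppendOnly(long seqNo) {
    return seqNo > maxSeqNoOfNonAppendOnlyOperations;
}
```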
The vagrant test plugin adds tasks for the groovy packaging tests,
which run after the bats packaging test tasks. Rename the 'bats'
configuration to 'packaging' and remove the option to inherit
archives from this configuration.