This commit removes the usage of system properties for the HttpAsyncClient as this overrides some
defaults that we intentionally change. In order to set the default SSLContext to the system context
we set the SSLContext on the builder explicitly.
Closes #27827
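For illustration, a minimal sketch of the builder-based approach, assuming the Apache HttpAsyncClientBuilder API; the class and setup below are illustrative, not the actual client construction code:

```java
import javax.net.ssl.SSLContext;

import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;

public class SystemSslContextExample {
    public static void main(String[] args) throws Exception {
        // Instead of useSystemProperties(), which would also override other
        // client defaults, only the SSLContext is taken from the JVM default.
        CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
            .setSSLContext(SSLContext.getDefault())
            .build();
        client.start();
        client.close();
    }
}
```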
This reverts commit e04e5ab037 as we no longer need the increased
logging for the mixed cluster tests. This will reduce the size of logs for some build failures.
This commit is related to #27260. It adds a base NioGroup for use in
different transports. This class creates and starts the underlying
selectors. Different protocols or transports are established by passing
a ChannelFactory to the bindServerChannel or openChannel
methods. This allows a TcpChannelFactory to be passed that will
create and register channels supporting the Elasticsearch TCP binary
protocol, or a channel factory that will create HTTP channels (or others).
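A rough sketch of the described design, using hypothetical simplified interfaces rather than the actual org.elasticsearch.nio classes; the point is that the group owns and starts the selectors and stays protocol-agnostic, while the protocol comes from the ChannelFactory the caller passes in:

```java
import java.io.IOException;
import java.net.InetSocketAddress;

// Hypothetical simplified shapes, for illustration only.
interface NioChannel {}
interface NioServerChannel extends NioChannel {}

interface ChannelFactory<S extends NioServerChannel, C extends NioChannel> {
    S createServerChannel(InetSocketAddress address) throws IOException;
    C createChannel(InetSocketAddress address) throws IOException;
}

interface Group {
    // The group creates and starts the selectors; callers decide the protocol
    // (TCP binary transport, HTTP, ...) by supplying the matching factory.
    <S extends NioServerChannel, C extends NioChannel> S bindServerChannel(
            InetSocketAddress address, ChannelFactory<S, C> factory) throws IOException;

    <S extends NioServerChannel, C extends NioChannel> C openChannel(
            InetSocketAddress address, ChannelFactory<S, C> factory) throws IOException;
}
```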
Today we still maintain a version map even if we only index append-only documents,
in other words, documents with auto-generated IDs. We can instead maintain
an unsafe version map that will be swapped to a safe version map only if necessary,
once we see the first document that requires access to the version map. For instance:
* an auto-generated ID retry
* any kind of delete
* a document with a foreign ID (non-auto-generated)
In these cases we forcefully refresh the internal reader and start maintaining
a version map until the safe map hasn't been necessary for two refresh cycles.
Indices / shards that never see an auto-generated ID document will always maintain a version
map, and in the case of a delete / retry in a pure append-only index the version map will be
de-optimized for a short amount of time until we know it's safe again to swap back. This
will also minimize the required refreshes.
Closes #19813
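A hypothetical, greatly simplified sketch of the switching logic described above; the real engine code is considerably more involved:

```java
// Illustration only: tracks whether a (safe) version map currently needs to
// be maintained for an append-only shard.
final class VersionMapMode {
    private boolean safeMapRequired = false; // are we maintaining a version map?
    private int refreshesSinceRequired = 0;  // refresh cycles since it was last needed

    // Called for auto-generated ID retries, deletes, and documents with
    // foreign (non-auto-generated) IDs. The real engine also refreshes the
    // internal reader at this point.
    void onOperationRequiringVersionMap() {
        safeMapRequired = true;
        refreshesSinceRequired = 0;
    }

    void onRefresh() {
        if (safeMapRequired && ++refreshesSinceRequired >= 2) {
            safeMapRequired = false; // safe to swap back to the unsafe map
        }
    }

    boolean maintainsVersionMap() {
        return safeMapRequired;
    }
}
```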
I've seen several cases where match_all queries were being used inside percolator queries,
because these queries were generated by other systems.
Extracting these queries will allow the percolator, at query time in a filter context,
to skip over these queries without parsing them or validating that they actually
match the document being percolated.
We document that users can set custom service names on Windows. Alas,
the functionality does not work. This commit fixes the issue by passing
the environment variable SERVICE_ID as the service name, otherwise
defaulting to elasticsearch-service-x64.
Relates #25255
When an ESSelector is created, an underlying nio selector is opened. This
selector is closed by the event loop after close has been signalled by
another thread.
However, there is a possibility that an ESSelector is created and some
exception in the startup process prevents it from ever being started
(close will still be called, though). This allows the selector to leak.
This commit addresses this issue by having the signalling thread close
the selector if the event loop is not running when close is signalled.
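A hypothetical simplified sketch of the closing responsibility described here (not the actual ESSelector code, which coordinates the two threads more carefully):

```java
import java.io.IOException;
import java.nio.channels.Selector;

// Illustration only: whoever signals close must close the raw selector
// itself if the event loop never got the chance to run.
final class SelectorWrapper {
    private final Selector selector = Selector.open();
    private volatile boolean eventLoopRunning = false;
    private volatile boolean closePending = false;

    SelectorWrapper() throws IOException {
    }

    void markEventLoopStarted() {
        eventLoopRunning = true;
    }

    void close() throws IOException {
        closePending = true;
        if (eventLoopRunning) {
            selector.wakeup();  // the event loop observes closePending and closes the selector
        } else {
            selector.close();   // startup failed before the loop ran; close here to avoid a leak
        }
    }
}
```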
Allowing `_doc` as a type will make the transition to 7.0 smoother for users,
since the index APIs will be `PUT index/_doc/id` and `POST index/_doc`.
This also moves most of the documentation to `_doc` as a type name.
Closes #27750
Closes #27751
This commit clarifies that we recommend using supported LTS versions
of Java as opposed to supporting a minimum version and any version above
that.
Relates #27795
* Fixes ByteSizeValue to serialise correctly
This commit makes a few fixes to ByteSizeValue to make it possible to perform round-trip serialisation:
* Changes wire serialisation to use ZLong methods instead of VLong methods. This is needed because the value `-1` is accepted, but previously a supplied `-1` could not be serialised using the wire protocol (see the encoding sketch after this list).
* Limits the supplied size to be no more than Long.MAX_VALUE when converted to bytes. Previously, values greater than Long.MAX_VALUE bytes were accepted but silently interpreted as Long.MAX_VALUE bytes rather than raising an error, so the user had no idea the value was not being used the way they had intended. I consider this a bug and so fine to include this fix in a minor version, but I am open to other points of view.
* Adds a `getStringRep()` method that can be used when serialising the value to JSON. This will print the bytes value if the size is positive, `"0"` if the size is `0`, and `"-1"` if the size is `-1`.
* Adds logic to detect fractional values when parsing from a String and emits a deprecation warning in this case.
* Modifies hashCode and equals methods to work with long values rather than doubles so they don't run into precision problems when dealing with large values. Prior to this change the equals method would not detect small differences in the values (e.g. 1-1000 bytes ranges) if the actual values were very large (e.g. PBs). This was due to the values being in the order of 10^18 but doubles only maintaining a precision of ~10^15.
Closes #27568
* Fix bytes settings default value to not use fractional values
* Fixes test
* Addresses review comments
* Modifies parsing to preserve unit
This should be bwc since, in the case that the input is fractional, it reverts to the old method of parsing it to the bytes value.
* Addresses more review comments
* Fixes tests
* Temporarily changes version check to 7.0.0
This will be changed to 6.2 when the fix has been backported
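A small sketch of why the switch from VLong to ZLong matters: an unsigned variable-length encoding has no direct representation for `-1`, while zig-zag encoding (the usual idea behind a zLong-style encoding) maps small negative numbers to small non-negative codes. The class and method names below are illustrative, not the actual StreamOutput API:

```java
public final class ZigZag {
    // -1 -> 1, 0 -> 0, 1 -> 2, -2 -> 3, ... so -1 becomes a small unsigned
    // value that a variable-length writer can handle in a single byte.
    static long encode(long value) {
        return (value << 1) ^ (value >> 63);
    }

    static long decode(long encoded) {
        return (encoded >>> 1) ^ -(encoded & 1);
    }

    public static void main(String[] args) {
        assert encode(-1L) == 1L;
        assert decode(encode(-1L)) == -1L;
    }
}
```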
The CountedBitSet can automatically release its internal bitsets when
all bits are set to reduce memory usage. This structure can work well
for sequence numbers as these numbers are likely to form contiguous
ranges. This commit replaces FixedBitSet with CountedBitSet in
LocalCheckpointTracker.
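A hypothetical simplified sketch of the idea (not the actual CountedBitSet): once every bit in the range is set, only that fact needs to be kept and the backing bit storage can be released:

```java
import java.util.BitSet;

// Illustration only: a fixed-size bit set that frees its backing storage
// once all bits are set, which suits contiguous sequence-number ranges.
final class ReleasingBitSet {
    private final int numBits;
    private int setBits;      // how many bits are currently set
    private BitSet bits;      // null once all bits are set

    ReleasingBitSet(int numBits) {
        this.numBits = numBits;
        this.bits = new BitSet(numBits);
    }

    void set(int index) {
        if (bits == null) {
            return; // range is already fully set
        }
        if (bits.get(index) == false) {
            bits.set(index);
            setBits++;
            if (setBits == numBits) {
                bits = null; // all bits set: release the internal storage
            }
        }
    }

    boolean get(int index) {
        return bits == null || bits.get(index);
    }
}
```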
The test testWithRandomException was not updated according to the
latest translog policy. The method setTranslogGenerationOfLastCommit should
be called before setMinTranslogGenerationForRecovery whenever the latter is called.
Relates #27606
We need to keep index commits and translog operations up to the current
global checkpoint to allow us to throw away unsafe operations and
increase the operation-based recovery chance. This is achieved by a new
index deletion policy.
Relates #10708
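A hedged sketch of what such a policy can look like on top of Lucene's IndexDeletionPolicy; the "max_seq_no" commit user-data key and the globalCheckpointSupplier are assumptions for illustration, not the actual Elasticsearch implementation:

```java
import java.io.IOException;
import java.util.List;
import java.util.function.LongSupplier;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

// Illustration only: keep the newest commit whose max sequence number is at
// or below the current global checkpoint, plus every later commit, and
// delete anything older.
final class KeepUpToGlobalCheckpointPolicy extends IndexDeletionPolicy {
    private final LongSupplier globalCheckpointSupplier;

    KeepUpToGlobalCheckpointPolicy(LongSupplier globalCheckpointSupplier) {
        this.globalCheckpointSupplier = globalCheckpointSupplier;
    }

    @Override
    public void onInit(List<? extends IndexCommit> commits) throws IOException {
        onCommit(commits);
    }

    @Override
    public void onCommit(List<? extends IndexCommit> commits) throws IOException {
        final long globalCheckpoint = globalCheckpointSupplier.getAsLong();
        int keepFrom = 0;
        for (int i = commits.size() - 1; i >= 0; i--) {
            // assumes each commit records its max sequence number in user data
            final long maxSeqNo =
                Long.parseLong(commits.get(i).getUserData().get("max_seq_no"));
            if (maxSeqNo <= globalCheckpoint) {
                keepFrom = i; // newest commit fully covered by the global checkpoint
                break;
            }
        }
        for (int i = 0; i < keepFrom; i++) {
            commits.get(i).delete(); // older commits are no longer needed for recovery
        }
    }
}
```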
If you assert that a pattern of files exists but it matches more than
one file, the "assert this file exists" code failed with a misleading
error message. This commit tests whether the pattern resolved to multiple files and
prints a better error message if it did.
When running the release tests, we set build.snapshot to false and this
causes all version numbers to not have "-SNAPSHOT". This is true even
for the tips of the branches (e.g., currently 5.6.6 on the 5.6
branch). Yet, if we do not also set snapshot to false in the version logic, then we would still be
trying to find artifacts with "-SNAPSHOT" appended, which would not have
been built since build.snapshot is false. To fix this, we have to push
build.snapshot into the version logic.
Relates #27778
This commit reorganizes some of the content in the configuring
Elasticsearch section of the docs. The changes are:
- move JVM options out of system configuration into configuring
Elasticsearch
- move JVM options to its own page of the docs
- move configuring the heap to important Elasticsearch settings
- move configuring the heap to its own page of the docs
- move all important settings to individual pages in the docs
- remove bootstrap.memory_lock from important settings; this is covered
in the swap section of system configuration
Relates #27755
This commit changes the RestoreService so that it now fails the snapshot
restore if one of the shards to restore has failed to be allocated. It also adds
a new RestoreInProgressAllocationDecider that forbids such shards to be
allocated again. This way, when a restore is impossible or has failed too many
times, the user is forced to take a manual action (like deleting the index
whose shards failed) in order to try to restore it again.
This behaviour has been implemented because when the allocation of a
shard has been retried too many times, the MaxRetryDecider is engaged
to prevent any future allocation of the failed shard. If this happened while
restoring a snapshot, the restore hung and was never completed because
it stayed around waiting for the shards to be assigned (which would never happen).
It also blocked future attempts to restore the snapshot again. With this commit,
the restore does not hang and is marked as failed, leaving failed shards
around for investigation.
This is the second part of the #26865 issue.
Closes #26865
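A hypothetical, heavily simplified sketch of the decider behaviour described above; the types below are made up for illustration and are not the actual RestoreInProgressAllocationDecider or the AllocationDecider API:

```java
// Illustration only: a shard that is being restored from a snapshot and
// whose restore has already failed must not be allocated again; the user
// has to intervene and retry the restore explicitly.
enum Decision { YES, NO }

interface RestoreState {
    boolean isRestoringFromSnapshot(String shardId);
    boolean restoreFailed(String shardId);
}

final class RestoreAwareDecider {
    Decision canAllocate(String shardId, RestoreState restoreState) {
        if (restoreState.isRestoringFromSnapshot(shardId) && restoreState.restoreFailed(shardId)) {
            return Decision.NO; // lets the restore fail instead of hanging forever
        }
        return Decision.YES;
    }
}
```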
This pull request changes the S3BlobContainer.blobExists() method implementation
to make it use the AmazonS3.doesObjectExist() method instead of
AmazonS3.getObjectMetadata(). The AmazonS3 implementation takes care of
catching any thrown AmazonS3Exception, comparing its response code with 404, and
returning false (object does not exist) or letting the exception propagate.
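For reference, a sketch of the two approaches, assuming the AWS SDK for Java v1 client; the bucket and key parameters are placeholders:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AmazonS3Exception;

final class BlobExistenceSketch {

    // New approach: the SDK performs the 404-vs-error handling internally.
    static boolean blobExists(AmazonS3 client, String bucket, String key) {
        return client.doesObjectExist(bucket, key);
    }

    // Old approach: catch the exception and inspect the status code ourselves.
    static boolean blobExistsViaMetadata(AmazonS3 client, String bucket, String key) {
        try {
            client.getObjectMetadata(bucket, key);
            return true;
        } catch (AmazonS3Exception e) {
            if (e.getStatusCode() == 404) {
                return false; // object does not exist
            }
            throw e; // any other failure is propagated
        }
    }
}
```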
We have tests that manually unpackage the RPM and Debian package
distributions and start a cluster manually (not from the service) and
run a basic suite of integration tests against them. This is problematic
because it is not how the packages are intended to be used (instead,
they are intended to be installed using the package installation tools,
and started as services) and so violates assumptions that we make about
directory paths. This commit removes these integration tests, instead
relying on the packaging tests to ensure the packages are not
broken. Additionally, we add a sanity check that the package
distributions can be unpackaged. Finally, with this change we can remove
some leniency from elasticsearch-env about checking for the existence of
the environment file; that leniency was there solely for these
integration tests.
Relates #27725
Previously we would unidle a primary shard during recovery in case the
recovery target would miss a background global checkpoint sync. However,
the background global checkpoint syncs are no longer tied to the primary
shard falling idle and so this unidling is no longer needed.
Relates #27757
This removes special casing for documents without a sequence ID.
This code is complex enough with seq IDs that we should clean things up
when we can, and we don't support 5.x indexing in 7.x anymore.