We had been obtaining JDK distributions from download.java.net. This
site is now presenting a certificate that does not list
download.java.net as a SAN, so with host verification enabled the build
cannot use this site. This commit switches to download.oracle.com,
which appears to be an alternative name for the same CNAME,
download.oracle.com.edgekey.net. This allows our builds to resume.
A stuck peer recovery in #40913 revealed that we retry indefinitely on
new cluster states if replaying translog operations fails with a mapper
exception. We should not wait and retry if the mapping on the target is
as recent as the mapping that the primary used to index the replayed
operations.
Relates #40913
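A minimal, self-contained sketch of the new retry rule; the names here are hypothetical, not the actual recovery code:

```java
// Hypothetical sketch: a mapper exception while replaying translog operations
// is only worth retrying while the target's mapping version lags behind the
// mapping version the primary used to index those operations.
final class TranslogReplayRetrySketch {
    static boolean shouldRetryOnNewClusterState(long primaryMappingVersion, long targetMappingVersion) {
        // If the target's mapping is at least as recent as the primary's,
        // waiting for another cluster state cannot resolve the failure, so the
        // recovery should fail instead of retrying indefinitely.
        return targetMappingVersion < primaryMappingVersion;
    }
}
```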
Since the allowed minimum value for min_children in the has_child query
will change from 0 to 1 in the next major version, this PR adds a
deprecation warning for these cases.
Closes #41548
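For illustration, a search that would now draw the deprecation warning, built with the low-level REST client (the index and relation names are placeholders):

```java
import org.elasticsearch.client.Request;

class HasChildDeprecationExample {
    static Request deprecatedMinChildrenSearch() {
        Request search = new Request("GET", "/my-index/_search");
        search.setJsonEntity(
            "{ \"query\": { \"has_child\": {"
            + " \"type\": \"comment\","
            + " \"min_children\": 0," // deprecated: 1 becomes the allowed minimum in the next major version
            + " \"query\": { \"match_all\": {} }"
            + " } } }");
        return search;
    }
}
```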
This commit refactors the GeoHashUtils class into a new Geohash utility class located in the ES geo library. The intent is not only to better control which geo methods are whitelisted for painless scripting but also to clean up the geo utility API in general.
* Clarify that peer recovery settings apply to shard relocation
* Fix awkward wording of 1st sentence
* [DOCS] Remove snapshot recovery reference.
Call out link to [[cat-recovery]].
Separate expert settings.
Motivated by the slow snapshot deletes reported in e.g. #39656 and the fact that these are likely a contributing factor to repositories accumulating stale files over time when deletes fail to finish in time and are interrupted before they can complete.
* Makes snapshot deletion async and parallelizes some steps of the delete process that can safely run concurrently via the snapshot thread pool (see the sketch after this list)
* I did not take the biggest potential speedup step here and parallelize the shard file deletion because that's probably better handled by moving to bulk deletes where possible (and can still be parallelized via the snapshot pool where it isn't). Also, I wanted to keep the size of the PR manageable.
* See https://github.com/elastic/elasticsearch/pull/39656#issuecomment-470492106
* Also, as a side effect this gives the `SnapshotResiliencyTests` a little more coverage for master failover scenarios (parallel access to a blob store repository during deletes is now possible because a delete is no longer a single task)
* By adding a `ThreadPool` reference to the repository this also lays the groundwork for parallelizing shard snapshot uploads to improve the situation reported in #39657
* Thanks to #39793, dynamic mapping updates don't contain blocking operations anymore, so we don't have to manually put the mapping in this test and can keep it a little simpler
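A rough, self-contained sketch of the shape of this change, using plain JDK executors as a stand-in for the snapshot thread pool (all names are hypothetical):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class ParallelDeleteSketch {
    interface DeleteStep {
        void run() throws Exception;
    }

    // Fan independent delete steps out to the snapshot executor and signal
    // completion once all of them have finished, instead of running them
    // serially on the calling thread.
    static void deleteAsync(List<DeleteStep> independentSteps, Runnable onDone) {
        ExecutorService snapshotPool = Executors.newFixedThreadPool(4); // stand-in for the SNAPSHOT pool
        CountDownLatch remaining = new CountDownLatch(independentSteps.size());
        for (DeleteStep step : independentSteps) {
            snapshotPool.execute(() -> {
                try {
                    step.run(); // e.g. deleting the blobs of one stale index
                } catch (Exception e) {
                    // the real code collects failures and reports them to a listener
                } finally {
                    remaining.countDown();
                }
            });
        }
        snapshotPool.execute(() -> {
            try {
                remaining.await();
                onDone.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                snapshotPool.shutdown();
            }
        });
    }
}
```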
The subfields of a search_as_you_type field are prefixed with the name of
their root field. However they should use the full path of the root field
rather than just its name, since these fields can appear in a
multi-`fields` definition or under an object field.
Since this field type has not been released yet, this should be
considered a non-issue.
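To illustrate with a hypothetical mapping: for a search_as_you_type field declared under an object field, the generated subfields now carry the full path:

```java
import org.elasticsearch.client.Request;

class SearchAsYouTypePathExample {
    static Request createIndexWithNestedField() {
        Request createIndex = new Request("PUT", "/my-index");
        createIndex.setJsonEntity(
            "{ \"mappings\": { \"properties\": { \"foo\": { \"properties\": {"
            + " \"title\": { \"type\": \"search_as_you_type\" }"
            + " } } } } }");
        // Before this fix the shadow subfields were registered under the bare
        // field name (e.g. "title._2gram"); with it they use the full path
        // (e.g. "foo.title._2gram").
        return createIndex;
    }
}
```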
* Add Restore Operation to SnapshotResiliencyTests
* Expand the successful snapshot test case to also include restoring the snapshot
* Add indexing of documents as well to be able to meaningfully verify the restore
* This is part of the larger effort to test eventually consistent blob stores in #39504
Added integrations for a couple of frameworks.
Removed community clients where the last commit was more than three
years ago. Also added the official go client link and removed the
official groovy client, as it is outdated.
Under random seed 4304ED44CB755610 the generated byte pattern causes
BC-FIPS to throw
java.io.IOException: DER length more than 4 bytes: 101
rather than simply returning an empty list (as it does for most random
values).
Backport of: #40939
Thanks to #34071, there is enough information in field caps to infer
the table structure and thus use the same API consistently across the
IndexResolver.
(cherry picked from commit f99946943a3350206b6bca774b2f060f41a787b3)
This is related to #27260. Currently the SSLDriver allocates a
dedicated network write buffer and encrypts the data into that buffer
one buffer at a time, which requires constantly switching between
encrypting and flushing. This commit adds a dedicated outbound buffer
for SSL operations that will internally allocate new packet-sized
buffers as they are needed (for writing encrypted data). This allows us
to fully encrypt an operation before writing it to the network.
Eventually it can be hooked up to buffer recycling.
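A simplified sketch of the idea (the names are illustrative, not the actual SSLDriver API): wrap the whole operation into freshly allocated packet-sized buffers first, then hand the fully encrypted buffers to the network layer:

```java
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

final class OutboundEncryptionSketch {
    static List<ByteBuffer> encryptFully(SSLEngine engine, ByteBuffer plaintext) throws SSLException {
        List<ByteBuffer> encrypted = new ArrayList<>();
        while (plaintext.hasRemaining()) {
            // allocate a new packet-sized buffer per wrap, as the outbound buffer does internally
            ByteBuffer packet = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
            SSLEngineResult result = engine.wrap(plaintext, packet);
            if (result.getStatus() != SSLEngineResult.Status.OK) {
                throw new SSLException("unexpected wrap status: " + result.getStatus());
            }
            packet.flip();
            encrypted.add(packet);
        }
        return encrypted; // the whole operation is encrypted before any network write happens
    }
}
```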
TLS 1.3 changes to the SSLEngine introduced a scenario where an UNWRAP
call during a handshake can consume a close_notify alert without
throwing an exception. This means that we continue down a code path
where we assert that we are still in handshaking mode. Transitioning to
closed from handshaking is a valid scenario. This commit removes this
assertion.
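A minimal sketch of the scenario, assuming a raw `SSLEngine` (the surrounding names are hypothetical):

```java
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;
import java.nio.ByteBuffer;

final class HandshakeUnwrapSketch {
    // Returns true if the unwrap consumed a close_notify, which under TLS 1.3
    // can happen mid-handshake without an exception being thrown.
    static boolean unwrapDuringHandshake(SSLEngine engine, ByteBuffer network, ByteBuffer application)
            throws SSLException {
        SSLEngineResult result = engine.unwrap(network, application);
        if (result.getStatus() == SSLEngineResult.Status.CLOSED) {
            // Valid transition: handshaking -> closed. The old code asserted
            // here that we were still handshaking, which this commit removes.
            return true;
        }
        return false;
    }
}
```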
* The check doesn't add much, if anything, in practice, since the S3 repository is eventually consistent and we only log the non-existence of a blob anyway
* We don't do the check on writes for this very reason, and documented it as such
* Removing the check saves one API call per single delete, speeding up the deletion process and lowering costs
* [ML] Adds progress reporting for transforms
* fixing after master merge
* Addressing PR comments
* removing unused imports
* Adjusting afterKey handling and percentage to be 100*
* Making sure it is a linked hashmap for serialization
* removing unused import
* addressing PR comments
* removing unused import
* simplifying code, only storing total docs and decrementing
* adjusting for rewrite
* removing initial progress gathering from executor
Today the `_field_caps` API returns the list of indices where a field
is present only if this field has different types within the requested indices.
However if the request targets an index pattern (or an alias, or both...) there
is no way to infer the indices if the response contains only fields that have
the same type in all indices. This commit changes the response to always return
the list of indices in the response. It also adds a way to retrieve unmapped fields
in a specific per-field section called `unmapped`. This section is created for each field
that is present in some indices but not all, if the parameter `include_unmapped` is set to
true in the request (defaults to false).
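For example, using the low-level REST client against a placeholder index pattern:

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class FieldCapsUnmappedExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/logs-*/_field_caps");
            request.addParameter("fields", "message");
            request.addParameter("include_unmapped", "true"); // opt in to the per-field "unmapped" section
            Response response = client.performRequest(request);
            // The response now always includes the list of indices, plus an
            // "unmapped" entry per field for the indices where it is missing.
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```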
* Introduce Delegating ActionListener Wrappers
* Dry up use cases of ActionListener that simply pass through the response or exception to another listener
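A generic sketch of the delegating-wrapper idea (not the exact Elasticsearch API): transform the response on success and pass failures straight through to the delegate:

```java
import java.util.function.Function;

interface ActionListener<T> {
    void onResponse(T response);
    void onFailure(Exception e);

    // Wrap a listener so that successful responses are mapped to another type
    // while failures are forwarded unchanged, removing the hand-written
    // pass-through onFailure implementations.
    static <T, R> ActionListener<T> map(ActionListener<R> delegate, Function<T, R> mapper) {
        return new ActionListener<T>() {
            @Override
            public void onResponse(T response) {
                delegate.onResponse(mapper.apply(response));
            }

            @Override
            public void onFailure(Exception e) {
                delegate.onFailure(e); // pure pass-through
            }
        };
    }
}
```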
* Name Snapshot Data Blobs by UUID
* There is no functional reason why we need incremental naming for these files but
* As explained in #38941 it is a possible source of corrupting the repository
* It wastes API calls for the list operation
* It is just needless complication
* Since we store the exact names of the data blobs in all the metadata anyway, we can make this change without any BwC considerations (see the sketch after this list)
* Even in the worst-case scenario of a downgrade, the functionality would continue working, since the incremental names wouldn't conflict with the UUIDs and the number parsing for finding the next incremental name suppresses the exception when encountering a non-numeric value after the double underscore prefix
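A tiny sketch of the naming change using the standard-library UUID class (the real code uses Elasticsearch's own UUID helper):

```java
import java.util.UUID;

final class BlobNameSketch {
    static String newDataBlobName() {
        // A random, unique name: no list call to find the next "__N", no
        // numeric parsing, and no clash with legacy incremental names.
        return "__" + UUID.randomUUID();
    }
}
```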
In order to support empty action metadata in the first msearch item,
we need to remove support for prepending the msearch request body with
an empty line, since that prevents us from parsing the empty line as
action metadata for the first search item.
Relates to #41011
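For illustration, the shape of an msearch body after this change (the index and queries are placeholders):

```java
class MsearchBodyExample {
    static final String BODY =
          "\n"                                              // empty action metadata for the first item (now allowed)
        + "{\"query\":{\"match_all\":{}}}\n"                // body of the first item
        + "{\"index\":\"logs\"}\n"                          // action metadata for the second item
        + "{\"query\":{\"term\":{\"level\":\"error\"}}}\n"; // body of the second item
}
```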
The test is testing the java.time API and fails if it hits a daylight
saving time change. Java time has the right implementation and we don't
need to test this. More details on how the test was affected by the DST
change are in this comment.
Closes #39617
Backport of: #41493
The Has Privileges API allows one to tap into the authorization process, validating
privileges without actually running the operations to be authorized. This commit
fixes a bug in which the Has Privileges API returned spurious results when checking
for index privileges over restricted indices (currently .security, .security-6,
.security-7). The actual authorization process is not affected by the bug.
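A hedged illustration of the kind of check affected, via the low-level REST client against the `_has_privileges` endpoint (the index and privilege names are examples only):

```java
import org.elasticsearch.client.Request;

class HasPrivilegesExample {
    static Request hasPrivilegesOverRestrictedIndex() {
        Request request = new Request("POST", "/_security/user/_has_privileges");
        request.setJsonEntity(
            "{ \"index\": [ { \"names\": [ \".security\" ], \"privileges\": [ \"read\" ] } ] }");
        // Before this fix, the response could spuriously report privileges
        // over restricted indices that the user does not effectively have.
        return request;
    }
}
```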
hamcrest has some improvements in newer versions, like FileMatchers,
that make assertions about file existence cleaner. This commit upgrades
to the latest version of hamcrest so we can start using the new and
improved matchers.
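For example, the kind of assertion the upgrade enables (the path here is hypothetical):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.io.FileMatchers.anExistingFile;

import java.io.File;

class FileMatcherExample {
    void assertConfigExists() {
        File config = new File("config/elasticsearch.yml"); // hypothetical path
        assertThat(config, anExistingFile()); // clearer than assertTrue(config.exists())
    }
}
```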
* Due to #40866, one of the two parallel bulk requests can randomly be
rejected outright when the write queue is already full. We can catch
this situation and ignore it, since we can still get the rejection for
the dynamic mapping update for the other request, and it's somewhat rare
to run into this anyway.
* Closes #41363