The warning header used by Elasticsearch for delivering deprecation
warnings has a specific format (RFC 7234, section 5.5). The format
specifies that the warning header should be of the form
warn-code warn-agent warn-text [warn-date]
Here, the warn-code is a three-digit code which communicates various
meanings. The warn-agent is a string used to identify the source of the
warning (either a host:port combination, or some other identifier). The
warn-text is a quoted string which conveys the semantic meaning of the
warning. The warn-date is an optional quoted date that can be in a few
different formats.
This commit corrects the warning header within Elasticsearch to follow
this specification. We use the warn-code 299 which means a
"miscellaneous persistent warning." For the warn-agent, we use the
version of Elasticsearch that produced the warning. The warn-text is
unchanged from what we deliver today, but is wrapped in quotes as
specified (this is important because, today, multiple warnings cannot be
split on commas to obtain the individual warnings, as the warnings
might themselves contain commas). For the
warn-date, we use the RFC 1123 format.
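For illustration, a header in the corrected format might look like the following (the version, message, and date shown here are made up):

```
Warning: 299 Elasticsearch-5.3.0 "[deprecated_setting] this setting is deprecated and will be removed" "Tue, 28 Feb 2017 09:15:00 GMT"
```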
Relates #23275
We recently added parsing code to parse suggester responses into Java API objects. This was done using a switch based on the type of the returned suggestion. We can now replace the switch with NamedXContentRegistry, which will also be used for aggs parsing.
Previously, when multiple reduces occurred for the terms aggregation, we would add up the errors for the aggregations but would not take into account the errors that had already been calculated for the previous reduce phases.
This change corrects that by adding the previously created errors to the new error value.
Closes #23286
This test makes little sense when sent from the REST layer, as WrapperQueryBuilder is supposed to be used from the Java API. Also, providing the inner query as a base64 string will work only for string formats and break for binary formats like SMILE and CBOR, which doesn't play well with randomizing the content type in our REST tests.
This allows setting the content-type together with the body itself. At the moment it is always json, but this change makes it easier to randomize it later.
Previously this code was duplicated across the 3 different top docs variants
we have. It also had no real unit test (one that tests with holes in the results),
which allowed a sneaky bug where the comparison used `result.size()` vs `results.size()`,
causing several NPEs downstream. This change adds a static method to fill the top docs
that is shared across all variants and adds a unit test that would have caught the issue
very quickly.
Closes #19356
Closes #23357
This commit adds support for an info() method to the High Level REST
client that returns the cluster information usually obtained by performing a
`GET hostname:9200` request.
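As a rough usage sketch (the client construction and the response accessors shown here are assumptions and may differ from the exact API added by this commit):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.main.MainResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ClusterInfoExample {
    public static void main(String[] args) throws Exception {
        try (RestClient lowLevelClient = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            // The high level client wraps an existing low level RestClient instance.
            RestHighLevelClient client = new RestHighLevelClient(lowLevelClient);
            // Equivalent to `GET /` against the cluster: returns cluster name, node name, version, etc.
            MainResponse info = client.info();
            System.out.println(info.getClusterName() + " " + info.getVersion());
        }
    }
}
```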
Console.readText may return null in certain cases. This commit fixes a
bug in Terminal.promptYesNo which assumed a non-null return value. It
also adds a test for this, and modifies mock terminal to be able to
handle null input values.
This commit fixes the reproduce line output when the vagrant packagingTest
fails. Before, only `gradle packagingTest` would be output, but the
seed and list of versions were swallowed by groovy with an ancillary
failure (due to the `+` being on the wrong line for a string
continuation). With the new reproduce line, it is now output next to
the task right after failure, contains the actual task (specific to the
box that fails), and contains the seed. It also no longer contains the
upgrade versions list, as the seed is used to determine which of those
to use, and the same file would be read when testing a failure on a
particular git commit. Finally, this also ties bats test setup directly
to packagingTest, instead of to the vagrant up command.
The dependencyLicenses check has the ability to map multiple jar files
to the same license file. However, netty was not taking advantage of
this, and had duplicate copies of its license/notice files for each jar.
This commit reduces the copies to one and uses the mapping feature.
This commit removes an unnecessary test that ensures that if
wait_for_active_shards cannot be fulfilled on index creation, the
response returns shardsAcknowledged=false. However, this is already
tested in WaitForActiveShardsIT and it would improve the speed of the
test runs to get rid of any unnecessary tests, especially those that
depend on timeouts.
The IndexShardOperationsLock has a mechanism to delay operations if there is currently a block on the lock. These
delayed operations are executed by a different thread once the block is released. When that different
thread executes the operations, the ThreadContext is that of the thread that was blocking operations. In order to
preserve the ThreadContext, we need to store it and wrap the listener when the operation is delayed.
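A minimal sketch of the idea, assuming a capture/restore API along the lines of ThreadContext.newStoredContext() (this is not the actual IndexShardOperationsLock code, and the exact method signatures vary between versions):

```java
import java.util.ArrayList;
import java.util.List;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

// Hypothetical queue of delayed operations that preserves the submitter's context.
class DelayedOperations {
    private final ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
    private final List<Runnable> delayed = new ArrayList<>();

    synchronized void delay(Runnable operation) {
        // Capture the headers/context of the thread that submitted the operation.
        ThreadContext.StoredContext captured = threadContext.newStoredContext();
        delayed.add(() -> {
            // Re-establish the captured context on whichever thread runs the operation
            // later (a real implementation would also restore that thread's own
            // previous context afterwards).
            captured.restore();
            operation.run();
        });
    }

    synchronized void onBlockReleased() {
        // Typically invoked by a different thread than the ones that called delay().
        delayed.forEach(Runnable::run);
        delayed.clear();
    }
}
```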
The REST Client is split into 2 parts:
* Low level
* High level
The High level client has a main common section and the document delete API documentation as a start.
NamedXContentRegistry will be used by the high level REST client to parse aggregation responses, and any section whose class type depends on a json key. There are currently no entries in the standard registry, but soon aggs and most likely suggesters will be added. Also, it is possible for subclasses to provide additional named xcontent parsers.
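As a hedged sketch of the registration pattern (the `my_metric` key and the `MyMetricAggregation` class are made up for illustration, and the exact Entry constructor signature is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.search.aggregations.Aggregation;

// Hypothetical registration: the category class is the common supertype, the
// ParseField is the json key that identifies the concrete type, and the parser
// turns the content at that key into the concrete object.
List<NamedXContentRegistry.Entry> entries = new ArrayList<>();
entries.add(new NamedXContentRegistry.Entry(
        Aggregation.class,                        // category keyed on in the response
        new ParseField("my_metric"),              // json type key
        parser -> MyMetricAggregation.fromXContent(parser)));  // made-up parser for illustration
NamedXContentRegistry registry = new NamedXContentRegistry(entries);
```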
Load the geoip database the first time a pipeline gets created that has a geoip processor.
This saves memory (measured ~150MB for the city db) in cases when the plugin is installed, but not used.
This change exposes the new Lucene graph based word delimiter token filter in the analysis filters.
Unlike `word_delimiter`, this token filter, named `word_delimiter_graph`, correctly handles multi-term expansion at query time.
Closes #23104
This is fallout from #23297. That commit wrapped
`InstanceProfileCredentialsProvider` to ensure that the `getCredentials`
and `refresh` methods had privileged access. However, it looks like
there was a test ensuring that `buildCredentials` returned the correct
class type. This commit adjusts that test to check that the correct
wrapper is returned.
This commit adds a boundary_scanner property to the search highlight
request so the user can specify different boundary scanners:
* `chars` (default, current behavior)
* `word` Use a WordBreakIterator
* `sentence` Use a SentenceBreakIterator
This commit also adds `boundary_scanner_locale` to define which locale
should be used when scanning the text.
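A hedged Java sketch of how this might be set via the highlight builder (the boundaryScannerType/boundaryScannerLocale method names mirror the REST parameters and are assumptions):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;

// Assumed builder methods corresponding to boundary_scanner and boundary_scanner_locale.
SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.matchQuery("body", "elasticsearch"))
        .highlighter(new HighlightBuilder()
                .field("body")
                .boundaryScannerType("sentence")   // chars | word | sentence
                .boundaryScannerLocale("en-US"));  // locale used by the break iterator
```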
There are two ways to determine the latest index-N blob that contains
the truth of the contents of the repository: (1) list all index-N blobs
and figure out what the latest value of N is, and (2) read the
index.latest blob, which contains the latest value of N explicitly.
Note that the index.latest blob is not written atomically and can be
re-written, as opposed to the index-N blobs which are never re-written
(to create an updated index blob, index-{N+1} is written).
Previously, the latest index-N was determined by first trying to read
the index.latest blob and if that blob was missing (it was deleted
before being re-written and in between deleting it and re-writing it,
the system crashed), then all index-N blobs were listed to pick the
highest N value.
For non-read-only repositories, this could produce race conditions with
the file system. In particular, it is possible that the index.latest
blob is being read in order to serve a read request (e.g. get snapshots)
and while doing so, an attempt is made to delete the index.latest blob
and re-write it in order to finalize a snapshot operation. On some file
systems (e.g. Windows), it is forbidden to delete a file while it is
open for reading by another process/thread.
This commit changes the priority so that figuring out the latest index-N
blob is first done by listing all index-N blobs and determining the
latest N value. If that fails because the repository does not
support listing blobs (e.g. the URL repository), then the index.latest
blob is read. This is safe because in read-only repositories that do
not support listing blobs, the index.latest blob is never deleted and
then re-written, so the aforementioned issue does not arise.
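A sketch of the new resolution order (listIndexBlobNames() and readSnapshotIndexLatestBlob() are hypothetical helpers, not the actual BlobStoreRepository methods):

```java
// Prefer listing the index-N blobs; fall back to index.latest only when the
// repository cannot list blobs at all.
long latestIndexBlobId() {
    try {
        // (1) List all "index-N" blob names and take the highest N.
        return listIndexBlobNames().stream()
                .map(name -> Long.parseLong(name.substring("index-".length())))
                .max(Long::compareTo)
                .orElse(-1L);   // empty repository: no index-N blob has been written yet
    } catch (UnsupportedOperationException e) {
        // (2) Repositories that cannot list blobs (e.g. the URL repository) never
        //     delete and re-write index.latest, so reading it does not race with
        //     snapshot finalization.
        return readSnapshotIndexLatestBlob();
    }
}
```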
The test setup for hdfs is a little complicated for windows, needing to
check if the hdfs fixture can be run at all. This was unfortunately not
updated when the integ tests were reorganized into separate runner and
cluster setups.
This commit sets the initial size of the pipeline handler queue small to
prevent waste if pipelined requests are never sent. Since the queue will
grow quickly if pipelined requests are indeed sent, this should not be
problematic.
Relates #23335
This commit fixes an issue that was missed in #22534.
`AWSCredentialsProvider.getCredentials()` appears to potentially open a
socket connection. This operation needed to be wrapped in `doPrivileged()`.
This should fix issue #23271.
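A sketch of the wrapping pattern, assuming the AWS SDK v1 interface (this hypothetical wrapper is for illustration and is not necessarily the plugin's actual class):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;

// Run the credential calls that may open a socket inside doPrivileged so that
// the plugin's own policy grants apply rather than the caller's.
class PrivilegedCredentialsProvider implements AWSCredentialsProvider {
    private final AWSCredentialsProvider delegate;

    PrivilegedCredentialsProvider(AWSCredentialsProvider delegate) {
        this.delegate = delegate;
    }

    @Override
    public AWSCredentials getCredentials() {
        return AccessController.doPrivileged((PrivilegedAction<AWSCredentials>) delegate::getCredentials);
    }

    @Override
    public void refresh() {
        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
            delegate.refresh();
            return null;
        });
    }
}
```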
When pipelined responses are sent to the pipeline handler for writing,
they are not necessarily written immediately. They must be held in a
priority queue until all responses preceding the given response are
written. This means that when write is invoked on the handler, the
promise that is attached to the write invocation will not necessarily be
the promise associated with the responses that are written while the
queue is drained. To address this, the promise associated with a
pipelined response must be held with the response and then used when the
channel context is actually written to. This issue was introduced when
ensuring that the releasing promise is always chained through on write
calls lest the releasing promise never be invoked. The bug causes many
existing test cases to fail, so no new test cases are needed here.
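A hedged sketch of the idea (this is not the actual pipelining handler; the class and field names are made up):

```java
import java.util.PriorityQueue;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.FullHttpResponse;

// Each held response keeps the promise from its own write() invocation so that
// the right promise is used when the response is finally written in order.
class PipelinedResponse implements Comparable<PipelinedResponse> {
    final int sequence;
    final FullHttpResponse response;
    final ChannelPromise promise;

    PipelinedResponse(int sequence, FullHttpResponse response, ChannelPromise promise) {
        this.sequence = sequence;
        this.response = response;
        this.promise = promise;
    }

    @Override
    public int compareTo(PipelinedResponse other) {
        return Integer.compare(sequence, other.sequence);
    }
}

class PipeliningSketch {
    // Responses that cannot be written yet, ordered by request sequence number.
    private final PriorityQueue<PipelinedResponse> held = new PriorityQueue<>();
    private int nextToWrite;   // sequence number of the response that must go out next

    void write(ChannelHandlerContext ctx, PipelinedResponse response) {
        held.add(response);
        // Drain everything that is next in line, writing each response with the
        // promise that was captured when that particular response arrived.
        while (!held.isEmpty() && held.peek().sequence == nextToWrite) {
            PipelinedResponse next = held.poll();
            ctx.write(next.response, next.promise);
            nextToWrite++;
        }
    }
}
```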
Relates #23317
Previous changes aligned HEAD requests to be consistent with GET
requests to the same endpoint. This commit aligns the REST spec for the
impacted endpoints.
Relates #23313
In order to use Lucene's utilities to merge top docs, the results
need to be passed in a dense array where the index corresponds to the shard index in
the result list. Yet, we were sorting results before merging them just to order them
in the incoming order again for the above mentioned reason. This change removes the
obsolete sort and prevents unnecessary materialization of results.