Currently the task manager is tied to the transport and can only create tasks based on TransportRequests. This commit enables task manager to support tasks created by non-transport services such as the persistent tasks service.
* Add support for fragment_length in the unified highlighter
This commit introduces a new break iterator (a BoundedBreakIterator) designed for the unified highlighter
that is able to limit the size of fragments produced by a generic break iterator like `sentence`.
The `unified` highlighter now supports `boundary_scanner`, which can be set to `words` or `sentence`.
The `sentence` mode will use the bounded break iterator in order to limit the size of the sentence to `fragment_length`.
When sentences bigger than `fragment_length` are produced, this mode will break the sentence at the next word boundary **after**
`fragment_length` is reached.
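The rule can be sketched with the JDK's `BreakIterator` (a minimal illustration of the behavior described above, not the actual BoundedBreakIterator code):

```java
import java.text.BreakIterator;
import java.util.Locale;

public class BoundedFragmentSketch {
    // End offset of the fragment starting at 'start': the next sentence
    // boundary, unless the sentence would exceed fragmentLength characters,
    // in which case we break at the first word boundary *after* the limit.
    static int fragmentEnd(String text, int start, int fragmentLength) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.ROOT);
        sentences.setText(text);
        int sentenceEnd = sentences.following(start);
        if (sentenceEnd == BreakIterator.DONE) {
            sentenceEnd = text.length();
        }
        if (sentenceEnd - start <= fragmentLength) {
            return sentenceEnd; // the sentence fits within fragment_length
        }
        BreakIterator words = BreakIterator.getWordInstance(Locale.ROOT);
        words.setText(text);
        int wordEnd = words.following(start + fragmentLength);
        return wordEnd == BreakIterator.DONE ? text.length() : wordEnd;
    }
}
```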
This commit changes the method for checking the interrupt status of a
thread that is intentionally interrupted during
AdapterActionFutureTests#testInteruption. Namely, we want to check and
clear the interrupt status before joining on the interrupting thread. If
we do not clear the status, when we lose a race where the interrupting
thread is not yet finished, an interrupted exception will be thrown when
we try to join on it. Clearing the interrupted status on the main thread
addresses this issue.
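The essence of the fix as a self-contained sketch (the busy-wait stands in for the blocking call under test):

```java
public class InterruptStatusSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread main = Thread.currentThread();
        Thread interrupter = new Thread(main::interrupt);
        interrupter.start();
        // Stand-in for the call that is expected to be interrupted.
        while (main.isInterrupted() == false) {
            Thread.yield();
        }
        // Thread.interrupted() checks *and clears* the interrupt status. If
        // we joined while the status was still set, join() itself would
        // throw InterruptedException, which is exactly the race described.
        boolean wasInterrupted = Thread.interrupted();
        interrupter.join();
        System.out.println("interrupted = " + wasInterrupted);
    }
}
```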
This commit moves the checkstyle rule of max line length from 140
characters to 100 characters. We whitelist all existing violations and
will address them in follow-ups.
Relates #23623
In Gradle 3.4, the buildSrc plugin seems to be packaged into a jar before it is accessed by the rest of the build, and the signatures file for the third-party audit task can no longer be accessed, as
getClass().getResource('/forbidden/third-party-audit.txt') then points to a file entry inside a JAR, which cannot be loaded directly as a File object. This commit changes the third-party audit task to pass the content of the signatures file as a String instead.
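A minimal sketch of reading a classpath resource as a String, which works whether the resource is on disk or inside a JAR (uses Java 9's `InputStream#readAllBytes` for brevity):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class ResourceAsString {
    // Streams can be opened against JAR entries; File objects cannot.
    static String readResource(String path) {
        try (InputStream in = ResourceAsString.class.getResourceAsStream(path)) {
            if (in == null) {
                throw new IllegalStateException("resource not found: " + path);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```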
When a thread blocking on an adapter action future is interrupted, we
throw an illegal state exception. This is documented, but it is rude to
not restore the interrupt flag. This commit restores the interrupt flag
in this situation, and adds a test.
Relates #23618
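The pattern looks roughly like this (a sketch, not the actual AdapterActionFuture code):

```java
import java.util.concurrent.CountDownLatch;

public class RestoreInterruptSketch {
    static void blockUntilDone(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            // Restore the interrupt flag before translating the checked
            // exception into the documented IllegalStateException, so that
            // callers can still observe that an interrupt occurred.
            Thread.currentThread().interrupt();
            throw new IllegalStateException("future was interrupted", e);
        }
    }
}
```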
The reindex API is mature now, and we will work to maintain backwards
compatibility in accordance with our backwards compatibility
policy. This commit unmarks the reindex API as experimental.
Relates #23621
In cases where the user specifies only the `text` option on the top level
suggest element (either via REST or the java api), this gets transferred to the
`text` property in the SuggestionSearchContext. CompletionSuggestionContext
currently requires prefix or regex to be specified, and errors otherwise. We should
use the global `text` property as a fallback if neither prefix nor regex is provided.
Closes #23340
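The fallback can be sketched as follows (a hypothetical helper; the actual field handling in CompletionSuggestionContext may differ):

```java
public class SuggestTextFallback {
    // Resolve the term to suggest on, falling back to the global `text`
    // option when neither prefix nor regex was provided.
    static String resolveQueryTerm(String prefix, String regex, String globalText) {
        if (prefix != null) {
            return prefix;
        }
        if (regex != null) {
            return regex;
        }
        return globalText;
    }
}
```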
In SI units, "kilobyte" or "kB" would mean 1000 bytes, whereas "KiB" is
used for 1024. Add a note in `api-conventions.asciidoc` to clarify the
meaning in Elasticsearch.
In this repository, `Settings.builder` is used everywhere although it does exactly the same thing as `Settings.settingsBuilder`. Referring to commit 42526ac28e, I think these usages of `Settings.settingsBuilder` were left in by mistake.
This fixes an NPE in finding scaled float stats. The min/max methods on
the wrapped long stats return a boxed type, and when this is null, the
unboxing done to pass primitive values to the FieldStats.Double
constructor causes the NPE. These methods return null for min/max when
the field exists but does not actually have points values.
Fixes #23487
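A sketch of the null handling involved (names are illustrative, not the actual FieldStats code):

```java
public class ScaledFloatStatsSketch {
    // The wrapped long stats expose boxed Long min/max, which are null when
    // the field exists but has no points values; unboxing such a null
    // straight into a primitive double parameter throws the NPE.
    static Double scaledMin(Long longMin, double scalingFactor) {
        if (longMin == null) {
            return null; // propagate "no points" instead of unboxing null
        }
        return longMin / scalingFactor;
    }
}
```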
This commit adds a system property that enables end-users to explicitly
enforce the bootstrap checks, independently of the binding of the
transport protocol. This can be useful for single-node production
systems that do not bind the transport protocol (and thus the bootstrap
checks would not be enforced).
Relates #23585
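A sketch of how such a property could be consulted (the property name below is an assumption for illustration):

```java
public class EnforceBootstrapChecksSketch {
    // Assumed property name; the commit adds a system property with this intent.
    static boolean enforceBootstrapChecks() {
        return Boolean.parseBoolean(System.getProperty("es.enforce.bootstrap.checks", "false"));
    }

    // Checks run if the transport protocol is bound *or* enforcement is on.
    static boolean shouldRunBootstrapChecks(boolean transportBound) {
        return transportBound || enforceBootstrapChecks();
    }
}
```

Under that assumption, such a node would be started with `-Des.enforce.bootstrap.checks=true`.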
This change adds a new method that returns the underlying char[] of a SecureString and the ability
to clone the SecureString so that the original SecureString is not vulnerable to modification.
Closing the cloned SecureString will wipe the char[] that backs the clone but the original SecureString remains unaffected.
Additionally, while making a separate change I found that SecureSettings will fail when diff is called on them and there
is no fallback setting. Given the idea behind SecureSetting, I think that diff should just be a no-op and I have
implemented this here as well.
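Against the API described above, usage would look roughly like this (a sketch; treat the exact signatures as assumptions):

```java
import org.elasticsearch.common.settings.SecureString;

public class SecureStringCloneSketch {
    static void useRawChars(SecureString original) {
        // Clone before exposing the raw char[] so a consumer that scrambles
        // or wipes the array cannot corrupt the original.
        try (SecureString copy = original.clone()) {
            char[] raw = copy.getChars(); // underlying char[] of the clone
            // ... hand 'raw' to an API that may modify it ...
        } // closing the clone wipes its backing char[]; 'original' is intact
    }
}
```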
Currently GeoHashGridAggregatorTests#testWithSeveralDocs increases the expected
document count per hash for each geo point added to a document. When points
added to the same doc fall into one bucket (one hash cell) the document should
only be counted once.
Closes #23555
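The expected counts can be computed along these lines (a sketch of the counting rule, not the actual test code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GeoHashDocCountSketch {
    // Each document contributes at most one count per hash cell, no matter
    // how many of its points land in that cell.
    static Map<String, Integer> expectedDocCounts(List<List<String>> hashesPerDoc) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> docHashes : hashesPerDoc) {
            Set<String> distinct = new HashSet<>(docHashes); // dedupe per doc
            for (String hash : distinct) {
                counts.merge(hash, 1, Integer::sum);
            }
        }
        return counts;
    }
}
```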
While trying to improve the failure output in #23547, the stderr was
also captured from jrunscript. This was under the assumption that stderr
is only written to in case of an error. However, with java 9, when
JAVA_TOOL_OPTIONS are set, they are output to stderr. And our CI sets
JAVA_TOOL_OPTIONS for some reason. This commit fixes the jrunscript call
to use a separate buffer for stderr.
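The actual change lives in the Gradle build, but the idea of keeping the two streams apart can be sketched in Java (uses Java 9's `InputStream#transferTo`):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class SeparateStderrSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("jrunscript", "-e", "print(1+1)").start();
        ByteArrayOutputStream stdout = new ByteArrayOutputStream();
        ByteArrayOutputStream stderr = new ByteArrayOutputStream();
        // Sequential reads are fine for jrunscript's tiny output; separate
        // buffers mean JAVA_TOOL_OPTIONS noise on stderr cannot pollute the
        // value parsed from stdout.
        p.getInputStream().transferTo(stdout);
        p.getErrorStream().transferTo(stderr);
        p.waitFor();
        System.out.println("stdout: " + stdout + ", stderr: " + stderr);
    }
}
```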
A previous change to the multi-search request execution to avoid stack
overflows regressed on limiting the number of concurrent search requests
from a batched multi-search request. In particular, the replacement of
the tail-recursive call with a loop could asynchronously fire off all of
the remaining search requests in the batch while max concurrent search
requests are already executing. This commit attempts to address this
issue by taking a more careful approach to the initial problem of
recursive calls. The initial problem was caused by the possibility of
individual requests completing on the same thread that invoked the
search action execution. This can happen, for example, when an
individual request does not resolve to any shards. To address this
problem, when an individual request completes, we check whether it
completed on the same thread that fired off the request; if so, we loop,
and otherwise we safely recurse. Sadly, there was a unit test to check that the
maximum number of concurrent search requests was not exceeded, but that
test was broken while modifying the test to reproduce a case that led to
the possibility of stack overflow. As such, we randomize whether or not
search actions execute on the same thread as the thread that invoked the
action.
Relates #23538
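The generic shape of the approach (not the actual multi-search code; `sendAsync` must invoke the callback exactly once, either on the calling thread before it returns, e.g. for a request resolving to zero shards, or later on a different thread):

```java
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BiConsumer;

public class SameThreadDrainSketch {
    // 'queue' should be thread-safe, e.g. a ConcurrentLinkedQueue.
    static <R> void drain(Queue<R> queue, BiConsumer<R, Runnable> sendAsync) {
        for (R request = queue.poll(); request != null; request = queue.poll()) {
            final Thread sendingThread = Thread.currentThread();
            final AtomicBoolean completedSynchronously = new AtomicBoolean(false);
            sendAsync.accept(request, () -> {
                if (Thread.currentThread() == sendingThread) {
                    // Same thread: the outer loop is still on the stack, so
                    // flag it and let the loop continue instead of recursing.
                    completedSynchronously.set(true);
                } else {
                    // Different thread: fresh stack, safe to recurse once
                    // and continue draining from here.
                    drain(queue, sendAsync);
                }
            });
            if (completedSynchronously.get() == false) {
                return; // the asynchronous callback will continue the drain
            }
        }
    }
}
```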
This commit improves the output when jrunscript fails to include the
full output of the command. It also makes the quoting that is needed for
Windows only happen on Windows (which worked on java 8, but for some
reason does not work with java 9).
When plugins are installed on a union filesystem (for example, inside a
Docker container), removing them can fail because we attempt an atomic
move which will not work if the plugin is not installed in the top
layer. This commit modifies removing a plugin to fall back to a
non-atomic move in cases when the underlying filesystem does not support
atomic moves.
Relates #23548
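The fallback is essentially this pattern (a sketch of the idea, not the actual plugin remover code):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveWithFallback {
    // Prefer an atomic move, but fall back to a plain move on filesystems
    // (e.g. union filesystems used by Docker) that do not support it.
    static void move(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(source, target);
        }
    }
}
```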
This commit upgrades the Netty dependencies from version 4.1.8 to
version 4.1.9. This commit picks up a few bug fixes that impacted us:
- Netty was incorrectly ignoring interfaces with self-assigned MAC
addresses (e.g., instances running in Docker containers or on EC2)
- incorrect handling of the Expect: 100-continue header
Relates #23540
With this commit we change the default receive predictor size for Netty
from 32kB to 64kB, as our testing has shown that this leads to fewer
allocations on smaller heaps, like the default out-of-the-box
configuration, and this value also works reasonably well for larger
heaps.
Closes #23185
Today when handling a multi-search request, we asynchronously execute as
many search requests as the minimum of the number of search requests in
the multi-search request and the maximum number of concurrent
requests. When these search requests return, we poll more search
requests from a queue of search requests from the original multi-search
request. The implementation of this was recursive, and if the number of
requests in the multi-search request was large, a stack overflow could
arise due to the recursive invocation. This commit replaces this
recursive implementation with a simple iterative implementation.
Relates #23527
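The iterative scheme can be sketched like this (generic shape, not the actual TransportMultiSearchAction code; the same-thread subtlety it introduced is the subject of the later commit above):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;

public class IterativeDrainSketch {
    // Kick off at most maxConcurrent requests; each completion polls the
    // queue for the next request rather than recursing over the remainder.
    static void run(ConcurrentLinkedQueue<Runnable> requests, ExecutorService pool, int maxConcurrent) {
        for (int i = 0; i < maxConcurrent; i++) {
            submitNext(requests, pool);
        }
    }

    private static void submitNext(ConcurrentLinkedQueue<Runnable> requests, ExecutorService pool) {
        Runnable request = requests.poll();
        if (request != null) {
            pool.execute(() -> {
                request.run();              // execute one search request
                submitNext(requests, pool); // completion pulls the next one
            });
        }
    }
}
```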
We previously removed setting the vagrant group because sles-12 and
opensuse-13 did not have this group. Now that those images have the
group, we can go back to setting both user and group to vagrant.
This commit upgrades to the newest version of randomized runner. There
is a new additional check that allows ensuring the working directory
for each child jvm is empty. By default, this check will fail the test
run. However, for elasticsearch, we default to wiping the directory. For
example, if you previously told the runner not to wipe the directory in
order to investigate a failure, the wipe option will delete this data
when the test is re-run.
This commit adds a note to the resiliency status page regarding the fact
that replicas can fall out of sync with the primary shard after primary
promotion occurs due to a failing primary shard.
Relates #23503
When parsing the control groups to which the Elasticsearch process
belongs, we extract a map from subsystems to paths by parsing
/proc/self/cgroup. This file contains colon-delimited entries of the
form hierarchy-ID:subsystem-list:cgroup-path. For control group version
1 hierarchies, the subsystem-list is a comma-delimited list of the
subsystems for that hierarchy. For control group version 2 hierarchies
(which can only exist on Linux kernels since version 4.5), the
subsystem-list is an empty string. The previous parsing of
/proc/self/cgroup failed to account for this possibility (it used a +
instead of a * in a regular expression). This commit addresses this
issue, adds a test case that covers this possibility, and simplifies the
code that parses /proc/self/cgroup.
Relates #23493
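A sketch of the parsing with the corrected quantifier (illustrative, not the actual parsing code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CgroupParseSketch {
    // hierarchy-ID:subsystem-list:cgroup-path, where subsystem-list may be
    // empty for cgroup v2 hierarchies; hence [^:]* rather than [^:]+.
    private static final Pattern CGROUP_LINE = Pattern.compile("^(\\d+):([^:]*):(.+)$");

    static Map<String, String> parse(Iterable<String> lines) {
        Map<String, String> subsystemToPath = new HashMap<>();
        for (String line : lines) {
            Matcher m = CGROUP_LINE.matcher(line);
            if (m.matches() == false) {
                throw new IllegalStateException("unexpected cgroup line: " + line);
            }
            for (String subsystem : m.group(2).split(",")) {
                if (subsystem.isEmpty() == false) { // empty list on cgroup v2
                    subsystemToPath.put(subsystem, m.group(3));
                }
            }
        }
        return subsystemToPath;
    }
}
```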
Previously, the Azure blob store would depend on a 404 StorageException
coming back from Azure if trying to open an input stream to a
non-existent blob. This works for Azure repositories which access a
primary location path. For those configured to access a secondary
location path, the Azure SDK keeps trying for a long while before
returning a 404 StorageException, causing potential delays in the
snapshot APIs. This commit makes an initial check if the blob exists in
Azure and returns immediately with a NoSuchFileException, instead of
trying to open the input stream to the blob.
Closes #23480
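The shape of the change, with stand-ins for the Azure SDK calls:

```java
import java.io.InputStream;
import java.nio.file.NoSuchFileException;

public abstract class AzureReadSketch {
    // Stand-ins for the Azure SDK operations (names are illustrative):
    abstract boolean blobExists(String name);
    abstract InputStream openStream(String name);

    InputStream readBlob(String name) throws NoSuchFileException {
        if (blobExists(name) == false) {
            // Fail fast instead of letting the SDK retry against the
            // secondary location until it surfaces a 404 StorageException.
            throw new NoSuchFileException("blob [" + name + "] not found");
        }
        return openStream(name);
    }
}
```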