We'd like to be able to support context-sensitive whitelists in
Painless but we can't now because the whitelist is a static thing.
This begins to de-static the whitelist, in particular removing
the static keyword from most of the methods on `Definition` and
plumbing the static instance into the appropriate spots as though
it weren't static. Once we de-static all the methods we should be
able to fairly simply build context-sensitive whitelists.
The only "fun" bit of this is that I added another layer in the
chain of methods that bootstraps `def` calls. Instead of running
`invokedynamic` directly on `DefBootstrap`, we now `invokedynamic`
`$bootstrapDef` on the script itself, which loads the `Definition` that
the script was compiled against and then calls `DefBootstrap`.
I chose to put `Definition` into `Locals` so I didn't have to
change the signature of all the `analyze` methods. I could have
done it another way, but this seems OK for now.
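Roughly, the extra hop looks like the sketch below; the names and signatures are illustrative rather than the actual Painless code (the real `DefBootstrap.bootstrap` takes more arguments):

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Stand-ins for the real Painless types, stubbed so the sketch compiles.
class Definition { /* the whitelist a script is compiled against */ }

class DefBootstrap {
    static CallSite bootstrap(Definition definition, MethodHandles.Lookup lookup,
                              String name, MethodType type) {
        // The real method resolves the call against the whitelist; this stub
        // only documents the shape of the delegation.
        throw new UnsupportedOperationException("resolve [" + name + "] against " + definition);
    }
}

// Each compiled script carries a method shaped like this; the invokedynamic
// instructions in the script body target $bootstrapDef instead of
// DefBootstrap, so the per-script Definition can be threaded in.
class ExampleCompiledScript {
    private static final Definition DEFINITION = new Definition(); // set at compile time

    public static CallSite $bootstrapDef(MethodHandles.Lookup lookup,
                                         String name, MethodType type) {
        return DefBootstrap.bootstrap(DEFINITION, lookup, name, type);
    }
}
```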
We want to upgrade to Lucene 7 ahead of time in order to be able to check whether it causes any trouble for Elasticsearch before Lucene 7.0 gets released. From a user perspective, the main benefit of this upgrade is the enhanced support for sparse fields, whose resource consumption is now a function of the number of docs that have a value rather than the total number of docs in the index.
Some notes about the change:
- it includes the deprecation of the `disable_coord` parameter of the `bool` and `common_terms` queries: Lucene has removed support for coord factors
- it includes the deprecation of the `index.similarity.base` expert setting, since it was only useful to configure coords and query norms, which have both been removed
- two tests have been marked with `@AwaitsFix` because of #23966, which we intend to address after the merge
Checks that IndicesClusterStateService stays consistent with incoming cluster states that contain no_master blocks (especially
`discovery.zen.no_master_block=all`, which disables state persistence). In particular, this checks that active shards which have no in-memory data
structures on a node are failed.
This changes the trace-level logging to warn, and adds the needed number to the message as well.
My fear is that it may get noisy, but this is an issue that you want to be noisy about.
The docs don't clearly explain that the deleted doc count also comes from Lucene.
IMHO, it is worth highlighting this information separately, as a Note.
Apart from that, there should be an official recommended alternative as well.
After splitting integ tests into cluster configuration and the test
runner task, we still have dependencies of the test runner added as deps
of the cluster. This commit adds dependencies directly to the cluster,
so that the runner can have other dependencies independent of what is
needed for the cluster.
The JVM caches `Integer` objects. This is known. A test in Painless
was relying on the JVM not caching the particular integer `1000`.
It turns out that when you provide `-XX:+AggressiveOpts` the JVM
*does* cache `1000`, causing the test to fail when that is
specified.
This replaces `1000` with a randomly selected integer that we test
to make sure *isn't* cached by the JVM. *Hopefully* this test is
good enough. It relies on the caching not changing in between when
we check that the value isn't cached and when we run the Painless
code. The cache is now a simple array, but there is nothing
preventing it from changing. If it does change in a way that thwarts
this test then the test will fail again. At least when that happens
the next person can see the comment about how it is important
that the integer isn't cached and can follow that line of inquiry.
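For reference, the cache can be probed with a plain identity check, since `Integer.valueOf` returns the same object for cached values. This standalone snippet (an illustration, not the actual test) shows the idea:

```java
public class IntegerCacheCheck {
    // Returns true if the JVM hands back the same Integer instance both times,
    // i.e. the value is served from the Integer cache.
    static boolean isCached(int value) {
        return Integer.valueOf(value) == Integer.valueOf(value);
    }

    public static void main(String[] args) {
        System.out.println(isCached(100));   // true: -128..127 is always cached
        System.out.println(isCached(1000));  // false by default, but true under
                                             // flags like -XX:+AggressiveOpts
    }
}
```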
Closes #24041
When preparing the final settings in the environment, we unconditionally
set path.data even if path.data was not explicitly set. This confounds
detection for whether or not path.data was explicitly set, and this is
trappy. This commit adds logic to only set path.data in the final
settings if path.data was explicitly set, and provides a test case that
fails without this logic.
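A minimal sketch of the idea, using plain maps in place of the real Settings machinery:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch (not the actual Environment code): path.data stays out of the
// final settings unless the user explicitly provided it, so downstream checks
// can still tell "explicitly set" apart from "defaulted".
class FinalSettingsSketch {
    static Map<String, String> prepare(Map<String, String> userSettings, String defaultDataPath) {
        Map<String, String> finalSettings = new HashMap<>(userSettings);
        // Before the fix, path.data was unconditionally written here:
        //   finalSettings.put("path.data", defaultDataPath);
        // which made a defaulted path indistinguishable from an explicit one.
        // Now consumers that need a data path fall back to the default themselves.
        return finalSettings;
    }
}
```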
Relates #24132
Today when a flush is performed, the translog is committed and if there
are no outstanding views, only the current translog generation is
preserved. Yet for the purpose of sequence numbers, we need stronger
guarantees than this. This commit migrates the preservation of translog
generations to keep the minimum generation that would be needed to
recover after the local checkpoint.
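A simplified sketch of the retention rule (illustrative types, not the real Translog API):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// On commit, keep every generation back to the oldest one that still contains
// an operation above the local checkpoint, since those operations may be
// replayed during recovery. Assumes a current generation always exists.
class TranslogRetentionSketch {
    /** translog generation -> largest sequence number written to it */
    private final NavigableMap<Long, Long> maxSeqNoPerGeneration = new TreeMap<>();

    long minGenerationToKeep(long localCheckpoint) {
        for (Map.Entry<Long, Long> entry : maxSeqNoPerGeneration.entrySet()) {
            if (entry.getValue() > localCheckpoint) {
                return entry.getKey(); // oldest generation still needed for recovery
            }
        }
        // everything is at or below the checkpoint; the current generation suffices
        return maxSeqNoPerGeneration.lastKey();
    }
}
```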
Relates #24015
In Elasticsearch 5.3.0 a bug was introduced in the merging of default
settings when the target setting existed as an array. When this bug
concerned path.data and default.path.data, we ended up in a situation
where the paths specified in both settings would be used to write index
data. Since our packaging sets default.path.data, users that configure
multiple data paths via an array and use the packaging are subject to
having shards land in paths in default.path.data when that is very
likely not what they intended.
This commit is an attempt to rectify this situation. If path.data and
default.path.data are configured, we check for the presence of indices
there. If we find any, we log messages explaining the situation and fail
the node.
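A rough sketch of the check (the helper is hypothetical, and the real code resolves the proper node lock id rather than assuming 0):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// If a path that is only reachable via default.path.data already contains
// index data, fail the node instead of silently splitting shards across
// paths the user likely did not intend to use.
class DefaultDataPathCheck {
    static void check(List<Path> defaultDataPaths) {
        for (Path dataPath : defaultDataPaths) {
            Path indicesFolder = dataPath.resolve("nodes").resolve("0").resolve("indices");
            if (Files.isDirectory(indicesFolder)) {
                throw new IllegalStateException("detected index data in default.path.data ["
                    + dataPath + "]; check your settings to avoid misplacing shards");
            }
        }
    }
}
```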
Relates #24099
The cat APIs and rest tables would obtain a stream from the RestChannel, which happened to be a
ReleasableBytesStreamOutput. These APIs used the stream to write content to, closed the stream,
and then tried to send a response. After #23941 was merged, closing the stream meant that the bytes
were released for use elsewhere. This caused occasional corruption of the response when the bytes
were used prior to the response being sent.
This commit changes these two usages to wrap the stream obtained from the channel in a flush-on-close
stream so that the bytes are still reserved until the message is sent.
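In essence the wrapper does nothing more than this generic sketch (not the exact Elasticsearch class):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// close() is turned into flush(), so the channel's releasable bytes stay
// reserved until the response has actually been sent.
class FlushOnCloseOutputStream extends FilterOutputStream {
    FlushOnCloseOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void close() throws IOException {
        // Deliberately do not close the underlying stream; the owner of the
        // bytes (the rest channel) releases them after the response is sent.
        flush();
    }
}
```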
Empty IDs are rejected during indexing, so we should not randomly
produce them during tests. This commit modifies the simple versioning
tests to no longer produce empty IDs.
When indexing a document via the bulk API where IDs can be explicitly
specified, we currently accept an empty ID. This is problematic because
such a document can not be obtained via the get API. Instead, we should
reject these requests as accepting them could be a dangerous form of
leniency. Additionally, we already have a way of specifying
auto-generated IDs and that is to not explicitly specify an ID so we do
not need a second way. This commit rejects the individual requests where
ID is specified but empty.
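The validation itself amounts to something like this sketch (a hypothetical method, not the actual request code):

```java
// A null id means "auto-generate", while an explicitly empty id is rejected
// up front instead of producing a document that can never be fetched by id.
class IdValidation {
    static void validateId(String id) {
        if (id != null && id.isEmpty()) {
            throw new IllegalArgumentException("if _id is specified it must not be empty");
        }
        // id == null is fine: an id will be auto-generated
    }
}
```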
Relates #24118
Internal indexing requests in Elasticsearch may be processed out of order and repeatedly. This is important during recovery and due to concurrency in replicating requests between primary and replicas. As such, a replica/recovering shard needs to be able to identify that an incoming request contains information that is old and thus need not be processed. The current logic is based on external version. This is sadly not sufficient. This PR moves the logic to rely on sequence numbers and primary terms which give the semantics we need.
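Conceptually, the staleness check this enables looks like the following simplified sketch (not the actual engine code):

```java
// Per document, operations are ordered by sequence number, with the primary
// term as a tie breaker, so a replica can detect and skip anything it has
// already processed.
class OperationOrdering {
    /** true if the incoming operation is older than (or a repeat of) the one
     *  already processed for the same document */
    static boolean isStale(long processedSeqNo, long processedTerm,
                           long incomingSeqNo, long incomingTerm) {
        if (incomingSeqNo != processedSeqNo) {
            return incomingSeqNo < processedSeqNo; // superseded by a newer op
        }
        return incomingTerm <= processedTerm; // same op delivered again
    }
}
```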
Relates to #10708
When building headers for a REST response, we de-duplicate the warning
headers based on the actual warning value. The current implementation of
this uses a capturing regular expression that is prone to excessive
backtracking. In cases where a request involves a large number of warnings,
this extraction can be a severe performance penalty. An example where
this can arise is a bulk indexing request that utilizes a deprecated
feature (e.g., using deprecated forms of boolean values). This commit is
an attempt to address this performance regression. We already know the
format of the warning header, so we do not need to use a regular
expression to parse it but rather can parse it by hand to extract the
warning value. This gains back the vast majority of the performance lost
due to the usage of a deprecated feature. There is still a performance
loss due to logging the deprecation message but we do not address that
concern in this commit.
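The hand-rolled extraction boils down to something like this simplified sketch (the real code also handles escaped quotes inside the message); the header shape is roughly `299 Elasticsearch-<version> "<message>" "<date>"`:

```java
class WarningHeaderParser {
    // extracts <message> from: 299 Elasticsearch-<version> "<message>" "<date>"
    static String extractWarningValue(String header) {
        int start = header.indexOf('"');
        int end = start < 0 ? -1 : header.indexOf('"', start + 1);
        if (end < 0) {
            return header; // unexpected shape: fall back to the raw header
        }
        return header.substring(start + 1, end);
    }
}
```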
Relates #24114
This commit makes closing a ReleasableBytesStreamOutput release the underlying BigArray so
that we can use try-with-resources with these streams and avoid leaking memory by not returning
the BigArray. As part of this change, the ReleasableBytesStreamOutput adds protection to only
release the BigArray once.
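The release-once protection is essentially this generic pattern (a sketch, not the actual class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// However many times close() is called, the underlying resource is released
// exactly once.
class ReleaseOnce {
    private final AtomicBoolean released = new AtomicBoolean(false);
    private final Runnable releaseBytes; // e.g. returns the BigArray to the pool

    ReleaseOnce(Runnable releaseBytes) {
        this.releaseBytes = releaseBytes;
    }

    void close() {
        if (released.compareAndSet(false, true)) {
            releaseBytes.run();
        }
    }
}
```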
In order to make some of the changes cleaner, the ReleasableBytesStream interface has been
removed. The BytesStream interface is changed to an abstract class so that we can use it as a
usable return type for a new method, Streams#flushOnCloseStream. This new method wraps a
given stream and overrides the close method so that the stream is simply flushed and not closed.
This behavior is used in the TcpTransport when compression is used with a
ReleasableBytesStreamOutput as we need to close the compressed stream to ensure all of the data
is written from this stream. Closing the compressed stream will try to close the underlying stream
but we only want to flush so that all of the written bytes are available.
Additionally, an error message method added in the BytesRestResponse did not use a builder
provided by the channel and instead created its own JSON builder. This changes that method to use
the channel builder and in turn the bytes stream output that is managed by the channel.
Note, this commit differs from 6bfecdf921 in that it updates
ReleasableBytesStreamOutput to handle the case of the BigArray decreasing in size, which changes
the reference to the BigArray. When the reference is changed, the releasable needs to be updated
otherwise there could be a leak of bytes and corruption of data in unrelated streams.
This reverts commit afd45c1432, which reverted #23572.
There are test failures that suggest that the import of dangling indices is happening too early, before the dangling indices are ready to be consumed.
This commit adds an ensureGreen() at the end of cluster initialization to make sure that no cluster state updates are happening while the dangling
indices are prepared on-disk.
We have some packaging tests where we use a custom jvm.options file. The
flags we use here are barebones, just enough to exercise that using a
custom jvm.options files actually works. Due to this, we are missing a
Log4j flag that prevents Log4j from trying to use JMX. If Log4j tries to
use JMX, it hits a security manager exception and tries to log
this. This attempt to log happens before we've configured
logging. Previously, Elasticsearch was lenient here so this was treated
as harmless and the test could march on. Now, we fail startup if we
detect an attempt to log before logging is configured so this prevents
Elasticsearch from starting if we do not have jvm.options files in place
that prevent these log messages from being written before logging is
configured. This commit adds jvm.options files in the places needed to
prevent this.
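The missing flag is Log4j's JMX toggle; the jvm.options files for these tests need a line like the one the default distribution ships with:

```
# disable Log4j's JMX support; otherwise Log4j trips the security manager and
# tries to log before logging is configured
-Dlog4j2.disable.jmx=true
```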
TaskInfo is stored as a part of TaskResult and therefore can be read by nodes with an older version. If we add any additional information to TaskInfo (for #23250, for example), nodes with an older version should be able to ignore it; otherwise, they will not be able to read TaskResults stored by newer nodes.
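A generic illustration of the required leniency (not the actual parsing code, and the field list is abbreviated):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// When reading a stored TaskInfo, consume the fields we know and silently
// skip the rest, so task results written by newer nodes stay readable.
class LenientTaskInfoReader {
    private static final Set<String> KNOWN_FIELDS = new HashSet<>(
        Arrays.asList("node", "id", "type", "action", "start_time_in_millis"));

    static Map<String, Object> stripUnknown(Map<String, Object> storedTaskInfo) {
        Iterator<Map.Entry<String, Object>> it = storedTaskInfo.entrySet().iterator();
        while (it.hasNext()) {
            if (KNOWN_FIELDS.contains(it.next().getKey()) == false) {
                it.remove(); // field added by a newer version: ignore it
            }
        }
        return storedTaskInfo;
    }
}
```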
This commit collapses the SyncBulkRequestHandler and
AsyncBulkRequestHandler into a single BulkRequestHandler. The new
handler executes a bulk request and awaits completion if the
BulkProcessor was configured with a concurrentRequests setting of 0.
Otherwise the execution happens asynchronously.
As part of this change the Retry class has been refactored.
withSyncBackoff and withAsyncBackoff have been replaced with two
versions of withBackoff. One method takes a listener that will be
called on completion. The other method returns a future that will be
completed when the request completes.
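In miniature, the refactored API has this shape (simplified: no actual backoff scheduling, and the helper types are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

class RetrySketch {
    interface Listener<T> {
        void onResponse(T response);
        void onFailure(Exception e);
    }

    // listener flavor: the caller is notified once the request finally completes
    static <T> void withBackoff(Supplier<T> call, Listener<T> listener) {
        try {
            // the real implementation retries the call with backoff on rejections
            listener.onResponse(call.get());
        } catch (Exception e) {
            listener.onFailure(e);
        }
    }

    // future flavor: implemented on top of the listener flavor
    static <T> CompletableFuture<T> withBackoff(Supplier<T> call) {
        CompletableFuture<T> future = new CompletableFuture<>();
        withBackoff(call, new Listener<T>() {
            @Override
            public void onResponse(T response) {
                future.complete(response);
            }

            @Override
            public void onFailure(Exception e) {
                future.completeExceptionally(e);
            }
        });
        return future;
    }
}
```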
Today Elasticsearch allows default settings to be used only if the
actual setting is not set. These settings are trappy, and the complexity
invites bugs. This commit removes support for default settings with the
exception of default.path.data, default.path.conf, and default.path.logs
which are maintained to support packaging. A follow-up will remove
support for these as well.
Relates #24093
These tests were marked with `@AwaitsFix` due to JNA requiring a version of
glibc greater than or equal to version 2.14. Since we still support
systems that would not have this version, we have released our own JNA
dependency that is built to support earlier versions of glibc. This
commit removes the `@AwaitsFix` annotations that were added to tests that
failed as a result of this situation.
In Elasticsearch 5.3.0 a bug was introduced in the merging of default
settings when the target setting existed as an array. This arose due to
the fact that when a target setting is an array, the setting key is
broken into key.0, key.1, ..., key.n, one for each element of the
array. When settings are replaced by default.key, we are looking for the
target key but not the flattened target key.0. This leads to key and key.0, ...,
key.n all being present in the constructed settings object. This commit
addresses two issues here. The first is that we fix the merging of the
keys so that when we try to merge default.key, we also check for the
presence of the flattened keys. The second is that when we try to get a
setting value as an array from a settings object, we check whether or
not the backing map contains the top-level key as well as the flattened
keys. This latter check would have caught the first bug. For kicks, we
add some tests.
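A simplified reconstruction of the fixed merging logic, using plain maps instead of the real Settings class:

```java
import java.util.Map;

// A default only applies when neither the target key nor any of its flattened
// array keys (key.0, key.1, ...) are already present.
class DefaultSettingsMerge {
    static void mergeDefault(Map<String, String> settings, String key, String defaultValue) {
        if (settings.containsKey(key)) {
            return; // explicitly set as a plain value
        }
        if (settings.containsKey(key + ".0")) {
            return; // explicitly set as an array: key.0, key.1, ... are present
        }
        settings.put(key, defaultValue);
    }
}
```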
Relates #24074
This new version of jna is rebuilt from the official release of jna, but
with native libs linked against older glibc in order to support all
platforms Elasticsearch supports.
Closes #23640