Today, we maintain two sets in a SeqNoSet: ongoing sets and completed
sets. We can remove the completed sets and use only the ongoing sets by
releasing the internal bitset of a CountedBitSet when all its bits are
set. This behaves like the two sets but is simpler. This commit also makes
CountedBitSet a drop-in replacement for BitSet.
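As a rough sketch of the idea (the class below is illustrative and uses
java.util.BitSet rather than the actual implementation):

    // Illustrative sketch: count the set bits and release the backing bitset
    // once every bit in the window has been marked as completed.
    final class CountedBitSetSketch {
        private final int totalBits;
        private int countOfSetBits;
        private java.util.BitSet bits; // null once all bits are set

        CountedBitSetSketch(int totalBits) {
            this.totalBits = totalBits;
            this.bits = new java.util.BitSet(totalBits);
        }

        void set(int index) {
            if (bits == null || bits.get(index)) {
                return; // already set, either explicitly or implicitly
            }
            bits.set(index);
            countOfSetBits++;
            if (countOfSetBits == totalBits) {
                bits = null; // all bits set: release the internal bitset
            }
        }

        boolean get(int index) {
            // a completed set answers true for every index without backing storage
            return bits == null || bits.get(index);
        }
    }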
Relates #27268
The LimitMEMLOCK suggestion was removed from the systemd service file; users
should instead use an override file, so the comment in the environment file
needs to be updated to reflect this.
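For reference, the override-file approach looks roughly like this (the path and
value shown are the commonly documented ones, not taken from this commit):

    # /etc/systemd/system/elasticsearch.service.d/override.conf
    [Service]
    LimitMEMLOCK=infinity

followed by `systemctl daemon-reload` and a restart of the service.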
Relates #27630
For too long we have been groping around in the dark when faced with GC
issues because we rarely have GC logs at our disposal. This commit
enables GC logging by default out of the box.
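For illustration, GC logging in jvm.options on a JDK 8 runtime is typically
enabled with standard HotSpot flags along these lines (an example, not
necessarily the exact set added here):

    -XX:+PrintGCDetails
    -XX:+PrintGCDateStamps
    -XX:+PrintTenuringDistribution
    -Xloggc:logs/gc.log
    -XX:+UseGCLogFileRotation
    -XX:NumberOfGCLogFiles=32
    -XX:GCLogFileSize=64m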
Relates #27610
* Sense HA HDFS settings and remove permission restrictions during regular execution.
This PR adds integration tests for HA-enabled HDFS deployments, both regular and secured.
The Mini HDFS fixture has been updated to optionally run in HA mode. A new test suite has
been added to reproduce the effects of a Namenode failing over during regular repository
usage. Going forward, the HDFS Repository will still be subject to its self-imposed permission
restrictions during normal use, but will no longer apply them when running against an
HA-enabled HDFS cluster. Instead, the plugin will rely on the provided security policy and
will not further restrict permissions, so that the client's transparent failover to a
different Namenode does not raise security exceptions. Additionally, we are now testing
secure mode with SASL-based wire encryption of data between Elasticsearch and HDFS, which
requires a previously missing library (commons-codec).
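As an abbreviated, hypothetical illustration of the kind of HA setup this targets
(the repository name, nameservice, and Hadoop keys below are examples, and a real
HA client configuration needs more keys than shown), a repository would point at
the logical nameservice rather than a single Namenode:

    PUT _snapshot/my_hdfs_repository
    {
      "type": "hdfs",
      "settings": {
        "uri": "hdfs://ha-hdfs/",
        "path": "elasticsearch/repositories/my_hdfs_repository",
        "conf.dfs.nameservices": "ha-hdfs",
        "conf.dfs.ha.namenodes.ha-hdfs": "nn1,nn2"
      }
    }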
#27611 broke the docs tests because $node_name in the URL doesn't seem to be
replaced (#27616). Changing this to a * to match all nodes seems to fix the test.
* Add accounting circuit breaker and track segment memory usage
This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.
The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.
The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.
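A minimal sketch of how a non-request-tied consumer can be tracked this way (the
class and its wiring are illustrative, not the actual shard-level code):

    import org.elasticsearch.common.breaker.CircuitBreaker;

    // Illustrative: record a shard's segment memory against the accounting
    // breaker without ever tripping it, and return the bytes when the shard
    // is closed or relocates away.
    final class SegmentMemoryTracker {
        private final CircuitBreaker accountingBreaker;
        private long trackedBytes;

        SegmentMemoryTracker(CircuitBreaker accountingBreaker) {
            this.accountingBreaker = accountingBreaker;
        }

        // called on refresh with the latest segment memory estimate
        synchronized void update(long newSegmentBytes) {
            accountingBreaker.addWithoutBreaking(newSegmentBytes - trackedBytes);
            trackedBytes = newSegmentBytes;
        }

        // called when the shard is deleted or relocates away from the node
        synchronized void close() {
            accountingBreaker.addWithoutBreaking(-trackedBytes);
            trackedBytes = 0;
        }
    }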
Resolves #27044
When running unit tests directly from the IDE, this setting change is needed
in addition to the idea.no.launcher property that previous versions of IntelliJ
required.
Today we carry over the size of the live version map to ensure that
we minimize rehashing. Yet, once we are idle or once we can issue a sync-commit,
we can resize it to its defaults to free up memory.
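A minimal sketch of the idea, using a plain map as a stand-in for the live
version map:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: once the shard is idle or has issued a sync-commit,
    // copy the (by then small) contents into a map with default capacity so
    // the over-sized hash table can be garbage collected.
    final class VersionMapHolder {
        private volatile Map<String, Long> current = new ConcurrentHashMap<>();

        void put(String uid, long version) {
            current.put(uid, version);
        }

        void resizeToDefault() {
            Map<String, Long> resized = new ConcurrentHashMap<>(); // default initial capacity
            resized.putAll(current);
            current = resized;
        }
    }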
Relates to #27516
This change ensures that the temporary directory used for java.io.tmpdir
is a private temporary directory. To achieve this we use mktemp on macOS
and Linux to give us a private temporary directory and the value of the
environment variable TMP on Windows. For this to work with our
packaging, we add java.io.tmpdir=${ES_TMPDIR} to our packaged
jvm.options, we set ES_TMPDIR accordingly in our startup scripts, and
resolve the value of the template ${ES_TMPDIR} at startup.
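Roughly, the pieces fit together as follows (the mktemp template and paths are
illustrative):

    # packaged jvm.options, with the template resolved at startup
    -Djava.io.tmpdir=${ES_TMPDIR}

    # startup script on Linux and macOS
    ES_TMPDIR=$(mktemp -d -t "elasticsearch.XXXXXXXX")

    # startup script on Windows
    set ES_TMPDIR=%TMP%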
Relates #27609
Once a shard goes inactive, we want it to be refreshed if the refresh
interval is the default, since we might be holding on to unnecessary
segments; in the inactive case indexing has stopped and we can release
old segments.
Relates to #27500
Add an index-level setting `index.analyze.max_token_count` to control
the number of generated tokens in the _analyze endpoint.
Defaults to 10000.
Throw an error if the number of generated tokens exceeds this limit.
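For example (the index name and limit here are illustrative):

    PUT my_index
    {
      "settings": {
        "index.analyze.max_token_count": 20000
      }
    }

A subsequent _analyze call on my_index that generates more than 20000 tokens
will now fail instead of returning an unbounded token list.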
Closes #27038
The ChecksumBlobStoreFormat.writeAtomic() method writes a blob using a
temporary name and then moves the blob to its final name. The move
operation can fail and in this case the temporary blob is deleted. If
this delete operation also fails, then the initial exception is lost.
This commit ensures that when something goes wrong during the move
operation the initial exception is kept and thrown, and if the delete
operation also fails then this additional exception is added
as a suppressed exception to the initial one.
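The resulting control flow is essentially the standard suppressed-exception
pattern; in the simplified sketch below, moveBlob and deleteBlob stand in for
the real blob container operations:

    import java.io.IOException;

    abstract class AtomicWriteSketch {
        abstract void moveBlob(String source, String target) throws IOException;
        abstract void deleteBlob(String name) throws IOException;

        void writeAtomic(String tempBlobName, String finalBlobName) throws IOException {
            try {
                moveBlob(tempBlobName, finalBlobName);
            } catch (IOException moveException) {
                try {
                    deleteBlob(tempBlobName);
                } catch (IOException deleteException) {
                    // keep the original failure and attach the cleanup failure to it
                    moveException.addSuppressed(deleteException);
                }
                throw moveException;
            }
        }
    }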
Gradle 4.2 introduced a feature for safer handling of stale output files. Unfortunately, due to the way some of our tasks are written, this broke execution of our REST tests tasks. The reason for this is that the extract task (which extracts the ES distribution) would clean its output directory, and thereby also remove the empty cwd subdirectory which was created by the clean task. The reason why Gradle would remove the directory is that the earlier running clean task would programmatically create the empty cwd directory, but not make Gradle aware of this fact, which would result in Gradle believing that it could just safely clean the directory.
This commit explicitly defines a task to create the cwd subdirectory, and marks the directory as output of the task, so that the subsequent extract task won't eagerly clean it. It thereby restores full compatibility of the build with Gradle 4.2 and above.
This commit also removes the @Input annotation on the waitCondition closure of AntFixture, which conflicted with the extended input/output validation of Gradle 4.3.
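In Gradle terms the fix amounts to something like the following (the task and
directory names are illustrative, not the build's actual ones):

    task createCwd {
        def cwdDir = file("${buildDir}/cluster/cwd")
        outputs.dir cwdDir
        doLast {
            cwdDir.mkdirs()
        }
    }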
When these docs were moved they should have been moved to the system
configuration docs. This commit does that, and also fixes a missing
heading that broke the docs build.
This potential issue was exposed when I saw this PR #27542. Essentially
we currently execute the write listeners all over the place without
consistently catching and handling exceptions. Some of these exceptions
will be logged in different ways (including as low as `debug`).
This commit adds a single location where these listeners are executed.
If the listener throws an exception, the exception is caught and logged
at the `warn` level.
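A sketch of the single execution point (the listener type and logging are
simplified here):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Simplified: every write listener is completed through this one method,
    // so a throwing listener is always caught and logged at warn.
    final class WriteListenerInvoker {
        private static final Logger logger = LogManager.getLogger(WriteListenerInvoker.class);

        void complete(Runnable listener) {
            try {
                listener.run();
            } catch (Exception e) {
                logger.warn("exception while executing write listener", e);
            }
        }
    }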
Today when configuring the data paths for the environment, we set data
paths to either the specified path.data or default to data relative to
the Elasticsearch home. Yet if node.local_storage is false, data paths
do not even make sense. In this case, we should reject if path.data is
set, and instead of defaulting data paths to data relative to home, we
should set this to empty paths. This commit does this.
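Concretely, a configuration along these lines is now rejected at startup (the
path is an example):

    # elasticsearch.yml
    node.local_storage: false
    path.data: /var/data/elasticsearch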
Relates #27587
This awaits-fix annotation has been there forever and no one seems to know what to
do with this test. I say let CI churn on it because it passed for me
three out of three times. If there is something wrong with it, we will
know quickly and can then address with the new information that we have.
Previously the bootstrap check for max number of threads was increased
from 2048 to 4096 yet the docs were never adjusted for this change. This
commit addresses this so the docs are in line with the limit enforced in
the bootstrap check.
Relates #27511
This commit clarifies that the preferences menu in IntelliJ can differ
by OS (IntelliJ -> Preferences on macOS and Settings on Linux/Windows).
Relates #27575
Today a refresh listener won't preserve the entire context, i.e. it won't carry
over response headers etc. from the caller side. This change adds support for
stored contexts.
Today we only expose the external reader's segments. Yet, from a statistics
perspective both internal and external segments are relevant. This commit
exposes the additional segments of the internal and external reader respectively.
A compressible bytes output stream is a stream output which supports a
reset method. However, compressible bytes output streams are unusual in
that the current implementation sometimes supports a reset (if the
stream is not compressed) and sometimes does not support a reset (if the
stream is compressed). This inconsistent behavior is puzzling and
instead we should simply always throw an unsupported operation
exception.
Relates #27564
The GlobalOrdinalsStringTermsAggregator.LowCardinality aggregator casts global
values to `GlobalOrdinalMapping`, even though the implementation of global
values is different when a `missing` value is configured.
This commit adds a new API that gives access to the ordinal remapping in order
to fix this problem.
The main highlight of this new snapshot is that it introduces the opportunity
for queries to opt out of caching. In case a query opts out of caching, not only
will it never be cached, but also no compound query that wraps it will be
cached.
Also include _type and _id for parent/child hits inside inner hits.
In the case of the top_hits aggregation, the nested search hits are
directly returned and are not grouped by a root or parent document, so
it is important to include the _id and _index attributes in order to know
which documents these nested search hits belong to.
Closes #27053
* Caps
* Fix awkward wording that took multiple passes to parse
* Floating point _number_
* Something more descriptive about the `scaled_float` scaling factor.
Method TruncateTranslogIT#corruptTranslogFiles corrupts some random
existing *.tlog files in a translog directory. However, this may not
actually corrupt the translog at all if it corrupts only tlog files which
are not referenced by the Checkpoint (e.g. their translog generations are
smaller than the Checkpoint's).
This commit makes sure that we corrupt some tlog files which are
referenced by the Checkpoint.
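The selection logic boils down to filtering candidates by generation first; in
the sketch below, the file-name parsing and the minimum referenced generation
are simplified stand-ins for the test's actual helpers:

    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    final class CorruptionCandidates {
        // only tlog files at or above the checkpoint's minimum referenced
        // generation are corrupted, so the corruption is guaranteed to hit
        // a file the translog still reads
        static List<Path> referencedTlogFiles(List<Path> tlogFiles, long minReferencedGeneration) {
            return tlogFiles.stream()
                .filter(p -> generationOf(p) >= minReferencedGeneration)
                .collect(Collectors.toList());
        }

        // hypothetical helper: extracts N from a "translog-N.tlog" file name
        static long generationOf(Path tlogFile) {
            String name = tlogFile.getFileName().toString();
            return Long.parseLong(name.substring("translog-".length(), name.indexOf(".tlog")));
        }
    }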
Closes #27538