The RPM and Debian packages depend on coreutils (for mktemp among
others). This commit adds an explicit package dependency on coreutils.
Relates #27660
GNU mktemp and BSD mktemp have different command line flags. On some
macOS systems users have mktemp from coreutils in their PATH overriding
the system mktemp from BSD. This commit adds detection for the coreutils
mktemp versus the BSD mktemp and uses the appropriate flags for whichever
is detected.
Relates #27659
In the global checkpoint sync action, we fsync the translog. However,
the last synced global checkpoint might already be equal to the current
global checkpoint, in which case fsyncing the translog is unnecessary:
either the sync-needed guard in the translog will skip the sync, or the
translog needs an fsync for another reason that will be picked up
elsewhere (e.g., at the end of a bulk request).
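A minimal sketch of the guard, assuming IndexShard exposes the last synced and current global checkpoints (illustrative, not the actual implementation):

```java
import java.io.IOException;

import org.elasticsearch.index.shard.IndexShard;

final class GlobalCheckpointSyncGuard {
    // Skip the fsync when it would not advance the persisted global checkpoint.
    static void maybeSync(final IndexShard shard) throws IOException {
        if (shard.getLastSyncedGlobalCheckpoint() < shard.getGlobalCheckpoint()) {
            shard.sync(); // fsyncs the translog, persisting the global checkpoint
        }
    }
}
```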
Relates #27652
The hashCode contract states that equal objects must have equal hash
codes; however, unequal objects are not required to have unequal hash
codes.
This commit rewrites GeoPointParsingTests#testEqualsHashCodeContract
using the #checkEqualsAndHashCode helper.
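Roughly what the rewritten test looks like, assuming it runs in an ESTestCase subclass (so randomDouble is available); the copy must be equal with an equal hash code, while the mutation must not be equal:

```java
import org.elasticsearch.common.geo.GeoPoint;

import static org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode;

public void testEqualsHashCodeContract() {
    final GeoPoint point = new GeoPoint(randomDouble() * 180 - 90, randomDouble() * 360 - 180);
    checkEqualsAndHashCode(
            point,
            original -> new GeoPoint(original.lat(), original.lon()), // equal copy
            original -> new GeoPoint(                                 // unequal mutation
                    original.lat() > 0 ? original.lat() - 1 : original.lat() + 1,
                    original.lon()));
}
```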
Closes #27633
* Fix highlighting on a keyword field that defines a normalizer
The `plain` and sometimes the `unified` highlighters need to re-analyze the content in order to highlight a field.
This change makes sure that we don't ignore the normalizer defined on the keyword field for this analysis.
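For reference, a field set up the way this fix targets might look like this in a 6.x mapping (index, type, field, and normalizer names are illustrative):

```json
PUT my_index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_lowercase": {
          "type": "custom",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": {
          "type": "keyword",
          "normalizer": "my_lowercase"
        }
      }
    }
  }
}
```

With this change, highlighting `title` re-analyzes the content with `my_lowercase` instead of ignoring it.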
After write operations in some situations we fire a post-operation
global checkpoint sync. The global checkpoint sync unconditionally
fsyncs the translog, and this can then look like an fsync per
request. This violates the translog durability settings on the index if
the durability is set to async. This commit changes the global
checkpoint sync to observe the translog durability.
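A sketch of the intended behavior using the IndexShard durability accessor (illustrative, not the exact change):

```java
import java.io.IOException;

import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.index.translog.Translog;

final class DurabilityAwareSync {
    // Only fsync from the post-operation global checkpoint sync when the index
    // is configured for request durability; with async durability, the periodic
    // async fsync will persist the global checkpoint instead.
    static void sync(final IndexShard shard) throws IOException {
        if (shard.getTranslogDurability() == Translog.Durability.REQUEST) {
            shard.sync();
        }
    }
}
```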
Relates #27641
Today we exclude internal refreshes from the refresh stats. Yet, it is quite
confusing not to take these into account. This change includes internal refreshes
in the stats until we have dedicated stats for them.
Using custom rules in the icu_collation filter can fail on Windows. If the rules are interpreted
as a file location, this leads to an InvalidPathException when trying to read the rules from a file.
This new snapshot mostly brings a change to TopFieldCollector, which can now
terminate collection early when trackTotalHits is `false`.
As a follow-up, we should replace our usage of
`EarlyTerminatingSortingCollector` with this new option.
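A sketch against the Lucene 7.x factory method of that era; the exact signature varies across releases (newer Lucene replaces these booleans with a totalHitsThreshold), so treat this as illustrative:

```java
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopFieldCollector;

Sort sort = new Sort(new SortField("timestamp", SortField.Type.LONG));
TopFieldCollector collector = TopFieldCollector.create(
        sort,
        10,     // numHits
        true,   // fillFields
        false,  // trackDocScores
        false,  // trackMaxScore
        false); // trackTotalHits: false allows early termination on a sorted index
```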
Today, we maintain two sets in a SeqNoSet: ongoing sets and completed
sets. We can remove the completed sets and use only the ongoing sets by
releasing the internal bitset of a CountedBitSet when all its bits are
set. This behaves like the two sets but is simpler. This commit also makes
CountedBitSet a drop-in replacement for BitSet.
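A simplified standalone sketch of the idea (the real CountedBitSet implements Lucene's BitSet interface; this version only shows the release-on-full behavior):

```java
import org.apache.lucene.util.FixedBitSet;

final class CountedBitSetSketch {
    private final int numBits;
    private int onBits;       // how many bits are currently set
    private FixedBitSet bits; // nulled out once every bit is set

    CountedBitSetSketch(int numBits) {
        this.numBits = numBits;
        this.bits = new FixedBitSet(numBits);
    }

    void set(int index) {
        if (bits != null && bits.get(index) == false) {
            bits.set(index);
            if (++onBits == numBits) {
                bits = null; // all bits set: release the backing bitset
            }
        }
    }

    boolean get(int index) {
        return bits == null || bits.get(index); // a fully set bitset answers true everywhere
    }
}
```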
Relates #27268
The LimitMEMLOCK suggestion was removed from the systemd service file;
instead, users should use an override file, so the comment in the
environment file is updated to reflect this.
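For reference, the override-file approach uses a systemd drop-in along these lines:

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

followed by `systemctl daemon-reload`.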
Relates #27630
For too long we have been groping around in the dark when faced with GC
issues because we rarely have GC logs at our disposal. This commit
enables GC logging by default out of the box.
Relates #27610
* Sense HA HDFS settings and remove permission restrictions during regular execution.
This PR adds integration tests for HA-enabled HDFS deployments, both regular and secured.
The Mini HDFS fixture has been updated to optionally run in HA-Mode. A new test suite has
been added for reproducing the effects of a Namenode failing over during regular repository
usage. Going forward, the HDFS Repository will still be subject to its self-imposed permission
restrictions during normal use, but will no longer restrict them when running against an
HA-enabled HDFS cluster. Instead, the plugin will rely on the provided security policy and not
further restrict the permissions, so that the transparent failover to a different Namenode in
the client does not raise security exceptions. Additionally, we are now testing the secure mode
with SASL-based wire encryption of data between Elasticsearch and HDFS. This change includes a
previously missing library (commons-codec) in order to support it.
#27611 broke the docs tests because $node_name in the URL doesn't seem to be replaced (#27616).
Changing this to a * to match all nodes seems to fix the test.
* Add accounting circuit breaker and track segment memory usage
This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.
The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.
The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.
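An illustrative sketch of the tracking; the class and field names are hypothetical, not the actual code:

```java
import org.elasticsearch.common.breaker.CircuitBreaker;

final class SegmentMemoryTracker {
    private long reported; // segment bytes last reported to the breaker

    // addWithoutBreaking adjusts the breaker without ever throwing, so reaching
    // the limit cannot fail the shard; the usage still counts toward the parent.
    void update(final CircuitBreaker accountingBreaker, final long currentSegmentBytes) {
        accountingBreaker.addWithoutBreaking(currentSegmentBytes - reported); // may be negative
        reported = currentSegmentBytes;
    }
}
```

On relocation or deletion, the same call with zero current bytes returns the tracked memory to the breaker.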
Resolves #27044
When running unit tests directly from the IDE, this setting change is needed
in addition to the idea.no.launcher property that previous versions of IntelliJ
needed.
Today we maintain the size of the live version map to ensure that
we minimize rehashing. Yet, once we are idle or can issue a sync-commit,
we can resize it to its defaults to free up memory.
Relates to #27516
This change ensures that the temporary directory used for java.io.tmpdir
is a private temporary directory. To achieve this we use mktemp on macOS
and Linux to give us a private temporary directory and the value of the
environment variable TMP on Windows. For this to work with our
packaging, we add java.io.tmpdir=${ES_TMPDIR} to our packaged
jvm.options, we set ES_TMPDIR in the respective startup scripts, and
resolve the value of the template ${ES_TMPDIR} at startup.
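The packaged jvm.options line referenced above:

```
-Djava.io.tmpdir=${ES_TMPDIR}
```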
Relates #27609
Once a shard goes inactive, we want the shard to be refreshed if the
refresh interval is at its default, since we might be holding on to
unnecessary segments; in the inactive case indexing has stopped and we
can release old segments.
Relates to #27500
Add an index-level setting `index.analyze.max_token_count` to control
the number of generated tokens in the _analyze endpoint.
Defaults to 10000.
Throw an error if the number of generated tokens exceeds this limit.
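For example, lowering the limit for a single index (the index name is illustrative):

```json
PUT my_index
{
  "settings": {
    "index.analyze.max_token_count": 5000
  }
}
```

An `_analyze` call against `my_index` that produces more than 5000 tokens then fails with an error.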
Closes #27038