* Adds mutate function to various tests
Relates to #25929
* fix test
* implements mutate function for all single bucket aggs
* review comments
* convert getMutateFunction to mutateInstance
Currently there is an issue where the send listener is not called in the
nio transport when an exception is thrown during channel flush. This
leads to memory leaks. This commit ensures that the listener is called
in that case.
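As a hedged illustration of the failure mode (not the actual nio transport code), the sketch below shows the property the fix restores: the per-write listener must be completed even when the flush throws, otherwise whatever the caller is holding for that write is never released. The class and parameter names are purely illustrative.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.function.BiConsumer;

class FlushSketch {
    static void flush(SocketChannel channel, ByteBuffer buffer, BiConsumer<Void, Exception> listener) {
        try {
            while (buffer.hasRemaining()) {
                channel.write(buffer);        // simplified; real code handles partial writes
            }
            listener.accept(null, null);      // success
        } catch (IOException e) {
            listener.accept(null, e);         // the fix: notify the listener on failure too
        }
    }
}
```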
The s3 repository plugin has "third party" integ tests which rely
on an external service and configuration setup. These tests are really
internal verification of the plugin (and should be moved to real integ
tests). Running them is not something a user should do, and the
documentation has been out of date for all of 5.x. This commit removes
the docs, removing potential confusion for users.
This commit adds the nio transport as an option in place of the mock tcp
transport for tests. Each test will only use one transport type. The
transport type is decided by a random boolean generated inside of the
`ESTestCase` class.
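A minimal sketch of the selection mechanism, assuming hypothetical transport-type names; the real constants live in the respective transport plugins, but the per-test random boolean is the essential part.

```java
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.ESTestCase;

public abstract class RandomTransportTestCase extends ESTestCase {
    protected Settings transportTypeSettings() {
        // one random boolean per test decides which mock transport the test uses;
        // "mock-socket-network" and "mock-nio" are illustrative plugin names
        String type = randomBoolean() ? "mock-socket-network" : "mock-nio";
        return Settings.builder().put(NetworkModule.TRANSPORT_TYPE_KEY, type).build();
    }
}
```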
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.
Closes #25893
Closes #25870
We are currently quite lenient about the targets of `copy_to`. However in a
number of cases we can detect illegal use of `copy_to` at mapping update time.
For instance, it does not make sense to use object fields as targets of
`copy_to`, or fields that would end up in a different nested document.
When ES starts up we verify that we can write to all data folders and that they support atomic moves. We do so by creating and deleting temp files. If for some reason a file was successfully created but not successfully deleted, we still shut down correctly, but subsequent start attempts will fail with a file-already-exists exception.
This commit makes sure to first clean up any existing temporary files.
Supersedes #21007
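A hedged sketch of the cleanup-first idea; the `.es_temp_file` prefix is an assumption and the real startup checks do more (for example, verifying atomic moves), but the ordering is the point: delete leftovers before probing.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class StartupProbeSketch {
    static void verifyWritable(Path dataDir) throws IOException {
        // delete leftovers from a previous run before creating a fresh probe file
        try (DirectoryStream<Path> stale = Files.newDirectoryStream(dataDir, ".es_temp_file*")) {
            for (Path leftover : stale) {
                Files.deleteIfExists(leftover);
            }
        }
        Path probe = Files.createTempFile(dataDir, ".es_temp_file", null);
        Files.delete(probe);  // even if this delete fails, the next start cleans it up first
    }
}
```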
This commit fixes an issue with the Netty 4 multi-port test that a
transport client can connect. The problem here is that if the
bottom of the random port range was already bound (for example, by
another JVM) then the transport client could not connect to the data
node. This is because the transport client was in fact using the bottom
of the port range only. Instead, we simply try all the ports that the
data node might be bound to.
Closes #24441
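A hedged sketch of the test-side fix using plain JDK sockets: rather than assuming the node bound the first port of the range, probe every port in the range until one accepts a connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class PortScanSketch {
    static InetSocketAddress findBoundPort(String host, int firstPort, int lastPort) throws IOException {
        for (int port = firstPort; port <= lastPort; port++) {
            try (Socket socket = new Socket(host, port)) {
                return new InetSocketAddress(host, port);  // connected: the node is listening here
            } catch (IOException e) {
                // not bound by the data node (or taken by another process); keep trying
            }
        }
        throw new IOException("no port in [" + firstPort + "-" + lastPort + "] accepted a connection");
    }
}
```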
In the refresh REST tests we setup some persistent settings for debug
logging. In the teardown, we try to restore the logging level back to
info via another persistent setting but this is a mistake because other
tests check if there are no persistent settings. To fix this, we remove
the persistent setting that we added.
We are chasing a test failure in the "refresh=wait_for waits until
changes are visible in search" test yet the logs currently give us no
indication what is happening. This commit adds debug logging for this
test, and cleans up this logging in a teardown section. We can remove
this additional logging after we chase the test failure down.
This commit adds a small note to the discovery docs recommending
that the unicast hosts list be maintained as the list of
master-eligible nodes in the cluster.
Relates #25991
Some REST tests can rapid-fire script compilations that exceed the
default limit on script compilations per minute. Rather than subjecting ourselves
to spurious failures because of the limit being too low, we opt for a
larger limit here.
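A hedged sketch of what such a test-cluster override might look like; the setting name `script.max_compilations_per_minute` and the chosen value are assumptions about the era's limit, not taken from the commit itself.

```java
import org.elasticsearch.common.settings.Settings;

class ScriptLimitSketch {
    static Settings relaxedScriptCompilationSettings() {
        return Settings.builder()
            .put("script.max_compilations_per_minute", 1000)  // assumed setting name; default is far lower
            .build();
    }
}
```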
This commit updates the docs for the config files to explain the new
mechanism for customizing the configuration directory via the
environment variable CONF_DIR.
Relates #25990
ToXContentToBytes is used as a base class that adds toString and buildAsBytes method implementations to classes that implement ToXContent. With the ongoing cleanups, this class is limited and doesn't add a lot of value, given that buildAsBytes can be replaced with XContentHelper.toXContent and toString can be replaced with Strings.toString(this).
The plan would be to remove ToXContentToBytes entirely, and AbstractQueryBuilder is the first place where we can remove its usage.
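A hedged sketch of the replacement pattern described above: a builder that used to extend ToXContentToBytes can provide the same behavior directly via `Strings.toString(this)` and `XContentHelper.toXContent`. The class and method names below are illustrative.

```java
import java.io.IOException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;

abstract class ExampleBuilder implements ToXContentObject {
    @Override
    public final String toString() {
        return Strings.toString(this);                       // replaces ToXContentToBytes#toString
    }

    public final BytesReference toBytes(XContentType type) throws IOException {
        return XContentHelper.toXContent(this, type, false); // replaces ToXContentToBytes#buildAsBytes
    }
}
```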
During peer recoveries, we need to copy over lucene files and replay the operations they miss from the source translog. Guaranteeing that translog files are not cleaned up has seen many iterations over time. Back in the old 1.0 days, recoveries went through the Engine and actively prevented both translog cleaning and lucene commits. We then moved to a notion called Translog Views, which allowed the recovery code to "acquire" a view into the translog which is then guaranteed to be kept around until the view is closed. The Engine code was free to commit lucene and do whatever it wanted without coordinating with recoveries. Translog file deletion logic was based on reference counting at the file level. Those counters were incremented when a view was acquired but also when the view was used to create a `Snapshot` that allowed you to read operations from the files. At some point we removed the file-based counting complexity in favor of constructs on the Translog level that just keep track of "open" views and the minimum translog generation they refer to. To do so, Views had to be kept around until the last snapshot that was made from them was consumed. This was fine in recovery code but led to [a subtle bug](https://github.com/elastic/elasticsearch/pull/25862) in the [Primary Replica Resyncer](https://github.com/elastic/elasticsearch/pull/25862).
Concurrently, we have developed the notion of a `TranslogDeletionPolicy` which is responsible for the liveness aspect of translog files. This class makes it very simple to take translog Snapshots into account for keeping translog files around, allowing people that just need a snapshot to simply take one and not worry about views and such. Recovery code, which actually does need a view, can now prevent trimming by acquiring a simple retention lock (a `Closeable`). This removes the need for the notion of a View.
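A hedged sketch of the new recovery-side pattern; the method names (`acquireRetentionLock`, `newSnapshot`) follow the description above and may differ slightly in detail, but the try-with-resources shape is the point: translog files are retained only while the lock is held.

```java
import java.io.Closeable;
import org.elasticsearch.index.translog.Translog;

class RecoveryRetentionSketch {
    static void replayMissingOperations(Translog translog) throws Exception {
        try (Closeable ignored = translog.acquireRetentionLock()) {  // files kept while the lock is held
            Translog.Snapshot snapshot = translog.newSnapshot();
            Translog.Operation operation;
            while ((operation = snapshot.next()) != null) {
                // replay the operation on the recovering shard
            }
        } // closing the lock lets the TranslogDeletionPolicy trim old generations again
    }
}
```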
* Improves AbstractWireSerializingTestCase equals test
`AbstractWireSerializingTestCase.testEqualsAndHashcode()` now uses `EqualsHashCodeTestUtils` to perform the hashCode and equals checks. To support this, `AbstractWireSerializingTestCase` has two new methods: `getCopyFunction()` and `getMutateFunction()`, which are used when calling `EqualsHashCodeTestUtils`.
* Adds TODO
* Makes equivalent changes to AbstractStreamableTestCase
* corrects javadoc error
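A hedged sketch of how a concrete test might exercise the new hooks; the exact utility class and functional-interface names may differ slightly, and the `String`-based copy/mutate functions are purely illustrative.

```java
import org.elasticsearch.test.EqualsHashCodeTestUtils;

class EqualsAndHashCodeSketch {
    static void checkEqualsContract(String instance) {
        EqualsHashCodeTestUtils.checkEqualsAndHashCode(
            instance,
            original -> new String(original),       // copy function: equal but a distinct instance
            original -> original + "-mutated");     // mutate function: must not equal the original
    }
}
```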
We have been publishing javadocs to artifacts.elastic.co (and snapshots.elastic.co) for a while. This commit adds links to them on the transport client, low level REST client, sniffer and high level REST client pages.
Closes #23761
The following token filters were moved: delimited_payload_filter, keep, keep_types, classic, apostrophe, decimal_digit, fingerprint, min_hash and scandinavian_folding.
Relates to #23658
This commit removes an outdated reference to http_address in the nodes
info docs. This information is available in the http object for each
node in the nodes info API response.
Relates #25980
We set some limits in the service file for Elasticsearch when installed
as a service on systemd-based systems. This commit adds a packaging test
that verifies these limits are indeed set correctly.
Relates #25976
We have a bootstrap check for the maximum size of the virtual memory
address space for the Elasticsearch process. We can set this limit in the
service file for Elasticsearch when installed as a service on
systemd-based systems. This gives a better user experience than users
fumbling through thinking they should set this via /etc/security/limits.d
(as a lot of pages on the Internet would tell them), not realizing that
systemd completely ignores those limits for services, and then trying to
figure out how to add a unit file for the Elasticsearch service.
Relates #25975
The systemd service file that ships with Elasticsearch for installs on
systemd-based systems contains a suggestion for setting LimitMEMLOCK if
the user wants to enable bootstrap.memory_lock. However, setting
this in the installed service file goes against best practices for
working with systemd, and goes against our existing documentation for
how to set this. Therefore, we should not have this suggestion in the
service file, otherwise users might be led to think they should edit it
there.
Relates #25979
On non-Windows platforms, we ignore the environment variable
JAVA_TOOL_OPTIONS (this is an environment variable that the JVM respects
by default for picking up extra JVM options). The primary reason that we
ignore this is the Jayatana agent on Ubuntu; a secondary reason
is that it produces an annoying "Picked up JAVA_TOOL_OPTIONS: ..."
output message. When the elasticsearch-env batch script was introduced
for Windows, ignoring this environment variable was deliberately not
carried over as the primary reason does not apply on Windows. However,
after additional thinking, it seems that we should simply be consistent
to the extent possible here (and also avoid that annoying "Picked up
JAVA_TOOL_OPTIONS: ..." on Windows too). This commit causes the Windows
version of elasticsearch-env to also ignore JAVA_TOOL_OPTIONS.
Relates #25968
This commit removes some useless empty-lines checks from the evil JNA
tests. These checks are useless because if the lines are
actually empty, the for loop will never be entered and we will hit the
fail condition at the bottom as intended anyway.
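A hedged illustration (not the actual JNA test code) of why the pre-check was redundant: with an empty list the loop body never runs and the trailing `fail(...)` fires anyway.

```java
import java.util.List;
import static org.junit.Assert.fail;

class EmptyLinesSketch {
    static void assertAnyLineMatches(List<String> lines, String expected) {
        for (String line : lines) {   // never entered when lines is empty
            if (line.contains(expected)) {
                return;
            }
        }
        fail("did not find [" + expected + "] in " + lines);  // hit for empty input as well
    }
}
```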
This commit adds a bootstrap check for the maximum file size, and
ensures the limit is set correctly when Elasticsearch is installed as a
service on systemd-based systems.
Relates #25974
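A hedged sketch of the shape of such a check, not the actual BootstrapChecks code; the sentinel value and the error message are illustrative, and the real implementation reads the limit via JNA.

```java
class MaxFileSizeCheckSketch {
    static final long UNLIMITED = -1L;  // assumed sentinel for an "unlimited" RLIMIT_FSIZE

    static void check(long maxFileSize) {
        if (maxFileSize != UNLIMITED) {
            throw new IllegalStateException("max file size [" + maxFileSize
                + "] for user is too low, increase to [unlimited]");
        }
    }
}
```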
When invoking the elasticsearch-env.bat batch script on Windows, the
script can exit due to an error (e.g., Java can not be found, or the
wrong version of Java is found). Sadly, on Windows, this does not also
terminate the caller, which instead regains control. This means we have
to explicitly exit, so that is what we do in this commit.
Relates #25959
We have a command-line flag -V or --version that can be used to display
the version of Elasticsearch. However, the version that we display does
not contain whether or not the version is a snapshot build. This commit
changes the behavior here so that if the build is a snapshot, that is
included in the version string.
Relates #25970
Today we strip some ignored JVM options before starting the main Java
process (e.g., we unset JAVA_TOOL_OPTIONS, and we ignore
JAVA_OPTS). However, there is another Java process that we start before
starting the main process: the Java version checker. We are currently
starting this before ignoring the undesired JVM options so the Java
version checker will pick up JAVA_TOOL_OPTIONS and it will silently
ignore JAVA_OPTS. Instead, we should ignore JAVA_TOOL_OPTIONS here too,
and not silently ignore JAVA_OPTS but instead warn before doing so (as
we already do for the main Java process). This commit rearranges the
execution of these steps so that we do the right thing here.
Relates #25969
Previously we manually checked if mutually exclusive options are passed
on the command line. Yet, after an upgrade to our option parser
dependency, we were able to use built-in functionality to establish
these mutually exclusive options and the parser would take care of
checking if such options are passed on the command line. However, the
code that previously did the manual checking is now dead but was left behind. This
commit removes that dead code.
Relates #19278
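A hedged sketch of the parser-level approach that made the manual check redundant, using jopt-simple's `availableUnless`; the option names mirror the Elasticsearch CLI but the wiring here is illustrative.

```java
import java.util.Arrays;
import joptsimple.OptionParser;
import joptsimple.OptionSpecBuilder;

class MutuallyExclusiveOptionsSketch {
    public static void main(String[] args) {
        OptionParser parser = new OptionParser();
        OptionSpecBuilder version = parser.acceptsAll(Arrays.asList("V", "version"),
            "Prints version information and exits");
        parser.acceptsAll(Arrays.asList("d", "daemonize"),
            "Starts the process in the background")
            .availableUnless(version);  // the parser now rejects -d combined with -V
        parser.parse(args);             // throws joptsimple.OptionException on violation
    }
}
```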