QA tests that use security need to use a trial license instead of a
basic license. Basic licenses do not enable security so these tests are
not running in the expected configuration. This can also lead to issues
that seem completely unrelated such as authentication failures right
after cluster formation.
The authentication failure right after cluster formation happens since
a request makes it past the authentication license checks and the
request starts the authentication process. During authentication the
cluster forms and the license is updated to a basic license, which
disables all realms. The request that is being authenticated then tries
to iterate over the realms, but the realms are empty and the request
cannot be authenticated. This results in an authentication failure even
though the credentials provided are correct.
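A minimal sketch of the failure mode, using hypothetical names (Realm, activeRealms): once the basic license empties the realm list, the in-flight request has nothing to iterate over and fails even with valid credentials.

```java
import java.util.Collections;
import java.util.List;

class RealmIterationSketch {
    // Hypothetical stand-in for a security realm.
    interface Realm {
        boolean authenticate(String token);
    }

    // After cluster formation the basic license disables all realms,
    // so the list an in-flight request iterates over is suddenly empty.
    private List<Realm> activeRealms = Collections.emptyList();

    boolean authenticate(String token) {
        for (Realm realm : activeRealms) {
            if (realm.authenticate(token)) {
                return true;
            }
        }
        // No realm accepted the request (there were none to try), so
        // authentication fails even though the credentials are correct.
        return false;
    }
}
```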
Closes #30306
When dealing with filtering, a composite aggregation might return empty
buckets (whose contents have all been filtered out), which get sent as
is to the client. Unfortunately, such a response was interpreted as "no
more data" instead of triggering a retry.
This has now changed: the listener keeps retrying until either the
query has ended or data passes the filter.
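A rough sketch of the new retry behaviour, with hypothetical helpers (Page, fetchNextPage); the real code works on composite aggregation responses, but the shape is the same: an empty, fully filtered page no longer ends the query unless there is no more data to page through.

```java
import java.util.List;

class FilteredPagingSketch {
    // Hypothetical abstraction of one composite aggregation page.
    interface Page {
        List<String> buckets();  // buckets that survived the filter
        boolean hasMore();       // whether an "after" key exists for the next page
    }

    interface Client {
        Page fetchNextPage();    // hypothetical: runs the query with the last "after" key
    }

    // Keep paging until data passes the filter or the query has ended,
    // instead of treating the first empty page as "no more data".
    static List<String> nextNonEmptyBuckets(Client client) {
        Page page = client.fetchNextPage();
        while (page.buckets().isEmpty() && page.hasMore()) {
            page = client.fetchNextPage();
        }
        return page.buckets();
    }
}
```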
Fix #30292
This commit fixes an issue with the data diagnostics where empty
buckets are not reported even though they should be. Once a job is
reopened, the diagnostics do not get initialized from the current data
counts (especially the latest record timestamp). The result is that if
the data that is sent has a time gap compared to the previous data,
that gap is not accounted for in the empty bucket count.
This commit fixes that by initializing the diagnostics with the current
data counts.
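A hedged sketch of the fix (class and field names are illustrative, not the actual ML classes): on reopen the diagnostics are seeded from the persisted data counts, so a time gap relative to previously seen data shows up as empty buckets.

```java
import java.time.Instant;

class DiagnosticsInitSketch {
    // Illustrative stand-in for a job's persisted data counts.
    static class DataCounts {
        Instant latestRecordTimestamp;
    }

    static class DataStreamDiagnostics {
        Instant latestRecordTimestamp;

        // Before the fix this state started out empty on job reopen; seeding it
        // from the current data counts lets gaps across a reopen be detected.
        DataStreamDiagnostics(DataCounts counts) {
            this.latestRecordTimestamp = counts.latestRecordTimestamp;
        }
    }
}
```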
Closes #30080
We were recently looking at bugs that can only occur if two different documents were indexed concurrently. For example, what happens if the local checkpoint advances above the sequence number of a document that's being indexed? That can only happen if another concurrent operation caused the checkpoint to advance. It has to be another document to allow concurrency, as we acquire a per-uid lock. While our investigation proved that the suspected bug doesn't exist, we still discovered that our unit testing coverage is not good enough to cover this case.
This PR extends the concurrent out-of-order replica processing test to use two documents in its history.
The Get Repositories response object held a list of RepositoryMetaData
entries. This object does not have the from/toXContent methods that are
needed to expose this to the high level REST client. The
RepositoriesMetaData, however, does, and it also contains a list of
RepositoryMetaData objects within it. So rather than duplicate this
logic or move it (RepositoriesMetaData is a fragment object used by
cluster state), the object holding state in the Response was changed to
use the RepositoriesMetaData instead. This also cleans up the read/write
methods in the response, as they can now use the read/write methods in
RepositoriesMetaData, which were also not present in the singular class.
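A simplified sketch of the resulting shape (the nested interfaces below are stand-ins for the real StreamOutput/XContentBuilder types): the response holds a RepositoriesMetaData and simply delegates serialization and XContent rendering to it.

```java
class GetRepositoriesResponseSketch {
    interface StreamOutput { }
    interface XContentBuilder { }

    // Simplified stand-in for the real cluster state fragment, which already
    // knows how to serialize itself and render XContent.
    static class RepositoriesMetaData {
        void writeTo(StreamOutput out) { /* writes each RepositoryMetaData entry */ }
        void toXContent(XContentBuilder builder) { /* renders each entry */ }
    }

    private final RepositoriesMetaData repositories;

    GetRepositoriesResponseSketch(RepositoriesMetaData repositories) {
        this.repositories = repositories;
    }

    void writeTo(StreamOutput out) {
        repositories.writeTo(out);        // reuse, instead of duplicating list serialization
    }

    void toXContent(XContentBuilder builder) {
        repositories.toXContent(builder); // reuse, instead of new XContent code in the response
    }
}
```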
Each test now uses its own watch id.
This ensures there are no old history entries, which can potentially
lead to broken test assertions.
Uses a filter on the copy task for the eclipse settings files to
replace the token @@LICENSE_HEADER_TEXT@@ with the correct license
header from the new buildSrc/src/main/resources/license-headers
directory
The current implementation starts/stops watcher using an executor. This
can result in out-of-order operations.
This commit reduces those executor calls to an absolute minimum in order
to be able to do state changes within the cluster state listener method,
which runs in sequence.
When a state change occurs that forces the watcher service to pause
(like no watcher index, no master node, no local shards), the service is
now in a paused state.
Pausing is a super lightweight operation, which marks the
ExecutionService as paused and waits for the currently executing watches
to finish in the background via an executor. The same applies to
stopping: the potentially long running operation is offloaded to an
executor, as waiting for executed watches is decoupled from the current
state.
The only other long running operation is starting, where watches need to
be loaded. This is also done via an executor, but has an additional
protection: it checks the cluster state version it was started with. If
another cluster state version has since triggered loading the watches,
this load will not take effect.
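A rough sketch of that version guard (names are illustrative): the load runs on an executor, and its result only takes effect if no other cluster state has requested a start in the meantime.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

class WatcherStartSketch {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    // Cluster state version for which the most recent start was requested.
    private final AtomicLong requestedStateVersion = new AtomicLong(-1);

    // Hypothetical entry point, called from the cluster state listener (which runs in order).
    void startForClusterStateVersion(long stateVersion) {
        requestedStateVersion.set(stateVersion);
        executor.execute(() -> {
            loadWatches(); // potentially long running
            if (requestedStateVersion.get() != stateVersion) {
                return; // another cluster state superseded this load; discard it
            }
            markStarted();
        });
    }

    private void loadWatches() { /* load watches from the watch index */ }

    private void markStarted() { /* flip the service into started state */ }
}
```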
This PR also cleans up some unused state, like the simple boolean in
the HistoryStore/TriggeredWatchStore marking it as started or stopped,
as this can now be handled in the execution service.
Another advantage of this approach is that now only triggered watches
are not executed while watcher is stopped, whereas watches that are run
via the Execute Watch API will still be executed regardless of whether
watcher is stopped or not.
Lastly the TickerScheduleTriggerEngine thread now only starts on data nodes.
This commit is a follow up to #30135. It updates the stream
compatibility versions in the start_trial requests and responses to
reflect the fact that this work has been backported to 6.3.
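For illustration only, this is the usual shape of such a change (the field below is hypothetical, the pattern is not): serialization code that was guarded by a later version constant now checks against Version.V_6_3_0.

```java
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;

import java.io.IOException;

class StartTrialSerializationSketch {
    private boolean acknowledged; // hypothetical field guarded by the version check

    void readFrom(StreamInput in) throws IOException {
        // After the backport, nodes on 6.3.0 and later send this part of the message.
        if (in.getVersion().onOrAfter(Version.V_6_3_0)) {
            acknowledged = in.readBoolean();
        }
    }
}
```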
This commit adds setting the homedir for the elasticsearch user to the
adduser command in the packaging preinstall script. While the
elasticsearch user is a system user, it is sometimes convenient to have
an existing homedir (even if it is not writeable). For example, running
cron as the elasticsearch user will try to change dir to the homedir.
closes #14453
Fix NPE when CumulativeSum agg encounters null/empty bucket
If the cusum agg encounters a null value, it's because the value is
missing (like the first value from a derivative agg), the path is
not valid, or the bucket in the path was empty.
Previously cusum would just explode on the null, but this changes it
so we only increment the sum if the value is non-null and finite.
This is safe because even if the cusum encounters all null or empty
buckets, the cumulative sum is still zero (like how the sum agg returns
zero even if all the docs were missing values).
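A condensed sketch of that behaviour (a standalone method, not the actual aggregator code): the running sum only advances for values that are present and finite, so null or empty buckets simply carry the current sum forward.

```java
import java.util.Arrays;
import java.util.List;

class CumulativeSumSketch {
    // null entries stand for missing values (e.g. the first bucket of a derivative,
    // an invalid path, or an empty bucket).
    static double[] cumulativeSum(List<Double> bucketValues) {
        double[] result = new double[bucketValues.size()];
        double sum = 0.0;
        for (int i = 0; i < bucketValues.size(); i++) {
            Double value = bucketValues.get(i);
            // Only add values that are present and finite; otherwise keep the
            // current sum, which is what the fixed aggregation does instead of
            // throwing an NPE.
            if (value != null && Double.isFinite(value)) {
                sum += value;
            }
            result[i] = sum;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(cumulativeSum(Arrays.asList(null, 1.0, null, 2.0))));
        // prints [0.0, 1.0, 1.0, 3.0]
    }
}
```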
I went ahead and tweaked AggregatorTestCase to allow testing pipelines,
so that I could delete the IT test and reimplement it as AggTests.
Closes #27544
Necessary changes so that the licensing functionality can be
used in a JVM in FIPS 140 approved mode.
* Uses adequate salt length in encryption
* Changes key derivation to PBKDF2WithHmacSHA512 from a custom
approach with SHA512 and manual key stretching
* Removes redundant manual padding
Other relevant changes:
* Uses the SHA512 hash instead of the encrypted key bytes as the
key fingerprint to be included in the license specification
* Removes the explicit verification check of the encryption key
as this is implicitly checked in signature verification.
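A minimal sketch of PBKDF2-based key derivation of the kind described above; the salt length, iteration count, and key size here are illustrative, not the values used by the licensing code.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;
import java.security.spec.KeySpec;

class KeyDerivationSketch {
    // Derive key material with PBKDF2WithHmacSHA512, which is acceptable in FIPS 140
    // approved mode, instead of a hand-rolled SHA-512 + manual key-stretching scheme.
    static byte[] deriveKey(char[] passphrase) throws Exception {
        byte[] salt = new byte[32];                                   // illustrative salt length
        new SecureRandom().nextBytes(salt);
        KeySpec spec = new PBEKeySpec(passphrase, salt, 10_000, 256); // illustrative parameters
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        return factory.generateSecret(spec).getEncoded();
    }
}
```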
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
Suggester Options have a collate match field that is returned when the prune
option is set to true. These values should be merged together in the query
reduce phase, otherwise good suggestions that produce rare hits in
shards whose results do not arrive first may be incorrectly marked as
not matching the collate query.
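A hedged illustration of the intended merge (the types are simplified stand-ins): when the same suggestion option is reduced across shards, the collate match flags are combined with a logical OR, so a match found on any shard is preserved.

```java
class CollateMatchMergeSketch {
    // Simplified stand-in for a suggester option carrying the collate match flag.
    static class Option {
        final String text;
        boolean collateMatch;

        Option(String text, boolean collateMatch) {
            this.text = text;
            this.collateMatch = collateMatch;
        }
    }

    // If any shard saw the suggestion match the collate query, the merged option
    // must keep collateMatch = true, even when other shards reported no match.
    static void merge(Option into, Option other) {
        into.collateMatch = into.collateMatch || other.collateMatch;
    }
}
```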
At the end of recovery, we mark the recovering shard as "in sync" on the primary. From this point on
the primary will treat any replication failure on it as critical and will reach out to the master to fail the
shard. To do so, we wait for the local checkpoint of the recovered shard to be above the global
checkpoint (in order to maintain the global checkpoint invariant).
If the master decides to cancel the allocation of the recovering shard while we wait, the method can
currently hang and fail to return. It will also ignore the interrupts that are triggered by the cancelled
recovery due to the primary closing.
Note that this is crucial as this method is called while holding a primary permit. Since the method
never comes back, the permit is never released. The unreleased permit will then block any primary
relocation *and* while the primary is trying to relocate all indexing will be blocked for 30m as it
waits to acquire the missing permit.
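A rough sketch of the safer wait (names are illustrative, not the real IndexShard code): the wait for the local checkpoint must propagate interruption, so a cancelled recovery can abort it and the primary permit gets released.

```java
class CheckpointWaitSketch {
    private final Object mutex = new Object();
    private volatile long localCheckpoint = -1;
    private volatile long globalCheckpoint = -1;

    // Wait until the recovering shard's local checkpoint catches up with the global
    // checkpoint. Propagating InterruptedException lets a cancelled recovery (which
    // interrupts this thread) abort the wait instead of hanging while holding a permit.
    void waitForLocalCheckpointToAdvance() throws InterruptedException {
        synchronized (mutex) {
            while (localCheckpoint < globalCheckpoint) {
                mutex.wait();
            }
        }
    }

    // Called whenever the recovering shard reports a new local checkpoint.
    void onLocalCheckpointUpdated(long checkpoint) {
        synchronized (mutex) {
            localCheckpoint = checkpoint;
            mutex.notifyAll();
        }
    }
}
```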
Systemd overrides should happen through /etc/systemd/system, not
directly editing the service file. This commit removes marking the
service file as configuration for rpm and deb packages.
* SQL: Reduce number of ranges generated for comparisons
Rewrote optimization rule for combining ranges by improving the
detection of binary comparisons in a tree to better combine
them in a range, regardless of their place inside an expression.
Additionally, improve the comparisons of Numbers of different types
Also, improve reassembly of conjunction/disjunction into balanced
trees.
Do not promote BinaryComparisons to Ranges since it introduces NULL
boundaries and thus a corner-case that needs too much handling
Compare BinaryComparisons directly between themselves and to Ranges
Fix#30017
This commit refactors VersionUtils.resolveReleasedVersions to be
simpler, and in the process fixes the behavior to match that of
VersionCollection.groovy.
closes #30133
The code in `SourceRecoveryHandler` runs under a `CancellableThreads` instance in order to allow long running operations to be interrupted when the recovery is cancelled. Sadly, if this happens at just the wrong moment while acquiring a permit from the primary, that permit can be leaked and never be freed.
Note that this is slightly better than it sounds - we only cancel recoveries on the source side if the primary shard itself is closed.
Relates to https://github.com/elastic/elasticsearch/pull/30316
This commit adds a tombstone document into Lucene for every No-op.
With this change, the Lucene index is expected to have a complete
history of operations, just like the translog. In fact, this guarantee
is subject to the soft-deletes retention merge policy.
Relates #29530
If the elasticsearch-env bash script chooses $ES_TMPDIR
then it also creates the directory. This change makes
elasticsearch-env.bat do the same thing: if %ES_TMPDIR%
is chosen by the script then the script will ensure it
exists, but if %ES_TMPDIR% is already set then the user
is responsible for creating it.
Relates #27609
Relates #28217
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
This adds a new `_ignored` meta field which indexes and stores fields that have
been ignored at index time because of the `ignore_malformed` option. It makes
malformed documents easier to identify by using `exists` or `term(s)` queries
on the `_ignored` field.
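For example, using the Java query builders, one could find documents that had at least one field ignored at index time (a sketch; the "price" field name is hypothetical):

```java
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

class IgnoredFieldQuerySketch {
    // Matches every document in which at least one field was ignored because of ignore_malformed.
    static QueryBuilder anyIgnoredField() {
        return QueryBuilders.existsQuery("_ignored");
    }

    // Matches documents in which the given field (e.g. a hypothetical "price" field) was ignored.
    static QueryBuilder ignoredField(String fieldName) {
        return QueryBuilders.termQuery("_ignored", fieldName);
    }
}
```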
Closes #29494
Similarly to what has been done for the repository-s3 plugin, this
pull request moves the fixture test into a dedicated
repository-azure/qa/microsoft-azure-storage project.
It also exposes some environment variables which allow running the
integration tests against the real Azure Storage service. When the
environment variables are not defined, the integration tests are
executed using the fixture added in #29347.
Closes #29349
* master:
Fix message content in users tool (#30293)
[DOCS] Fixes links to breaking changes
[DOCS] Adds new installation package details (#29590)
Revert "Build: Move gradle wrapper jar to a dot dir (#30146)"
The elasticsearch-users utility had various messages that were
outdated or incorrect. This commit updates the output from this
command to reflect current terminology and configuration.