Make all bool constructs use match/should (that is, a query context), as
ES automatically controls and changes that to a filter context
based on the sort order (_doc, field vs _sort) and trackScores.
Fix #29685
This change is to support rolling upgrade from a pre-6.3 default
distribution (i.e. without X-Pack) to a 6.3+ default distribution
(i.e. with X-Pack).
The ML metadata is no longer eagerly added to the cluster state
as soon as the master node has X-Pack available. Instead, it
is added when the first ML job is created.
As a result, all methods that get the ML metadata need to be able
to handle the situation where there is no ML metadata in the
current cluster state. They do this by behaving as though an
empty ML metadata was present. This logic is encapsulated by
always asking for the current ML metadata using a static method
on the MlMetadata class.
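For illustration, here is a minimal, self-contained sketch of that accessor pattern; the class, the "ml" custom key, and the EMPTY constant are stand-ins, not the actual MlMetadata implementation:

```java
import java.util.Collections;
import java.util.Map;

// Illustrative stand-in for the real MlMetadata: callers always go through the
// static accessor and never have to null-check the cluster state custom.
public final class MlMetadataSketch {

    public static final MlMetadataSketch EMPTY = new MlMetadataSketch(Collections.emptyMap());

    private final Map<String, Object> jobs;

    private MlMetadataSketch(Map<String, Object> jobs) {
        this.jobs = jobs;
    }

    public Map<String, Object> getJobs() {
        return jobs;
    }

    // Behaves as though an empty ML metadata was present when none exists yet.
    public static MlMetadataSketch getMlMetadata(Map<String, Object> clusterStateCustoms) {
        Object ml = clusterStateCustoms.get("ml");
        return ml instanceof MlMetadataSketch ? (MlMetadataSketch) ml : EMPTY;
    }
}
```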
Relates #30731
By default ML native processes are only allowed to use
30% of RAM, so the previous 2GB setting prevented the
test passing on VMs with only 4GB RAM. This change
reduces the limit to 1200MB, which means it can now
pass on VMs with 4GB RAM.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
As the first record is random, there's a chance it will
be aligned on a bucket start. Thus we need to check the
bucket count is in [23, 24].
Closes #30715
These tests aim to check that the configured model memory limit is
respected. Additionally, they were asserting counts of
partition, by, and over fields in an attempt to check that
the used memory is spent meaningfully. However, this
made the tests fragile, as changes in ml-cpp could
lead to CI failures.
This commit removes those assertions. We are working on
adding tests in ml-cpp that will compensate.
This change manages temporary storage based on available disk space
and creates a subfolder for storing data outside of Lucene
indexes, but as part of the ES data paths.
Details:
- tmp storage is managed and does not allow allocation if disk space is
below a threshold (5GB at the moment)
- tmp storage is supposed to be managed by the native component, but in
case this fails, cleanup is provided:
- on job close
- on process crash
- after node crash, on restart
- available space is re-checked for every forecast call (the native
component has to check again before writing)
Note: The 1st path that has enough space is chosen on job open (job
close/reopen triggers a new search)
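A hedged sketch of the free-space check described above, using plain java.nio; the threshold constant, folder name, and method names are illustrative rather than the actual ML plugin code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class TmpStorageSketch {

    // Illustrative threshold matching the 5GB value mentioned above.
    private static final long MIN_FREE_BYTES = 5L * 1024 * 1024 * 1024;

    /**
     * Returns a temp subfolder on the first data path with enough usable
     * space, or null if no path qualifies (the forecast should be rejected).
     */
    public static Path firstPathWithSpace(Path... dataPaths) throws IOException {
        for (Path dataPath : dataPaths) {
            long usable = Files.getFileStore(dataPath).getUsableSpace();
            if (usable > MIN_FREE_BYTES) {
                // Hypothetical subfolder for ML temp data inside the ES data path.
                return Files.createDirectories(dataPath.resolve("ml-tmp"));
            }
        }
        return null;
    }
}
```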
Watcher tests now always fail hard when watches that a test tried to
trigger using the trigger() method could not be triggered because they
were not found on any of the nodes in the cluster.
This change adds version information in case a native ML process crashes; the version is important for choosing the right symbol files when analyzing the crash. Adding the version combines all necessary information on one line.
relates elastic/ml-cpp#94
It is possible for state documents to be
left behind in the state index. This may be
because of bugs or uncontrollable scenarios.
In any case, those documents may take up quite
some disk space when they add up. This commit
adds a step in the expired data deletion that
is part of the daily maintenance service. The
new step searches for state documents that
do not belong to any of the current jobs and
deletes them.
Closes #30551
Adjust fast forward for token expiration test
Adjusts the maximum fast forward time for token expiration tests
to be 5 seconds before actual token expiration so that the test
won't fail even when the upper limit is randomly selected.
Resolves: #30062
The part of the history template responsible for slack attachments had a
dynamic mapping configured, which could lead to problems when a string
value looking like a date was configured in the value field of an
attachment.
This commit fixes the template by always setting this field to text.
This also requires a change in the template numbering to be sure this
will be applied properly when starting watcher.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple of tests that were missing task dependencies and
failed once the gradle execution order changed.
Removes the dependency on the server's version from the JDBC driver code. This
should allow us to dramatically reduce the driver's size by removing the
server dependency from the driver.
Relates #29856
This commit increases the logging level around search to aid in
debugging failures in LicensingTests#testSecurityActionsByLicenseType
where we are seeing an all-shards-failed error while trying to search the
security index.
See #30301
This change adds a `listTasks` method to the high level java
ClusterClient which allows listing running tasks through the
task management API.
Related to #27205
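A hedged usage sketch; the exact method signatures on the high level client have changed across versions, so treat the calls below as assumptions rather than the definitive API:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ListTasksExample {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            ListTasksRequest request = new ListTasksRequest();
            request.setDetailed(true); // include task descriptions
            // Method added by this change; signature assumed.
            ListTasksResponse response = client.cluster().listTasks(request);
            response.getTasks().forEach(task -> System.out.println(task.getDescription()));
        }
    }
}
```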
* Refactors ClientHelper to combine header logic
This change removes all the `*ClientHelper` classes which were
repeating logic between plugins and instead adds
`ClientHelper.executeWithHeaders()` and
`ClientHelper.executeWithHeadersAsync()` methods to centralise the
logic for executing requests with stored security headers.
* Removes Watcher headers constant
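A minimal, self-contained sketch of the centralised pattern; the real ClientHelper works with Elasticsearch's ThreadContext and ActionListener, so the types below are simplified stand-ins:

```java
import java.util.Map;
import java.util.function.Supplier;

public final class ClientHelperSketch {

    private ClientHelperSketch() {}

    /** Simplified stand-in for Elasticsearch's ThreadContext. */
    public interface RequestContext {
        Map<String, String> snapshot();
        void putAll(Map<String, String> headers);
        void restore(Map<String, String> snapshot);
    }

    /**
     * Runs the client call with the stored security headers applied to the
     * request context, restoring the previous context afterwards.
     */
    public static <T> T executeWithHeaders(Map<String, String> storedHeaders,
                                           RequestContext context,
                                           Supplier<T> clientCall) {
        Map<String, String> previous = context.snapshot();
        try {
            context.putAll(storedHeaders);
            return clientCall.get();
        } finally {
            context.restore(previous);
        }
    }
}
```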
When the encryption of sensitive data is enabled, test that a
scheduled watch is executed as expected and produces the correct value
from a secret in the basic auth header.
Make SSLContext reloadable
This commit replaces all custom KeyManagers and TrustManagers
(ReloadableKeyManager, ReloadableTrustManager,
EmptyKeyManager, EmptyTrustManager) with instances of
X509ExtendedKeyManager and X509ExtendedTrustManager.
This change was triggered by the effort to allow Elasticsearch to
run in a FIPS-140 environment. In JVMs running in FIPS approved
mode, only SunJSSE TrustManagers and KeyManagers can be used.
Reloadability is now ensured by a volatile instance of SSLContext
in SSLContextHolder.
SSLConfigurationReloaderTests use the reloadable SSLContext to
initialize HTTP Clients and Servers and use these for testing the
key material and trust relations.
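A hedged sketch of the reload mechanism; the real SSLContextHolder in the security plugin differs, this only illustrates the volatile-swap idea:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;

// Consumers always obtain engines through the holder, so reloading key
// material only requires swapping the volatile SSLContext reference.
public final class SslContextHolderSketch {

    private volatile SSLContext context;

    public SslContextHolderSketch() throws NoSuchAlgorithmException, KeyManagementException {
        SSLContext initial = SSLContext.getInstance("TLSv1.2");
        initial.init(null, null, null); // JVM default key/trust material, for the sketch only
        this.context = initial;
    }

    public SSLEngine createEngine() {
        return context.createSSLEngine();
    }

    /** Called after key or trust material on disk has changed. */
    public void reload(SSLContext rebuiltContext) {
        this.context = rebuiltContext;
    }
}
```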
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. Cors,
pipelining, and read timeouts will need to be added in future PRs.
This commit exposes the master version to the REST test context. This
will be needed in a follow-up where the master version will be used to
determine whether or not a certain warning header is expected.
Due to the way composite aggregation works, ordering in GROUP BY can be
applied only through grouped columns, which the analyzer verifier now
enforces.
Fix #29900
This commit removes the SecurityLifecycleService, relegating its former
functions of listening for cluster state updates to SecurityIndexManager
and IndexAuditTrail.
This change adds a grok_pattern field to the GET categories API
output in ML. It's calculated using the regex and examples in the
categorization result, and applying a list of candidate Grok
patterns to the bits in between the tokens that are considered to
define the category.
This can currently be considered a prototype, as the Grok patterns
it produces are not optimal. However, enough people have said it
would be useful for it to be worthwhile exposing it as experimental
functionality for interested parties to try out.
This is fixing an issue that has come up in some builds. In some
scenarios I see an assertion failure that we are trying to move to
application mode when we are not in handshake mode. What I think is
happening is that we are in handshake mode and have received the
completed handshake message AND an application message. While reading in
handshake mode we switch to application mode. However, there is still
data to be consumed so we attempt to continue to read in handshake mode.
This leads to us attempting to move to application mode again, throwing
an assertion.
This commit fixes this by immediately exiting the handshake mode read
method if we are no longer in handshake mode. Additionally, if we swap
modes during a read we attempt to read with the new mode to see if there
is data that needs to be handled.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, the
split API means they no longer need to resort only to reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
The errors were caused because release tests would use a copy of
the public key that was formatted differently. The change to the
public key format was introduced in [1].
The release tests Jenkins job has now been updated to use the correct
key format depending on the branch it runs on [2].
Closes #30430
[1] https://github.com/elastic/elasticsearch/pull/30251
[2] https://github.com/elastic/infra/pull/4944
This commit adds the ability to specify a plugin from maven for a
test cluster to use. Currently, only local projects may be used as
plugins, except when testing bwc, where the coordinates of the project
are used. However, that assumes all projects always keep the same
coordinates, or are even still plugins, which is no longer the case for
x-pack. The full cluster and rolling restart tests are changed to use
this new method when pulling x-pack versions before 6.3.0.
Modifies the SQL tests to use the new `Request` object flavored methods
introduced onto the `RestClient` in #29623. We'd like to remove the old
methods eventually so we should stop using them.
Since adding back the per-watch statistics, we do not need to access
every trigger engine implementation to get the current total job count.
This commit removes the unused methods to do so.
Tweak the return data, in particular with regard to ODBC columns, to
better align with the spec.
Fix order for SYS TYPES and TABLES according to the JDBC/ODBC spec.
Fix #30386, #30521
This commit cleans up some code in the FileUserPasswdStore and the
FileUserRolesStore classes. The maps used in these classes are volatile
so we need to make sure that we don't perform multiple operations with
the map unless we are sure we are using a reference to the same map.
The maps are also never null, but there were a few null checks in the
code that were not needed. These checks have been removed.
The TokenMetaData equals method compared byte arrays using `.equals` on
the arrays themselves, which is the equivalent of an `==` check. This
means that a separate byte[] with the same contents would not be
considered equivalent to the existing one, even though it should be.
The method has been updated to use `Arrays#equals` and similarly the
hashcode method has been updated to call `Arrays#hashCode` instead of
calling hashcode on the array itself.
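For reference, the corrected pattern looks like this (a simplified stand-in, not the actual TokenMetaData class):

```java
import java.util.Arrays;

// byte[].equals() and byte[].hashCode() are identity-based, so content
// comparison has to go through Arrays.equals and Arrays.hashCode.
public final class KeyAndTimestamp {

    private final byte[] keyBytes;
    private final long timestamp;

    public KeyAndTimestamp(byte[] keyBytes, long timestamp) {
        this.keyBytes = keyBytes;
        this.timestamp = timestamp;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        KeyAndTimestamp other = (KeyAndTimestamp) o;
        return timestamp == other.timestamp
                && Arrays.equals(keyBytes, other.keyBytes); // content comparison
    }

    @Override
    public int hashCode() {
        int result = Long.hashCode(timestamp);
        result = 31 * result + Arrays.hashCode(keyBytes);   // content-based hash
        return result;
    }
}
```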
These tests are both in the file `watcher/stats/10_basic`, and have been
failing fairly frequently over the last month with a start-up issue.
The issue is being tracked in #30298.
Dates internally contain milliseconds (which appear when converting them
to Strings); however, parsing does not accept them (and is strict).
The parser has been changed so that the date is mandatory but the time
(including its fractions such as millis) is optional.
Fix #30002
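The SQL parser's actual grammar differs, but the date-mandatory / time-optional idea can be sketched with java.time (illustrative only):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;
import java.time.temporal.TemporalAccessor;

public class OptionalTimeParsing {

    // The date part is mandatory; the time part, including fractional seconds, is optional.
    private static final DateTimeFormatter DATE_OPTIONAL_TIME = new DateTimeFormatterBuilder()
            .appendPattern("yyyy-MM-dd")
            .optionalStart()
            .appendLiteral('T')
            .appendPattern("HH:mm:ss")
            .optionalStart()
            .appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, true)
            .optionalEnd()
            .optionalEnd()
            .toFormatter();

    static LocalDateTime parse(String text) {
        TemporalAccessor parsed = DATE_OPTIONAL_TIME.parse(text);
        LocalDate date = LocalDate.from(parsed);
        LocalTime time = parsed.isSupported(ChronoField.HOUR_OF_DAY)
                ? LocalTime.from(parsed) : LocalTime.MIDNIGHT;
        return LocalDateTime.of(date, time);
    }

    public static void main(String[] args) {
        System.out.println(parse("2018-05-01"));              // time defaults to midnight
        System.out.println(parse("2018-05-01T10:20:30.123")); // millis accepted
    }
}
```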
This commit adds a general state listener to the SecurityIndexManager,
and replaces the existing health and up-to-date listeners with that. It
also moves helper methods relating to health to SecurityIndexManager
from SecurityLifecycleService.
This commit moves the generated-resources directory to be within
the build directory for the openldap-tests and saml-idp-tests
projects. Both projects create a generated-resources directory that
should have been in the build directory but were instead at the same
level as the build directory.
To conform to best practices, this change ensures that if a
SAML Response is signed, we verify the signature before processing
it any further. We were only checking the InResponseTo and
Destination attributes before potential signature validation but
there was no reason to do that up front either.
With the opening of xpack, we still retained a run task within
:x-pack:plugin. However, the root level run task also runs with the
default distribution. This change removes the extra run task inside
xpack in favor of using the root level task, and moves the
license/configuration code for run into the main run configuration.
This commit removes the hardcoded list of unconfigured ciphers in the
SslIntegrationTests. This list may include ciphers that are not
supported on certain JVMs. This list is replaced with code that
dynamically computes the set of ciphers that are not configured for
use by default.
The HTTPClient used in watcher is based on the apache http client. The
current client is using a lot of defaults - which are not always
optimal. Two of those defaults are the maximum number of total
connections and the maximum number of connections to a single route.
If one of those limits is reached, the HTTPClient waits for a connection
to be finished thus acting in a blocking fashion. In order to prevent
this when many requests are being executed, we increase the limit of
total connections as well as the connections per route (a route is
basically an endpoint, which also contains proxy information, but not
a URL, just hosts).
On top of that, an additional option has been set to evict
long-running connections, which can potentially be reused after some
time. As this requires an additional background thread, this required
some changes to ensure that the httpclient is closed properly. Also the
timeout for this can be configured.
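A hedged sketch of that kind of Apache HttpClient configuration; the limits and the idle timeout below are illustrative values, not the defaults watcher ships with:

```java
import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class WatcherHttpClientSketch {

    static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(100);          // raised total connection limit (illustrative)
        connectionManager.setDefaultMaxPerRoute(20); // raised per-route limit (illustrative)

        return HttpClientBuilder.create()
                .setConnectionManager(connectionManager)
                // Background eviction of idle connections; spawns a thread, so the
                // client must be closed properly to shut it down again.
                .evictIdleConnections(60, TimeUnit.SECONDS)
                .build();
    }
}
```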
This commit renames IndexLifecycleManager to SecurityIndexManager as it
is not actually a general purpose class, but specific to security. It
also removes indirection in code calling the lifecycle service, instead
calling the security index manager directly.
Starting watcher should wait for the watcher to be started before
marking the status as started, which is now done via a callback.
Also, reloading watcher could set the execution service to paused. This could
lead to watches not being executed, when run in tests. This fix does not
change the paused flag in the execution service, just clears out the
current queue and executions.
Closes #30381
Today when processing a request for a URL path for which we cannot find
a handler, we send back a plain-text response. Yet, we have the accept
header in our hand and can respect the accepted media type of the
request. This commit addresses this.
This commit removes the unnecessary transport_client cluster permission
from the role that is used as an example in our documentation. This
permission is not needed to use cross cluster search.
When validating the search request, we make sure any date_histogram
aggregations have timezones that match the jobs. But we didn't
do any such validation on range queries.
While it wouldn't produce incorrect results, it would be confusing
to the user as no documents would match the aggregation (because we
add a filter clause on the timezone for the agg).
Now the user gets an exception up front, and some helpful text about
why the range query didn't match, and which timezones are acceptable.
This commit updates the multi cluster search test with security so that
the user that is simply performing a multi cluster search does not have
any cluster permissions. This is done as none are needed by this user
and excess privileges could mask a behavior change.
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
When the watcher service pauses execution due to a cluster state update,
the trigger service and its engines also need to pause properly instead
of keeping going. This is also important when the .watches index is
deleted, so that watches don't stay in a triggered mode.
The IndexAndAliasesResolver resolves the indices and aliases for each
request and also handles local and remote indices. The current
implementation uses the ResolvedIndices class to hold the resolved
indices and aliases. While evaluating the indices and aliases against
the user's permissions, the final value for ResolvedIndices is
constructed. Prior to this change, this was done by creating a
ResolvedIndices for the first set of indices and then, for each
subsequent addition, creating a new ResolvedIndices object and merging
it with the existing one. With a small number of indices and aliases
this does not pose a large problem; however, as the number of
indices/aliases grows, more list allocations and array copies are
needed, resulting in a large amount of garbage and severely degraded
performance.
This change introduces a builder for ResolvedIndices that appends to
mutable lists until the final value has been constructed, which will
ultimately reduce the amount of garbage generated by this code.
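A simplified sketch of the builder idea (not the actual ResolvedIndices class): append to mutable lists while resolving, then build the immutable result once at the end.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class ResolvedIndicesSketch {

    private final List<String> local;
    private final List<String> remote;

    private ResolvedIndicesSketch(List<String> local, List<String> remote) {
        this.local = Collections.unmodifiableList(local);
        this.remote = Collections.unmodifiableList(remote);
    }

    public List<String> getLocal() { return local; }
    public List<String> getRemote() { return remote; }

    // Instead of merging immutable instances on every addition (copying lists
    // each time), accumulate in mutable lists and construct the value once.
    public static final class Builder {
        private final List<String> local = new ArrayList<>();
        private final List<String> remote = new ArrayList<>();

        public Builder addLocal(String index) { local.add(index); return this; }
        public Builder addRemote(String index) { remote.add(index); return this; }

        public ResolvedIndicesSketch build() {
            return new ResolvedIndicesSketch(new ArrayList<>(local), new ArrayList<>(remote));
        }
    }
}
```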
QA tests that use security need to use a trial license instead of a
basic license. Basic licenses do not enable security so these tests are
not running in the expected configuration. This can also lead to issues
that seem completely unrelated such as authentication failures right
after cluster formation.
The authentication failure right after cluster formation happens since
a request makes it past the authentication license checks and the
request starts the authentication process. During authentication the
cluster forms and the license is updated to a basic license, which
disables all realms. The request that is being authenticated then tries
to iterate over the realms, but the realms are empty and the request
cannot be authenticated. This results in an authentication failure even
though the credentials provided are correct.
Closes #30306
When dealing with filtering, a composite aggregation might return empty
buckets (which have been filtered), which get sent as-is to the client.
Unfortunately the listener interprets such a response as meaning there
is no more data, instead of retrying.
This has now changed and the listener keeps retrying until either the
query has ended or data passes the filter.
Fix #30292
This commit fixes an issue with the data diagnostics where
empty buckets are not reported even though they should be. Once
a job is reopened, the diagnostics do not get initialized from
the current data counts (especially the latest record timestamp).
The result is that if the data that is sent has a time gap compared
to the previous data, that gap is not accounted for in the empty bucket
count.
This commit fixes that by initializing the diagnostics with the current
data counts.
Closes #30080
Each test now has its own watch id that is being used.
This ensures there are no old history entries, which can potentially
lead to broken test assertions.
The current implementation starts/stops watcher using an executor. This
can result in out-of-order operations.
This commit reduces those executor calls to an absolute minimum in order
to be able to do state changes within the cluster state listener method,
which runs in sequence.
When a state change occurs that forces the watcher service to pause
(like no watcher index, no master node, no local shards), the service is
now in a paused state.
Pausing is a super lightweight operation, which marks the
ExecutionService as paused and waits for the currently executing watches
to finish in the background via an executor. The same applies to
stopping: the potentially long-running operation is outsourced to an
executor, as waiting for executed watches is decoupled from the current
state.
The only other long running operation is starting, where watches need to
be loaded. This is also done via an executor, but has an additional
protection by checking the cluster state version it was started with. If
another cluster state version was trying to load the watches, then this
loading will not take effect.
This PR also cleans up some unused state, like the simple boolean in
the HistoryStore/TriggeredWatchStore marking it as started or stopped,
as this can now be caught in the execution service.
Another advantage of this approach is that now only triggered
watches are not executed, while watches that are run via the
Execute Watch API will still be executed regardless of whether watcher
is stopped or not.
Lastly the TickerScheduleTriggerEngine thread now only starts on data nodes.
This commit is a follow up to #30135. It updates the stream
compatibility versions in the start_trial requests and responses to
reflect the fact that this work has been backported to 6.3.
Necessary changes so that the licensing functionality can be
used in a JVM in FIPS 140 approved mode.
* Uses adequate salt length in encryption
* Changes key derivation to PBKDF2WithHmacSHA512 from a custom
approach with SHA512 and manual key stretching
* Removes redundant manual padding
Other relevant changes:
* Uses the SHA512 hash instead of the encrypted key bytes as the
key fingerprint to be included in the license specification
* Removes the explicit verification check of the encryption key
as this is implicitly checked in signature verification.
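A hedged sketch of the PBKDF2 derivation mentioned above; the iteration count, salt size, and key length are illustrative, not the values the license tooling actually uses:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyDerivationSketch {

    static SecretKeySpec deriveKey(char[] passphrase)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        // Adequate salt length for PBKDF2 (illustrative: 16 random bytes).
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // PBKDF2WithHmacSHA512 replaces the former custom SHA512 + manual key stretching.
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 10_000, 256);
        byte[] keyBytes = factory.generateSecret(spec).getEncoded();
        spec.clearPassword();
        return new SecretKeySpec(keyBytes, "AES");
    }
}
```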
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
* SQL: Reduce number of ranges generated for comparisons
Rewrote optimization rule for combining ranges by improving the
detection of binary comparisons in a tree to better combine
them in a range, regardless of their place inside an expression.
Additionally, improve the comparisons of Numbers of different types
Also, improve reassembly of conjunction/disjunction into balanced
trees.
Do not promote BinaryComparisons to Ranges since it introduces NULL
boundaries and thus a corner-case that needs too much handling
Compare BinaryComparisons directly between themselves and to Ranges
Fix #30017
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
The elasticsearch-users utility had various messages that were
outdated or incorrect. This commit updates the output from this
command to reflect current terminology and configuration.
The variadic constructor was only used in a few places and the
RepositoriesMetaData class is backed by a List anyway, so just using a
List will make it simpler to instantiate it.
This commit increases the logging for authentication in the x-pack
multi-node qa test project. This is needed to assist in debugging HTTP
authorization failures in waiting for the second node in these tests.
See #30306
Cause the CLI to ignore commands that are empty or consist only of
newlines. This is a fairly standard thing for SQL CLIs to do.
It looks like:
```
sql> ;
sql>
|
| ;
sql> exit;
Bye!
```
I think I *could* have implemented this with a `CliCommand` that throws
out empty string but it felt simpler to bake it in to the `CliRepl`.
Closes #30000
xpack core contains a fork of `Cron` from quartz whose javadoc has a
`<table>` with non-html5 compatible stuff. This html5ifies the table and
switches the `:x-pack:plugin:core` project to building javadoc with
HTML5.
This commit refactors the DataStreamDiagnostics class
achieving the following advantages:
- simpler code; by encapsulating the moving bucket histogram
into its own class
- better performance; by using an array to store the buckets
instead of a map
- explicit handling of gap buckets; in preparation of fixing #30080
Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3) we may fail to properly replicate operations when a mapping update on master fails. If a bulk
operation needs a mapping update half way through, it will send a request to the master before continuing
to index the operations. If that request times out or isn't acked (i.e., even if one node in the cluster
didn't process it within 30s), we end up throwing the exception and aborting the entire bulk. This is
a problem because all operations that were processed so far are no longer replicated to the
replicas. Although these operations were never "acked" to the user (we threw an error), this causes the
local checkpoint on the replicas to lag (on 6.x) and the primary and replica to diverge.
This PR does a couple of things:
1) Most importantly, treat *any* mapping update failure as a document level failure, meaning only
the relevant indexing operation will fail.
2) Removes the mapping update callbacks from `IndexShard.applyIndexOperationOnPrimary` and
similar methods for simpler execution. We don't use exceptions any more when a mapping
update was successful.
I think we need to do more work here (the fact that a single slow node can prevent those mappings
updates from being acked and thus fail operations is bad), but I want to keep this as small as I can
(it is already too big).
The overall NOTICE file for the ML X-Pack module should
include the notices from the 3rd party C++ components as
well as the 3rd party Java components.
Currently, the only way to get the REST response for the `/_cluster/state`
call to return the `cluster_uuid` is to request the `metadata` metrics,
which is one of the most expensive response structures. However, external
monitoring agents will likely want the `cluster_uuid` to correlate the
response with other API responses whether or not they want cluster
metadata.
We had a number of awaitsFix links that weren't updated after the xpack
merge.
Where possible I changed the links to the new locations, but in some
circumstances the original ticket was closed (suggesting the awaitsfix
should be removed) or the status was otherwise unclear.
This commit moves the repository-s3 fixture test added in #29296 in a
new `repository-s3/qa/amazon-s3` project. This new project allows the
REST integration tests to be executed using the real S3 service when
all the required environment variables are provided. When no env var
is provided, then the tests are executed using the fixture added
in #29296.
The REST tests located in the `repository-s3` plugin project now only
verify that the plugin is correctly loaded.
The REST tests have been adapted to allow a bucket name and a base
path to be specified as env vars. This way it is possible to run the tests
with different base paths (could be anything, like a CI job name or a
branch name) without multiplying buckets.
Related to #29349
Email message IDs are supposed to be unique. In order to guarantee this,
we need to take the action id of a watch action into account as well,
not just the watch id from the watch execution context. This prevents
that two actions from the same watch execution end up with the same
message id.
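A minimal sketch of the id construction described above; the exact format watcher uses may differ:

```java
import java.util.Locale;

public class EmailMessageIdSketch {

    // Including both the watch id and the action id means two actions from the
    // same watch execution can no longer collide on the same message id.
    static String messageId(String watchId, String actionId, long executionTimeMillis) {
        return String.format(Locale.ROOT, "<%s.%s.%d@watcher>", watchId, actionId, executionTimeMillis);
    }

    public static void main(String[] args) {
        System.out.println(messageId("cluster_health_watch", "notify_ops", System.currentTimeMillis()));
    }
}
```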
This is related to #30134. It modifies the start_trial action to require
an acknowledgement parameter in the rest request to actually start the
trial license. There are backwards compatibility issues as prior ES
versions did not support this parameter. To handle this, it is assumed
that a request coming from a node prior to 6.3 is acknowledged, and
attempts to write a non-acknowledged request to a pre-6.3 node will
throw an exception.
Additionally this PR adds messages about the trial license the user is
generating.
Currently the test picks random java.util.TimeZone ids in some places.
Internally we still need to convert back to a joda DateTimeZone by id
occasionally (e.g. when serializing to pre-6.3 versions). There are
some deprecated "SystemV/*" time zones that Joda's DateTimeZone refuses
to convert. This change excludes those rare cases from the set of
allowed random time zones. It would be quite odd for them to appear in
practice.
Closes #30156
A few of the old-style license headers were kept around because their comment
string did not start with a space. This caused the license check to not
see them as licenses and skip them. This commit cleans them up.
This removes some monitoring tests that have been silenced for a long
time. These tests don't really provide any value for the upgrade suite and
they just create noise due to their occasional timing-related failures.
Add the oss tar distribution to the packaging test plugin. Test the oss
tar distribution in the core packaging tests, and the non-oss tar
distribution in the x-pack packaging tests.
In order to have less qa/ projects, this commit removes the
watcher-smoke-test-mustache and the watcher-smoke-test-painless projects
and moves the tests of both into the watcher-smoke-test projects.
This results in fewer projects to parse when calling gradle and a shorter test
time due to being able to run everything within one elasticsearch
instance.
Adds tasks that check that all the jars we build have LICENSE.txt
and NOTICE.txt files and that the files are correct. Sets check to
depend on these tasks.
This is mostly there for extra paranoia because we automatically
configure all Jar tasks to include the LICENSE.txt and NOTICE.txt
files anyway. But it is quite possible to add configuration to those
tasks that would override either file.
This causes check to depend on several more things than it used to.
Take, for example, javadoc:
check depends on the new verifyJavadocJarNotice which depends on
extractJavadocJar which depends on javadocJar which depends on
javadoc, this check now depends on javadoc.
The bundled configuration isn't recognised by eclipse so these
dependencies are missed when it imports the `x-pack:plugin:sql:jdbc`
project. This change makes these dependencies compile dependencies if
the build is running for Eclipse.
Tests need to wait for changes to the job's established memory usage to
propagate, and an over-enthusiastic optimisation meant jobs were updated
from stale state, causing recent changes to be lost.