* Introduce reusable QL plugin for SQL and EQL (#50815)
Extract reusable functionality from SQL into its own dedicated project QL.
Implemented as a plugin, it provides common components across SQL and the upcoming EQL.
While this commit is fairly large, for the most part it's just a big file move from the sql package to the newly introduced ql one.
(cherry picked from commit ec1ac0d463bfa12a02c8174afbcdd6984345e8b4)
* SQL: Fix incomplete registration of geo NamedWritables
(cherry picked from commit e295763686f9592976e551e504fdad1d2a3a566d)
* QL: Extend NodeSubclass to read classes from jars (#50866)
As the test classes are spread across more than one project, the Gradle
classpath contains not just folders but also jars.
This commit allows the test class to explore the archive content and
load matching classes from said source.
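For context, enumerating classes inside a jar on the classpath boils down to something like the following sketch (an illustrative helper, not the actual NodeSubclass code):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

class JarClassScanner {
    /** Collect the fully qualified names of all classes contained in the given jar. */
    static List<String> classNamesIn(String jarPath) throws IOException {
        List<String> names = new ArrayList<>();
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                String name = entry.getName();
                if (name.endsWith(".class") == false || entry.isDirectory()) {
                    continue;
                }
                // foo/bar/Baz.class -> foo.bar.Baz
                names.add(name.substring(0, name.length() - ".class".length()).replace('/', '.'));
            }
        }
        return names;
    }
}
```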
(cherry picked from commit 25ad74928afcbf286dc58f7d430491b0af662f04)
* QL: Remove implicit conversion inside Literal (#50962)
The Literal constructor performs an implicit conversion for each value it
is given, which turns out to have some subtle side-effects.
Improve MathProcessors to preserve the numeric type where possible
Fix a compatibility bug between dates and intervals
Preserve the source when folding inside the Optimizer
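To illustrate the kind of subtlety involved, here is a deliberately simplified, hypothetical example (not the actual QL Literal) of how an eager conversion in the constructor can lose the numeric type:

```java
// Hypothetical sketch: converting eagerly in the constructor silently widens
// integer values, so folding abs(-3) into a literal yields 3.0 (a double)
// instead of the original integer 3.
final class EagerLiteral {
    private final Object value;

    EagerLiteral(Object value) {
        // implicit conversion applied to every value handed in
        this.value = value instanceof Number ? ((Number) value).doubleValue() : value;
    }

    Object value() {
        return value;
    }

    public static void main(String[] args) {
        System.out.println(new EagerLiteral(Math.abs(-3)).value());   // prints 3.0, numeric type lost
    }
}
```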
(cherry picked from commit 9b73e225b0aa07a23859550fb117bae571a2b672)
* QL: Refactor DataType for pluggability (#51328)
Change DataType from enum to class
Break DataType enums into QL (default) and SQL types
Make data type conversion pluggable so that new types can be introduced
As part of the process:
- static type conversion in QL package (such as Literal) has been
removed
- several utility classes have been broken into base (QL) and extended
(SQL) parts based on type awareness
- operators (+,-,/,*) are
- due to extensibility, serialization of arithmetic operation has been
slightly changed and pushed down to the operator executor itself
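A rough sketch of what pluggable, per-module type conversion can look like (class and method names here are illustrative, not the actual QL/SQL types):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Data types as plain objects instead of enum constants, so an extending module
// (e.g. SQL) can introduce new types without touching the base QL code.
final class DataType {
    private final String name;

    DataType(String name) {
        this.name = name;
    }

    String typeName() {
        return name;
    }
}

final class Conversions {
    private static final Map<String, Function<Object, Object>> CONVERTERS = new HashMap<>();

    // each module registers the conversions it knows about
    static void register(DataType from, DataType to, Function<Object, Object> converter) {
        CONVERTERS.put(from.typeName() + "->" + to.typeName(), converter);
    }

    static Object convert(Object value, DataType from, DataType to) {
        Function<Object, Object> converter = CONVERTERS.get(from.typeName() + "->" + to.typeName());
        if (converter == null) {
            throw new IllegalArgumentException("cannot convert " + from.typeName() + " to " + to.typeName());
        }
        return converter.apply(value);
    }
}
```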
(cherry picked from commit aebda81b30e1563b877a8896309fd50633e0b663)
* Compilation fixes for 7.x
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This change starts moving `date_histogram` over to that new offset handling.
This is a redo of #50873 with more integration tests.
This reverts commit d114c9db3e1d1a766f9f48f846eed0466125ce83.
We've been parsing the `time_zone` parameter on `date_histogram` for a
while but it hasn't *done* anything. This wires it up.
Closes #45199
Inspired by #45200
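For reference, both parameters are exposed through the regular aggregation builder; a sketch using the Java API (the field name, offset and time zone are made up):

```java
import java.time.ZoneId;

import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;

class DateHistogramExample {
    static DateHistogramAggregationBuilder dailyBuckets() {
        // daily buckets that start 6 hours after midnight, in the given time zone
        return AggregationBuilders.dateHistogram("by_day")
            .field("@timestamp")
            .calendarInterval(DateHistogramInterval.DAY)
            .offset("+6h")
            .timeZone(ZoneId.of("Europe/Berlin"));
    }
}
```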
The DATE and DATESTAMP Grok patterns match 2-digit years
as well as 4-digit years. The pattern determination in
find_file_structure worked correctly in this case, but
the regex used to create a multi-line start pattern
assumed a 4-digit year. Also, the quick rule-out
patterns did not always correctly consider 2-digit years,
meaning that detection was inconsistent.
This change fixes both problems, and also extends the
tests for DATE and DATESTAMP to check both 2- and 4-digit
years.
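To make the problem concrete, a start pattern that hard-codes a 4-digit year will not match 2-digit-year timestamps; a small sketch with illustrative regexes (not the actual find_file_structure patterns):

```java
import java.util.regex.Pattern;

class YearPatternExample {
    public static void main(String[] args) {
        String twoDigitYearLine = "20-01-22 13:45:07,123 ERROR something went wrong";

        // start pattern assuming a 4-digit year: misses the line above
        Pattern fourDigitOnly = Pattern.compile("^\\d{4}-\\d{2}-\\d{2}");
        // start pattern allowing 2- or 4-digit years: matches both forms
        Pattern twoOrFourDigit = Pattern.compile("^\\d{2}(?:\\d{2})?-\\d{2}-\\d{2}");

        System.out.println(fourDigitOnly.matcher(twoDigitYearLine).find());   // false
        System.out.println(twoOrFourDigit.matcher(twoDigitYearLine).find());  // true
    }
}
```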
This commit enables autoscaling in docs tests for snapshot
builds. This is done so that when the autoscaling API docs are added,
the test cluster will already have been started with autoscaling
enabled and those APIs will be available for testing.
We shouldn't be using potentially changing versions of the cluster state
when answering a snapshot status API call by calling `SnapshotService#currentSnapshots` multiple times (each time using `ClusterService#state` under the hood); instead we should pass down the state from the transport action.
Having these APIs behave in a more deterministic way will make it easier to use them once parallel repository operations
are introduced.
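The underlying idea, reduced to a toy example (not the actual SnapshotsService code): capture the state once and hand that same instance to every helper, instead of letting each helper re-read a reference that other threads may swap out between calls.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class PassStateDownExample {
    static final class ClusterState {
        final long version;
        final List<String> snapshots;

        ClusterState(long version, List<String> snapshots) {
            this.version = version;
            this.snapshots = snapshots;
        }
    }

    // stands in for ClusterService#state, which other threads keep updating
    static final AtomicReference<ClusterState> CURRENT =
        new AtomicReference<>(new ClusterState(1, Arrays.asList("snap-1", "snap-2")));

    static String status() {
        ClusterState state = CURRENT.get();              // captured exactly once
        return "version=" + state.version + ", running=" + countRunning(state);
    }

    // helpers take the captured state as an argument rather than re-reading CURRENT
    static int countRunning(ClusterState state) {
        return state.snapshots.size();
    }
}
```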
We still test remote reindex against version 0.90. This failed on mac a
few times, and rather than spend time investigating it, we no longer
test remote reindex against 0.90 on mac.
Closes #51202
In 2bb31fe (v0.6.0!) we added DEBUG-level logging to the default config of
action loggers "for easier debugging". This change to the default config lives
on to this day. It does not obviously make debugging any easier any more, but
it does result in a good deal of log noise sometimes. This commit removes this
special case from the default config.
Closes #51198
* Fix TimeSeriesLifecycleActionsIT.testShrinkAction
Shrinking a 6-shard index to 3 shards can be quite time-consuming, and
assertBusy probes the conditions at exponentially growing intervals.
This separates the one assertion that was used for all the conditions
into multiple assertBusy statements and increases the timeout for waiting
for the shrink to complete.
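The shape of the change is roughly the following (a sketch only; the helper methods stand in for the real ILM test utilities):

```java
import java.util.concurrent.TimeUnit;

import org.elasticsearch.test.ESTestCase;

public class ShrinkWaitExampleTests extends ESTestCase {

    // Stand-ins for the real test helpers; illustrative only.
    private boolean shrunkenIndexExists() {
        return true;
    }

    private int shardCount() {
        return 2;
    }

    public void testConditionsCheckedSeparately() throws Exception {
        // one condition per assertBusy, each with its own generous timeout,
        // instead of a single assertBusy asserting everything at once
        assertBusy(() -> assertTrue(shrunkenIndexExists()), 1, TimeUnit.MINUTES);
        assertBusy(() -> assertEquals(2, shardCount()), 1, TimeUnit.MINUTES);
    }
}
```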
* Allow more time for shrink to complete
This commit allows more time for the shrink operation to complete in
testRetryFailedShrinkAction (separating the assertBusy calls too) and
testMoveToRolloverStep.
* Shrink to no more than 2 shards in tests
(cherry picked from commit 5fe780148fa3536915d61475b087896a5b9ace82)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
This change alters the way we run our test suites in
JVMs configured in FIPS 140 approved mode. It does so by:
- Configuring any given runtime Java in FIPS mode with the bundled
policy and security properties files, setting the system
properties java.security.properties and java.security.policy
with the == operator that overrides the default JVM properties
and policy.
- When runtime java is 11 and higher, using BouncyCastle FIPS
Cryptographic provider and BCJSSE in FIPS mode. These are
used as testRuntime dependencies for unit
tests and internal clusters, and the relevant jars are copied
explicitly to the lib directory of the testclusters used in REST tests
- When runtime java is 8, using BouncyCastle FIPS
Cryptographic provider and SunJSSE in FIPS mode.
Running the tests in FIPS 140 approved mode doesn't require any
additional configuration, either on CI workers or locally, and is
controlled by specifying -Dtests.fips.enabled=true
This commit overrides the stdout and stderr print streams so that they
are redirected to the main elasticsearch.log file. While the Elasticsearch
project ensures stdout and stderr are not written to, the JDK or 3rd
party libs may do this, which can be unexpected for users used to
looking at the elasticsearch log.
Closes #50156
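A minimal sketch of the general technique (the JDK's System.setOut/System.setErr plus a line-buffering stream; this is not the exact Elasticsearch bootstrap code):

```java
import java.io.OutputStream;
import java.io.PrintStream;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

class StdStreamRedirect {

    static void redirectToLog() {
        System.setOut(new PrintStream(new LoggingOutputStream(LogManager.getLogger("stdout")), true));
        System.setErr(new PrintStream(new LoggingOutputStream(LogManager.getLogger("stderr")), true));
    }

    // buffers bytes and forwards complete lines to the logger
    // (multi-byte characters are ignored here for brevity)
    static final class LoggingOutputStream extends OutputStream {
        private final Logger logger;
        private final StringBuilder buffer = new StringBuilder();

        LoggingOutputStream(Logger logger) {
            this.logger = logger;
        }

        @Override
        public synchronized void write(int b) {
            if (b == '\n') {
                logger.info(buffer.toString());
                buffer.setLength(0);
            } else {
                buffer.append((char) b);
            }
        }
    }
}
```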
If a worktree is used, say for 7.x, and packaging tests are run, the
build within the VM will fail due to the parent checkout not being
accessible. This is because the path of the worktree is for the host
system, not the VM. This commit makes the git info unknown, just as if
the .git directory did not exist.
It is the job of the http server transport to release the request in the handler,
but the mock fails to do so since we never override `incomingRequest`.
* Centralize mocks initialization in ILM steps tests
This change centralizes initialization of `Client`, `AdminClient`
and `IndicesAdminClient` for all classes extending `AbstractStepTestCase`.
This removes a lot of code duplication and makes it easier to write tests.
This also removes the need for `AsyncActionStep#setClient`.
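In practice the centralized set-up amounts to something like this Mockito wiring (a sketch; the real base class may differ in details):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.elasticsearch.client.AdminClient;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.IndicesAdminClient;

// Shared mock wiring that each step test previously repeated.
class StepTestMocks {
    final Client client = mock(Client.class);
    final AdminClient adminClient = mock(AdminClient.class);
    final IndicesAdminClient indicesClient = mock(IndicesAdminClient.class);

    StepTestMocks() {
        when(client.admin()).thenReturn(adminClient);
        when(adminClient.indices()).thenReturn(indicesClient);
    }
}
```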
* Unused imports removed
* Added missing tests
* Fix OpenFollowerIndexStepTests
* Check all snapshots in SnapshotLifecycleRestIT.testFullPolicy
Rather than check the first returned snapshot for a snapshot starting with `snap-` in
SnapshotLifecycleRestIT.testFullPolicy, this commit changes the test to find any snapshots starting
with `snap-`.
In the event that there are no snapshots (the failure case), this also exposes the full results map
so we can diagnose why a failure occurred.
Relates to #50358
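The intended check, sketched (illustrative, not the literal test code): look through all returned snapshots for a matching name rather than only the first one.

```java
import java.util.List;

class SnapshotNameCheck {
    // Instead of only inspecting the first returned snapshot, scan all of
    // them for one whose name starts with "snap-".
    static boolean anySnapshotStartsWithSnap(List<String> snapshotNames) {
        for (String name : snapshotNames) {
            if (name.startsWith("snap-")) {
                return true;
            }
        }
        return false;
    }
}
```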
* Use a more imperative style for checking
When we get Elasticsearch logs from journald, we want to fetch only log
messages from the last run. There are two reasons for this. First, if
there are many logs, we might get a string that's too large for our
utility methods. Second, when we're looking for a specific message or
error, we almost certainly want to look only at messages from the last
execution.
Previously, we've been trying to do this by clearing out the physical
files under the journald process. But there seems to be some contention
over these directories: if journald writes a log file in between when
our deletion command deletes the file and when it deletes the log
directory, the deletion will fail.
Instead, we can use the cursor capability of journald to retrieve journal
entries that occur only after a certain cursor. This avoids any effort
to interfere with the underlying file operations of journald.
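A sketch of the cursor-based approach, using journalctl's --show-cursor and --after-cursor options (the surrounding Java and the unit name are illustrative, not the actual packaging-test utilities):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

class JournaldCursorExample {

    /** Record where the journal currently ends, before the interesting run starts. */
    static String currentCursor() throws IOException {
        // prints the last entry followed by a line like "-- cursor: s=...;i=...;..."
        Process process = new ProcessBuilder("journalctl", "-n", "1", "--show-cursor").start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines()
                .filter(line -> line.startsWith("-- cursor:"))
                .map(line -> line.substring("-- cursor:".length()).trim())
                .collect(Collectors.joining());
        }
    }

    /** Fetch only the entries written after the recorded cursor. */
    static String logsSince(String cursor) throws IOException {
        Process process = new ProcessBuilder("journalctl", "-u", "elasticsearch", "--after-cursor", cursor).start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}
```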
Wait for the cluster to have settled down and to have the same accepted version on all nodes before
executing and cancelling the request, so that a slow cluster state (CS) accept on one node doesn't make it
fall behind and then get sent the full CS because of the diff-version mismatch, breaking the mechanics of this test.
Closes #51308
Added node closed exception to the retryable remote exceptions as it's possible to run into this exception instead of a connect exception when the master node is just shutting down but still responding to requests.
When building clusters for integration tests, today we install plugins
sequentially. We recently introduced the ability to install plugins in a
single invocation of the install plugin command. Using this can save
substantial time starting up JVMs. This commit changes the build
infrastructure to install multiple plugins at once when building
clusters for integration tests.
For the docs integration tests in particular, where we install many
plugins, this change makes a substantial difference. On my laptop, prior
to this change, installing the plugins sequentially took 115
seconds. After this change, it takes 14 seconds.
* Separate aliases used for tests in TimeSeriesLifecycleActionsIT
This is related to #51375 and will hopefully help illuminate why some of those tests are failing. This
commit switches the aliases used in the test to use a random alias name every time (since there were
some complaints in the tests about aliases having more than one write index). With this we hope to
determine the actual cause of the failure in the test.
This also adds additional information to the exception returned when calling move-to-step with the
incorrect current step.
* Fix rest test
* Fix Inconsistent Shard Failure Count in Failed Snapshots
This fix was necessary to allow for the below test enhancement:
We were not adding shard failure entries to a failed snapshot for those
snapshot entries that were never attempted because the snapshot failed during
the init stage and wasn't partial. This caused the never-attempted shard
snapshots to be counted towards the successful shard count, which seems
wrong and broke repository consistency tests.
Also, this change adjusts snapshot resiliency tests to run another snapshot
at the end of each test run to guarantee a correct `index.latest` blob exists
after each run.
Closes #47550
The way it was originally written, it sounds like
we are boosting at query time.
Of course, the effect is at query time,
but the point here is that boosting is done at index time.
* [ML][Inference] add tags url param to GET (#51330)
Adds a new URL parameter, `tags` to the GET _ml/inference/<model_id> endpoint.
This parameter allows the list of models to be further reduced to those that contain all the provided tags.
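For example, with the low-level REST client (the model ID pattern and tag values here are made up):

```java
import org.elasticsearch.client.Request;

class GetTrainedModelsByTag {
    static Request buildRequest() {
        // only return models that carry *all* of the given tags
        Request request = new Request("GET", "/_ml/inference/flight-delay-*");
        request.addParameter("tags", "prod,regression");
        return request;
    }
}
```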
Previously this test failed waiting for yellow:
https://gradle-enterprise.elastic.co/s/fv55holsa36tg/console-log#L2676
Oddly, cluster health returned a red status, but there were no unassigned, relocating or initializing shards.
Placed the wait for green in a try-catch block, so that when this fails again the cluster state gets printed.
Relates to #48381
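The pattern is roughly the following (an illustrative sketch against the integration-test base class; the real test and its helpers may differ):

```java
import org.elasticsearch.test.ESIntegTestCase;

public class WaitForGreenExampleTests extends ESIntegTestCase {

    public void testWaitForGreenPrintsStateOnFailure() {
        try {
            ensureGreen("some-index");
        } catch (AssertionError e) {
            // dump the full cluster state so an unexpected red status can be diagnosed
            logger.error("cluster state on failure: {}",
                client().admin().cluster().prepareState().get().getState());
            throw e;
        }
    }
}
```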