In certain situations, such as when configured in FIPS 140 mode,
the Java security provider in use might throw a subclass of
java.lang.Error. We currently do not catch these, and as a result
the JVM exits, shutting down Elasticsearch.
This commit attempts to address this by catching the subclasses of Error
that might be thrown, for instance, when a PBKDF2 implementation
is used from a security provider in FIPS 140 mode with a password
input of less than 14 bytes (112 bits).
- In our PBKDF2 family of hashers, we catch the Error and
throw an ElasticsearchException while creating or verifying the
hash (see the sketch after this list). We throw on verification, instead of
simply returning false, on purpose so that the message bubbles up and the
cause becomes obvious (otherwise it would be indistinguishable from a wrong
password).
- In KeyStoreWrapper, we catch the Error in order to wrap and re-throw
a GeneralSecurityException with a helpful message. This can happen when
using any of the keystore CLI commands, when the node starts or when we
attempt to reload secure settings.
- In the `elasticsearch-users` tool, we catch the ElasticsearchException that
the Hasher class re-throws and throw an appropriate UserException.
Tests are missing because it's not trivial to set up CI in FIPS-approved mode
right now, and thus any tests would need to be muted. There is a parallel
effort in #64024 to enable that, and tests will be added in a follow-up.
KeyStoreAwareCommand attempted to deduce whether an error occurred
because of a wrong password by checking the cause of the
SecurityException that KeyStoreWrapper.decrypt() throws. Checking
for AEADBadTagException was wrong because that exception could be
(and usually is) wrapped in an IOException. Furthermore, since we
are doing the check already in KeyStoreWrapper, we can just return
the message of the SecurityException to the user directly, as we do
in other places.
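A hedged sketch of that simplification (package locations are assumed from 7.x, and the exit code choice is illustrative; this is not the exact KeyStoreAwareCommand code): instead of inspecting the cause chain for an AEADBadTagException, the message of the SecurityException is relayed to the user.
```
import org.elasticsearch.cli.ExitCodes;
import org.elasticsearch.cli.UserException;
import org.elasticsearch.common.settings.KeyStoreWrapper;
import org.elasticsearch.common.settings.SecureString;

public class KeyStoreDecryptSketch {
    static void decrypt(KeyStoreWrapper keyStore, SecureString passphrase) throws Exception {
        try {
            keyStore.decrypt(passphrase.getChars());
        } catch (SecurityException e) {
            // KeyStoreWrapper already detects a wrong password and produces a
            // user-facing message, so we simply pass it along.
            throw new UserException(ExitCodes.DATA_ERROR, e.getMessage());
        }
    }
}
```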
Adjusted the README file to mention the option to preserve the test
data both when simply reproducing/executing the tests and when starting
the server node manually and issuing the query(ies) against it.
Follows: #65400
(cherry picked from commit e3a1910d28d8b0ed20997754c74fa4d4d52cda15)
* Fix the `CombineBinaryComparisons` optimizer rule so that semantic
equality is taken into account during the optimization of `NotEquals`.
Examples that previously removed the `NotEquals` expressions (leading
to incorrect results):
```
double >= 10 AND integer != 9
--> double >= 10
keyword != '2021' AND datetime >= '2020-01-01T00:00:00'
--> datetime >= '2020-01-01T00:00:00'
```
With the fix, expressions like the above are left untouched.
`NotEquals` will only be eliminated from the `AND` expression if the
left side of the `NotEquals` is semantically equal (`semanticEquals()`)
to the left side of the other expressions within the conjunction, i.e.
comparisons against the same field/expression (see the example after this list).
* Unit tests and integration tests
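For illustration, here is a conjunction in the spirit of the examples above where both comparisons target the same field and can therefore still be safely reduced:
```
integer >= 10 AND integer != 9
--> integer >= 10
```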
Close #65322
(cherry-picked from 8b2b7fa)
Slack no longer recommends the legacy "integrations" setup (https://api.slack.com/legacy/custom-integrations/incoming-webhooks). Updated documentation to reference https://api.slack.com/messaging/webhooks instead.
Removed screenshots related to Slack setup from our documentation. We should avoid these screenshots (and simply point to the Slack documentation) because Slack may change its instructions/UI in the future.
Also added a short note on the use case of notifying multiple Slack channels.
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
In #65332, the serialization of the WatcherSearchTemplateRequest class
changed to use the IndicesOptions built-in XContent facilities. This had
the side effect of fixing the handling of `all` for `expand_wildcards`
to include hidden indices. However, the tests in WatcherUtilsTests were
missed. This change updates those tests.
Backport of #65379
Watcher has a search template that stores indices options to be used as
part of a search during watch execution, but this was not updated to be
aware of hidden indices and the `hidden` expand_wildcards option. This
change makes use of the `IndicesOptions#toXContent` method in Watcher,
which already handles the new value. Additionally, the XContent parsing
is moved to the IndicesOptions class so that we will be less likely to
miss updating this in the future.
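A hedged sketch of what this relies on (package locations and the constant name are assumptions based on 7.x; this is not the actual Watcher code): `IndicesOptions#toXContent` emits the wildcard states, including `hidden`, so Watcher no longer needs its own serialization logic.
```
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class IndicesOptionsXContentSketch {
    public static void main(String[] args) throws Exception {
        // assumption: a built-in constant whose wildcard expansion includes hidden indices
        IndicesOptions options = IndicesOptions.LENIENT_EXPAND_OPEN_HIDDEN;
        XContentBuilder builder = XContentFactory.jsonBuilder().startObject();
        options.toXContent(builder, ToXContent.EMPTY_PARAMS); // writes expand_wildcards, ignore_unavailable, ...
        builder.endObject();
        System.out.println(Strings.toString(builder));
    }
}
```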
Closes #65148
Backport of #65332
The recovery stats assertions in this test ran without waiting for
the recoveries to actually finish. The fact that they ran after the concurrent
searches checks generally meant that they would pass (because of searches warming caches
plus the general relative slowness of searches), but there are no hard guarantees this works
reliably, as the pre-fetch threads that update the recovery state might still randomly be slow
to do so, causing the assertions to trip.
closes #65302
If we fail to create the `FileChannelReference` (e.g. because the directory it should be created in
was deleted in a test) we have to remove the listener from the `listeners` set to not trip internal
consistency assertions.
Relates #65302 (does not fix it though, but reduces noise from failures by removing secondary
tripped assertions after the test fails)
In aa1ea96b8698aa12bed1c4e8d704882a2a639791 I made all
`testReduceRandom` tests for aggs mimic production more closely.
Specifically, they pick the correct "lead" result when performing
partial reduction. This is great, but, sadly, some tests assumed that we
always reduced against the "first" aggregator. This fixes those tests.
Closes #65163
This commit updates the IndexAbstractionResolver so that hidden indices
are properly resolved when date math is in use and when we are checking
if the index is visible.
Closes #65157
Backport of #65236
If a shard follow-task hits a non-retryable error and stops, then we
should also stop the retention-leases renewal process associated with
that follow-task.
This change fixes the initialization of the async results service
for the EQL get async action. The boolean that differentiates EQL
from a normal _async_search request is set incorrectly, which results
in errors (404) when extending the keep-alive of a running EQL search.
Fixes #65108
The current until implementation in sequences is too optimistic, leading
to an aggressive match that discards correct data and produces invalid
results.
This commit addresses this issue and also unifies the until usage inside
TumblingWindow.
Furthermore, it packs the UntilGroup together with the SequenceGroup to
minimize memory usage and improve clean-up.
(cherry picked from commit de2724e92c732c66436939dbbedef93c9981b435)
(cherry picked from commit a60757756aae5f5abb31176fee972a7cdeac3649)
It appears that occasionally 30 seconds are not enough for CI workers
to complete DFA jobs. In order to eliminate such failures we increase
the time we wait for DFA jobs to complete in integration tests to
60 seconds.
Fixes #64926
Backport of #65126
Align Ordinal comparator to consider nulls last (higher) in tiebreakers.
Add unit tests to Ordinal comparisons and criterion extraction.
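A minimal illustration of the intended ordering (plain `java.util.Comparator`, not the actual Ordinal code): in tiebreaker comparisons, a null sorts after (higher than) any non-null value.
```
import java.util.Comparator;

public class NullsLastSketch {
    // nulls compare as greater than any non-null tiebreaker value
    static final Comparator<Long> TIEBREAKER = Comparator.nullsLast(Comparator.<Long>naturalOrder());

    public static void main(String[] args) {
        System.out.println(TIEBREAKER.compare(1L, null) < 0);  // true: the non-null value sorts first
        System.out.println(TIEBREAKER.compare(null, 1L) > 0);  // true: null is considered higher
    }
}
```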
Fix #64706
(cherry picked from commit 93dc883abd6b8855ff1618a574412b7f773b8ff5)
(cherry picked from commit 936e5f1a2cc29c1d5662cb8aa90c629af563a987)
Extract request settings into dedicated methods for easier adjustments
(cherry picked from commit 4f93591cc561c7f8ff7c2f070dd1180f209810b7)
(cherry picked from commit ff7e8427345c304f5a37612c870b48555484b692)
Enable previously disabled tests - only two types of queries remain
disabled: one that does pattern matching and another one for
case-insensitivity.
Fix #63742
(cherry picked from commit 20210cc43b34438c40b8b5aebf0aa2b8161c4104)
(cherry picked from commit 95d08f2c8d0aac52cc1ed470fa489c239ee25159)
Introduce an option for specifying whether the results are returned from
the tail (end) of the stream or the head (beginning).
Improve the sequencing algorithm by significantly reducing the number
of in-flight sequences for sparse datasets.
Refactor the sequence class by eliminating some of the redundant code.
Change matching behavior for tail sequences.
Return results based on their first entry ordinal instead of
insertion order (which was ordered on the last match ordinal).
Randomize results position inside test suite.
Close #58646
(cherry picked from commit e85d9d1bbee13ad408e789fd62efb30bc8d223f2)
(cherry picked from commit 452c674a10cdc16dced3cde7babf5d5a9d64a6d9)
This commit fixes two problems:
- When extracting a doc value, we allow boolean scalars to be used as input
- The output order of processed feature names is deterministic. Previously, custom one-hot fields were non-deterministic and thus could cause weird bugs.
Today this test fails because the sizes of the snapshot
shards are only kept for a very short period of time in
the InternalSnapshotsInfoService and are not
guaranteed to exist once the shards are correctly
assigned.
closes #64167
If we encounter an exception during extracting data in a data
frame analytics job, we retry once. However, we were not catching
exceptions thrown from processing the search response. This may
result in an infinite loop that causes a stack overflow.
This commit fixes this problem.
Backport of #64947
Fixes the inconsistency between the type of the object returned by the
`SIGN()/SIGNUM()` SQL functions and the specified `DataType`.
In the class Sign, the DataType is DataTypes.INTEGER. The source code is as
follows:
```
public DataType dataType() {
return DataTypes.INTEGER;
}
```
But in the class MathProcessor, in the source code of SIGN((Object l),
the parameter and return value types are the same. Therefore, when testing
with double or float parameters, there is a small problem; the test
method is the following curl:
```
curl -XPOST 127.0.0.1:9200/_sql -d "{\"query\":\"select SIGN(1.0) \"}" \
-H 'Content-Type: application/json'
```
The result is:
```
{"columns":[{"name":"SIGN(1.0)","type":"integer"}],"rows":[[1.0]]}
```
The result value is `1.0`, but the type is `integer`.
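A hedged sketch of the fix's intent (not the actual MathProcessor code): the sign computation returns an Integer so that the value matches the declared DataTypes.INTEGER regardless of the numeric type of the input.
```
public class SignSketch {
    static Integer sign(Number value) {
        // Math.signum returns a double (-1.0, 0.0 or 1.0); cast it to match the declared integer type
        return (int) Math.signum(value.doubleValue());
    }

    public static void main(String[] args) {
        System.out.println(sign(1.0));   // 1, not 1.0
        System.out.println(sign(-2.5f)); // -1
        System.out.println(sign(0));     // 0
    }
}
```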
Signed-off-by: mantuliu <240951888@qq.com>
Co-authored-by: Marios Trivyzas <matriv@gmail.com>
(cherry picked from commits aa78301e71f, ced3c1281c7, 40e5b9b)
If the deleted index has N shards, then ShardFollowTaskCleaner can send
N*(N-1)/2 requests to remove N shard-follow tasks. I think that's fine
as the implementation is straightforward. The test failed when the
deleted index has 8 shards. This commit increases the timeout in the
test.
Closes #64311
When using custom processors, the field names extracted from the documents are not the
same as the feature names used for training.
Consequently, it is possible for the stratified sampler to have an incorrect view of the feature rows.
This can lead to the wrong column being read for the class label and thus to errors being
thrown during training row extraction.
This commit changes the training row feature names used by the stratified sampler so that it matches
the names (and their order) that are sent to the analytics process.
It is possible that a value mapped as a `keyword` has any scalar value type. This includes any numerical value, String, or boolean.
This commit allows `boolean` types to be considered as a part of the categorical feature collection when this is the case.
* Consistency in writing style
Removing spaces before and after brackets for consistency.
* Remove typo
Remove one of two consecutive "the"s
Co-authored-by: Johannes Mahne <johannes.mahne@elastic.co>
This commit converts build code that downloads distributions or other
artifacts to use the new no-kpi subdomain, and removes the formerly used
no-kpi header.
Randomize the compression factor starting at 1 (to eliminate values near 0),
which would use only one centroid (and yield 0 for MAD, as the approximate median
is the same as the single centroid's mean value).
(cherry picked from commit 940e0f1fde0f40f99af117dd03ab0891c9eedae6)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
Summary of the issue and the root cause:
```
(1) SELECT 100, 100 -> success
(2) SELECT ?, ? (with params: 100, 100) -> success
(3) SELECT 100, 100 FROM test -> Unknown output attribute exception for the second 100
(4) SELECT ?, ? FROM test (params: 100, 100) -> Unknown output attribute exception for the second ?
(5) SELECT field1 as "x", field1 as "x" FROM test -> Unknown output attribute exception for the second "x"
```
There are two separate issues at play here:
1. Construction of `AttributeMap`s keeps only one of the `Attribute`s with the same name even if the `id`s are different (see the `AttributeMapTests` in this PR). This should be fixed no matter what; we should not overwrite attributes with one another during the construction of the `AttributeMap`.
2. The `id` on the `Alias`es is not the same in case the `Alias`es have the same `name` and the same `child`.
It was considered to simply fix the second issue by just reassigning the same `id`s to the `Alias`es with the same name and child, but that would not solve the `unknown output attribute exception` (see notes below). This PR covers the fix for the first issue.
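An illustrative toy example of the first issue (plain maps standing in for `AttributeMap`, with hypothetical `name#id` labels; not the actual ql API): keying purely by name collapses attributes that share a name but have different ids.
```
import java.util.LinkedHashMap;
import java.util.Map;

public class AttributeMapSketch {
    public static void main(String[] args) {
        // keyed by name only: the second "x" (id 2) silently replaces the first (id 1)
        Map<String, String> byNameOnly = new LinkedHashMap<>();
        byNameOnly.put("x", "x#1");
        byNameOnly.put("x", "x#2");
        System.out.println(byNameOnly); // {x=x#2} -- one attribute lost

        // keyed by name and id: both attributes are preserved
        Map<String, String> byNameAndId = new LinkedHashMap<>();
        byNameAndId.put("x#1", "x#1");
        byNameAndId.put("x#2", "x#2");
        System.out.println(byNameAndId); // {x#1=x#1, x#2=x#2}
    }
}
```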
Relates to #56013