* [CCR] Delay auto follow license check
Delay the check until we are sure that auto follow patterns are configured.
Otherwise we would log a warning whenever someone runs with a basic or gold
license and has never used the CCR feature.
This is a new index privilege that the user needs to have in the follow cluster.
This privilege is required in addition to the `manage_ccr` cluster privilege in
order to execute the create and follow API.
Closes #33555
* Correctly handle NONE keyword for system keystore
As defined in the PKCS#11 reference guide
https://docs.oracle.com/javase/8/docs/technotes/guides/security/p11guide.html
PKCS#11 tokens can be used as the JSSE keystore and truststore and
the way to indicate this is to set `javax.net.ssl.keyStore` and
`javax.net.ssl.trustStore` to `NONE` (case sensitive).
This commit ensures that we honor this convention and do not
attempt to load the keystore or truststore if the system property is
set to NONE.
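As a rough sketch (not the actual Elasticsearch code) of what honoring this convention means in Java:

``` java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

final class SystemKeyStoreLoader {
    // Returns null when the system property follows the PKCS#11 convention:
    // "NONE" (case sensitive) means the keystore is backed by a token, not a file.
    static KeyStore loadSystemKeyStore() throws Exception {
        final String path = System.getProperty("javax.net.ssl.keyStore", "");
        if ("NONE".equals(path)) {
            return null; // do not attempt to open a file called NONE
        }
        final String type = System.getProperty("javax.net.ssl.keyStoreType", KeyStore.getDefaultType());
        final char[] password = System.getProperty("javax.net.ssl.keyStorePassword", "").toCharArray();
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            final KeyStore keyStore = KeyStore.getInstance(type);
            keyStore.load(in, password);
            return keyStore;
        }
    }
}
```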
* Handle password protected system truststore
When a PKCS#11 token is used as the system truststore, we need to
pass a password when loading it, even if only for reading
certificate entries. This commit ensures that if
`javax.net.ssl.trustStoreType` is set to `PKCS#11` (as it would be
when a PKCS#11 token is in use) the password specified in
`javax.net.ssl.trustStorePassword` is passed when attempting to
load the truststore.
Relates #33459
In some cases we want to deprecate a setting, and then automatically
upgrade uses of that setting to a replacement setting. This commit adds
infrastructure for this so that we can upgrade settings when recovering
the cluster state, as well as when such settings are dynamically applied
on cluster update settings requests. This commit focuses only on cluster
settings; index settings can build on this infrastructure in a
follow-up.
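A hypothetical sketch of what such an upgrader hook could look like (names invented for illustration):

``` java
// Hypothetical sketch of the upgrade hook described above: a deprecated key is
// rewritten to its replacement when cluster settings are recovered or applied.
public interface SettingUpgrader {
    // the deprecated setting this upgrader applies to, e.g. "search.remote.seeds"
    String deprecatedKey();

    // the replacement key, e.g. "cluster.remote.seeds"
    String upgradedKey();

    // most upgrades keep the value untouched
    default String upgradedValue(String value) {
        return value;
    }
}
```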
When requesting job stats for `_all`, all ES tasks were accepted,
resulting in loads of cluster traffic and memory overhead.
This commit correctly filters out non-ML job tasks.
Closes #33515
We may use different global checkpoints to validate/normalize the range
of a change request if the global checkpoint is advanced between these
calls. If this is the case, then we generate an invalid request range.
This commit reverses the logic for CCR license checks in a few
actions. This is done so that the successful case, which tends to be a
larger block of code, does not require indentation.
We have some listeners in the CCR license tests that invoke Assert#fail
if the onSuccess method for the listener is unexpectedly invoked. This
can leave the main test thread hanging until the test suite times out
rather than failing quickly. This commit adds some latch countdowns so
that we fail quickly if these cases are hit.
In the multi-cluster-with-non-compliant-license tests, we try to write
out a java.policy to a temporary directory. However, if this temporary
directory does not already exist then writing the java.policy file will
fail. This commit ensures that the temporary directory exists before we
attempt to write the java.policy file.
This commit adds license checks for the auto-follow implementation. We
check the license on put auto-follow patterns, and then for every
coordination round we check that the local and remote clusters are
licensed for CCR. In the case of non-compliance, we skip coordination
yet continue to schedule follow-ups.
This commit ensures that we bootstrap a new history_uuid when force
allocating a stale primary. A stale primary should never be the source
of an operation-based recovery to another shard which exists before the
forced-allocation.
Closes #26712
Today when checking settings dependencies, we do not check if fallback
settings are present. This means, for example, that if
cluster.remote.*.seeds falls back to search.remote.*.seeds, and
cluster.remote.*.skip_unavailable and search.remote.*.skip_unavailable
depend on cluster.remote.*.seeds, and we have set search.remote.*.seeds
and search.remote.*.skip_unavailable, then validation will fail because
it is expected that cluster.remote.*.seeds is set here. This commit
addresses this by also checking fallback settings when validating
dependencies. To do this, we adjust the settings exist method to also
check for fallback settings, a case that it was not handling previously.
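A simplified sketch of the fallback-aware exists check, with hypothetical types standing in for the real Setting/Settings classes:

``` java
import java.util.Map;

final class FallbackAwareSettings {
    // Simplified stand-in for the real Setting class.
    static final class Setting {
        final String key;
        final Setting fallback; // null when there is no fallback

        Setting(String key, Setting fallback) {
            this.key = key;
            this.fallback = fallback;
        }
    }

    // A setting "exists" if its own key is present or if any setting in its
    // fallback chain is present, so search.remote.*.seeds satisfies a
    // dependency declared on cluster.remote.*.seeds.
    static boolean exists(Setting setting, Map<String, String> settings) {
        if (settings.containsKey(setting.key)) {
            return true;
        }
        return setting.fallback != null && exists(setting.fallback, settings);
    }
}
```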
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name logging is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
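A minimal sketch of the idea, using AtomicReference as a stand-in for Lucene's SetOnce:

``` java
import java.util.concurrent.atomic.AtomicReference;

final class NodeNamePatternConverter {
    private static final AtomicReference<String> NODE_NAME = new AtomicReference<>();

    // Called once the node name is known, long after logging is configured.
    static void setNodeName(String name) {
        if (NODE_NAME.compareAndSet(null, name) == false) {
            throw new IllegalStateException("node name was already set");
        }
    }

    // Resolved on every print instead of at pattern build time.
    String format() {
        final String name = NODE_NAME.get();
        return name == null ? "unknown" : name; // keeps the pattern stable before startup finishes
    }
}
```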
This endpoint accepts an arbitrary file in the request body and
attempts to determine the structure. If successful it also
proposes mappings that could be used when indexing the file's
contents, and calculates simple statistics for each of the fields
that are useful in the data preparation step prior to configuring
machine learning jobs.
Watcher validates `action.auto_create_index` upon startup. If a user
specifies a pattern that does not contain the watcher indices, it raises an
error message that includes a list of three indices. However, the indices
are separated by a comma and a space, which is not considered in parsing.
With this commit we change the error message string so it does not
contain the additional space, thus making it more straightforward to copy
it to the configuration file.
Closes #33369
Relates #33497
Instead of passing DirectoryService, which causes yet another dependency
on Store, we can just pass in a Directory, since we will just call
`DirectoryService#newDirectory()` on it anyway.
This change collapses all metrics aggregations classes into a single package `org.elasticsearch.aggregations.metrics`.
It also restricts the visibility of some classes (aggregators and factories) that should not be used outside of the package.
Relates #22868
Some browsers (e.g. Firefox) behave differently when presented with
multiple auth schemes in the 'WWW-Authenticate' header. The expected
behavior is that the browser selects the most secure auth scheme before
trying others, but Firefox selects the first presented auth scheme and
tries the next ones sequentially. As the browser interpretation is
something that we do not control, we can at least present the auth
schemes in most to least secure order as the server's preference.
This commit modifies the code to collect and sort the auth schemes
from most to least secure. The priority of the auth schemes is
fixed, with a lower number denoting a more secure scheme.
The current order, based on the auth schemes ES supports, is
[Negotiate, Bearer, Basic]; when we add support for other schemes
we will need to update the code. If need be we will make
this configuration customizable in the future.
A unit test verifies that the WWW-Authenticate header values are sorted by
server preference, from most to least secure auth scheme.
Tested with Firefox, Chrome, Internet Explorer 11.
Closes #32699
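A sketch of the sorting logic described above (class and method names invented for illustration):

``` java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class AuthSchemePriority {
    // Fixed priority per scheme: a lower number denotes a more secure scheme.
    private static final Map<String, Integer> PRIORITY = new HashMap<>();
    static {
        PRIORITY.put("Negotiate", 0); // most secure, presented first
        PRIORITY.put("Bearer", 1);
        PRIORITY.put("Basic", 2);     // least secure, presented last
    }

    // Sorts WWW-Authenticate values such as "Basic realm=\"es\"" by the
    // server's preference; unknown schemes sort last.
    static List<String> sortByServerPreference(List<String> headerValues) {
        return headerValues.stream()
            .sorted(Comparator.comparingInt(
                (String value) -> PRIORITY.getOrDefault(value.split(" ", 2)[0], Integer.MAX_VALUE)))
            .collect(Collectors.toList());
    }
}
```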
This commit allows us to use different TranslogRecoveryRunner when
recovering an engine from its local translog. This change is a
prerequisite for the commit-based rollback PR.
Relates #32867
Split function section into multiple chapters
Add String functions
Add (small) section on Conversion/Cast functions
Add missing aggregation functions
Enable documentation testing (was disabled by accident). While at it,
fix failing tests
Improve spec tests to allow multi-line queries (useful for docs)
Add ability to ignore a spec test (name should end with -Ignore)
The main benefit of the upgrade for users is the search optimization for top scored documents when the total hit count is not needed. However, this optimization is not activated in this change; there is another issue open to discuss how it should be integrated smoothly.
Some comments about the change:
* Tests that can produce negative scores have been adapted but we need to forbid them completely: #33309
Closes #32899
Many files supplied to the upcoming ML data preparation
functionality will not be "log" files. For example,
CSV files are generally not "log" files. Therefore it
makes sense to rename the library that determines the
structure of these files.
Although "file structure" could be considered too broad,
as the library currently only works with a few text
formats, in the future it may be extended to work with
more formats.
Auto Following Patterns is a cross cluster replication feature that
keeps track of whether indices in the leader cluster are being created with
names that match a specific pattern and, if so, automatically lets
the follower cluster follow these newly created indices.
This change adds an `AutoFollowCoordinator` component that is only active
on the elected master node. Periodically this component checks the
cluster state of remote clusters for new leader indices that
match the auto follow patterns that have been defined in
`AutoFollowMetadata` custom metadata.
This change also adds two new APIs to manage auto follow patterns. A put
auto follow pattern API:
```
PUT /_ccr/_autofollow/{{remote_cluster}}
{
  "leader_index_pattern": ["logs-*", ...],
  "follow_index_pattern": "{{leader_index}}-copy",
  "max_concurrent_read_batches": 2
  ... // other optional parameters
}
```
and a delete auto follow pattern API:
```
DELETE /_ccr/_autofollow/{{remote_cluster_alias}}
```
The auto follow patterns are directly tied to the remote cluster aliases
configured in the follow cluster.
Relates to #33007
Co-authored-by: Jason Tedor <jason@tedor.me>
With features like CCR building on the CCS infrastructure, the settings
prefix search.remote makes less sense as the namespace for these remote
cluster settings than does a more general namespace like
cluster.remote. This commit replaces these settings with cluster.remote
with a fallback to the deprecated settings search.remote.
This commit is related to #32517. It allows a "server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. This functionality is only implemented for
the netty security transport.
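A minimal sketch of what propagating the server_name amounts to on the client side, using the standard JSSE SNI API:

``` java
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

final class SniHelper {
    // Attach the server_name to the TLS handshake so the remote side can
    // select the right certificate or route based on SNI.
    static void applyServerName(SSLEngine engine, String serverName) {
        final SSLParameters parameters = engine.getSSLParameters();
        parameters.setServerNames(Collections.singletonList(new SNIHostName(serverName)));
        engine.setSSLParameters(parameters);
    }
}
```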
Drops an unused logging constructor, simplifies a rarely used one, and
removes `Settings` from a third. There is now only a single logging ctor
that takes `Settings` and we'll remove that one in a follow up change.
This commit adds a security client to the high level rest client, which
includes an implementation for the put user API. As part of these
changes, new request and response classes have been added that are
specific to the high level rest client. One change here is that the response
was previously wrapped inside a user object. The plan is to remove this
wrapping and this PR adds an unwrapped response outside of the user
object so we can remove the user object later on.
See #29827
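A hedged sketch of what calling the new API might look like; the exact PutUserRequest constructor is approximated from this description and may differ:

``` java
import java.util.Collections;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.security.PutUserRequest;
import org.elasticsearch.client.security.PutUserResponse;
import org.elasticsearch.client.security.RefreshPolicy;

final class PutUserExample {
    static boolean createUser(RestHighLevelClient client) throws Exception {
        // NOTE: the constructor shown here is an approximation of the new
        // request class; consult the client javadocs for the exact signature.
        PutUserRequest request = new PutUserRequest(
            "jacknich",                           // username
            "l0ng-r4nd0m-p@ssw0rd".toCharArray(), // password
            Collections.singletonList("admin"),   // roles
            null,                                 // full name
            null,                                 // email
            true,                                 // enabled
            null,                                 // metadata
            RefreshPolicy.NONE);
        PutUserResponse response = client.security().putUser(request, RequestOptions.DEFAULT);
        return response.isCreated();              // response is no longer wrapped in a "user" object
    }
}
```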
Solves all of the xpack line length suppressions and then merges the
remainder of the xpack checkstyle_suppressions.xml file into the core
checkstyle_suppressions.xml file. At this point that just means the
ANTLR-generated files for SQL.
It also adds an exclusion to the line length tests for javadocs that
are just a URL. We have one such javadoc and breaking up the line would
make the link difficult to use.
The log structure endpoint will return these in addition to
pure structure information so that it can be used to drive
pre-import data visualizer functionality.
The statistics for every field are count, cardinality
(distinct count) and top hits (most common values). Extra
statistics are calculated if the field is numeric: min, max,
mean and median.
With the introduction of the default distribution, it means that by
default the query cache is wrapped in the security implementation of the
query cache. This cache does not allow caching if the request does not
carry indices permissions. Yet indices permissions will never be present if
authorization is not allowed, which it is not by default. This means that with the
introduction of the default distribution, query caching was disabled by
default! This commit addresses this by checking if authorization is
allowed and if not, delegating to the default indices query
cache. Otherwise, we proceed as before with security. Additionally, we
clear the cache on license state changes.
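A sketch of the delegation decision with hypothetical names:

``` java
final class QueryCacheChooser {
    interface QueryCache {} // stand-in for the real engine interface

    // Only use the security-aware cache when authorization is actually in play.
    static QueryCache choose(boolean authorizationAllowed,
                             QueryCache securityQueryCache,
                             QueryCache defaultIndicesQueryCache) {
        // Without authorization no request carries indices permissions, so the
        // security cache would refuse to cache anything; fall back to the default.
        return authorizationAllowed ? securityQueryCache : defaultIndicesQueryCache;
    }
}
```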
Extend SHOW TABLES, DESCRIBE and SHOW COLUMNS to support table
identifiers, not just SQL LIKE patterns.
This allows both Elasticsearch-style multi-index patterns and SQL LIKE.
To disambiguate between the two (as the " vs ' can be easy to miss),
the grammar now requires LIKE keyword as a prefix for all LIKE-like
patterns.
Also added some docs comparing the two types of patterns.
Fix #33294
This does not change the behaviour: when the sort field was set
to `influencer_score`, the secondary sort would be used, and that
used the `record_score` at the highest priority.
1. The TOMCAT_DATESTAMP format needs to be checked before
TIMESTAMP_ISO8601, otherwise TIMESTAMP_ISO8601 will
match the start of the Tomcat datestamp.
2. Exclude more characters before and after numbers. For
example, in 1.2.3 we don't want to match 1.2 as a float.
The comparator used TimeValue parsing, which meant it couldn't handle
calendar time. This fixes the comparator to handle either (and potentially
mixed). The mixing shouldn't be an issue since the validation code
upstream will prevent it, but it was simplest to allow the comparator
to handle both.
In Lucene 8 the statistics for a field (doc_count, sum_doc_count, ...) are
checked and invalid values (v < 0) are rejected. For the _field_names
field we hide the statistics of the field if security is enabled, since
some terms (field names) may be filtered. However, these statistics are never
used: the field is not used for ranking and cannot be used to generate
term vectors. For these reasons this commit restores the original statistics
for the field in order to be compliant with Lucene 8.
This test has failed several times due to timeouts when asserting the number
of docs on the following and leading indices. This change reduces
the number of docs to index and increases the timeout.
* master:
Mute test watcher usage stats output
[Rollup] Fix FullClusterRestart test
Adjust soft-deletes version after backport into 6.5
completely drop `index.shard.check_on_startup: fix` for 7.0 (#33194)
Fix AwaitsFix issue number
Mute SmokeTestWatcherWithSecurityIT tests
drop `index.shard.check_on_startup: fix` (#32279)
tracked at
[DOCS] Moves ml folder from x-pack/docs to docs (#33248)
[DOCS] Move rollup APIs to docs (#31450)
[DOCS] Rename X-Pack Commands section (#33005)
TEST: Disable soft-deletes in ParentChildTestCase
Fixes SecurityIntegTestCase so it always adds at least one alias (#33296)
Fix pom for build-tools (#33300)
Lazy evaluate java9home (#33301)
SQL: test coverage for JdbcResultSet (#32813)
Work around to be able to generate eclipse projects (#33295)
Highlight that index_phrases only works if no slop is used (#33303)
Different handling for security specific errors in the CLI. Fix for https://github.com/elastic/elasticsearch/issues/33230 (#33255)
[ML] Refactor delimited file structure detection (#33233)
SQL: Support multi-index format as table identifier (#33278)
MINOR: Remove Dead Code from PathTrie (#33280)
Enable forbiddenapis server java9 (#33245)
We need to wait for the job to fully initialize and start before
we can attempt to stop it. If we don't, it's possible for the stop
API to be called before the persistent task is fully loaded and it'll
throw an exception.
Closes #32773
In the previous commit where SmokeTestWatcherWithSecurityIT tests were
muted, I added the incorrect issue numbers. This commit fixes this. The
issue for the tests is #33320.
* Fixes SecurityIntegTestCase so it always adds at least one alias
`SecurityIntegTestCase.createIndicesWithRandomAliases` could randomly
fail because it is not guaranteed that the randomness of which aliases to
add to the `IndicesAliasesRequestBuilder` would always select at least
one alias to add. This change fixes the problem by keeping track of
whether we have added an alias to the request and forcing the last
alias to be added if no other aliases have been added so far.
Closes #30098
Closes #33123
* Addresses review comments
These response classes did not add any value; in their case plain AcknowledgedResponse should be used.
I also changed the formatting of methods to take one line per parameter in
FollowIndexAction.java and UnfollowIndexAction.java files to make
reviewing diffs in the future easier.
* Tests for JdbcResultSet
* Added VARCHAR conversion for different types
* Made error messages consistent: they now contain both the type that fails to be converted and the value itself
1. Use the term "delimited" rather than "separated values"
2. Use a single factory class with arguments to specify the
delimiter and identification constraints
This change makes it easier to add support for other
delimiter characters.
* master:
Integrates soft-deletes into Elasticsearch (#33222)
Revert "Integrates soft-deletes into Elasticsearch (#33222)"
Add support for "authorization_realms" (#33262)
Authorization Realms allow an authenticating realm to delegate the task
of constructing a User object (with name, roles, etc) to one or more
other realms.
E.g. A client could authenticate using PKI, but then delegate to an LDAP
realm. The LDAP realm performs a "lookup" by principal, and then does
regular role-mapping from the discovered user.
This commit includes:
- authorization_realm support in the pki, ldap, saml & kerberos realms
- docs for authorization_realms
- checks that there are no "authorization chains"
(whereby "realm-a" delegates to "realm-b", but "realm-b" delegates to "realm-c")
Authorization realms is a platinum feature.
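A simplified, synchronous sketch of the delegation flow (the real realm API is callback-based, and these types are invented for illustration):

``` java
import java.util.List;

final class AuthorizationRealmDelegation {
    interface Realm {
        User lookupUser(String principal); // lookup only, no credential check
    }

    static final class User {
        final String principal;
        final List<String> roles;

        User(String principal, List<String> roles) {
            this.principal = principal;
            this.roles = roles;
        }
    }

    // After the authenticating realm (e.g. PKI) verifies the credentials,
    // user construction is delegated to the configured authorization realms
    // in order, e.g. an LDAP lookup by principal plus regular role-mapping.
    static User resolveUser(String principal, List<Realm> authorizationRealms) {
        for (Realm realm : authorizationRealms) {
            User user = realm.lookupUser(principal);
            if (user != null) {
                return user; // roles come from the delegated realm's role mapping
            }
        }
        throw new IllegalStateException("no authorization realm could resolve [" + principal + "]");
    }
}
```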
This PR removes the deprecated `Custom` class in `IndexMetaData`, in favor
of a `Map<String, DiffableStringMap>` that is used to store custom index
metadata. As part of this, there is now no way to set this metadata in a
template or create index request (since it's only set by plugins, or dedicated
REST endpoints).
The `Map<String, DiffableStringMap>` is intended to be a namespaced `Map<String,
String>` (`DiffableStringMap` implements `Map<String, String>`, so the signature
is more like `Map<String, Map<String, String>>`). This is so we can do things
like:
``` java
Map<String, String> ccrMeta = indexMetaData.getCustom("ccr");
```
And then have complete control over the metadata. This also means any
plugin/feature that uses this has to manage its own BWC, as the map is just
serialized as a map. It also means that if metadata is put in the map that isn't
used (for instance, if a plugin were removed), it causes no failures the way
an unregistered `Setting` would.
The reason I use a custom `DiffableStringMap` here rather than a plain
`Map<String, String>` is so the map can be diffed with previous cluster state
updates for serialization.
Supersedes #32683
This commit ensures that when `TriggerService.start()` is called,
we ensure in the trigger engine implementations that current watches are
removed instead of being added to the existing ones in
`TickerScheduleTriggerEngine.start()`
Two additional minor fixes, where the result remains the same but less code gets executed.
1. If the node is not a data node, we forgot to set the status to
STARTING when watcher is being started. This should not be a big issue,
because a non-data node does not spend a lot of time loading as there
are no watches which need loading.
2. If a new cluster state came in during a reload, we had two checks in
place to abort loading the current one. The first one before we load all
the watches of the local node and the second before watcher is starting
with those new watches. It turned out that the first check was not
returning, which meant we always tried to load all the watches and then
failed on the second check. This has been fixed here.
Ensure that the SSLConfigurationReloaderTests can run with JDK 11
by pinning the Server TLS version to TLS1.2. This can be revisited
while tackling the effort to fully support TLSv1.3 in
https://github.com/elastic/elasticsearch/issues/32276
Resolves #32124
Ran the tests for all locales on the system to find locales which caused
problems in tests due to incorrect generalized time handling
in the simple kdc ldap server.
Closes #33228
We need to limit the search request aggregations to whole multiples
of the configured interval for both histogram and date_histogram.
Otherwise, agg buckets won't overlap with the rolled up buckets
and the results will be incorrect.
For histogram, the validation is very simple: the request must be >= the config
and evenly divisible by it (a sketch of this check follows the list below).
Dates are more tricky.
- If both request and config are fixed dates, we can convert to millis
and treat them just like the histo
- If both are calendar, we make sure the request is >= the config with
a static lookup map that ranks the calendar values relatively. All
calendar units are "singles", so they are evenly divisible already
- We disallow any other combination (one fixed, one calendar, etc)
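As a rough sketch (hypothetical class and method names) of that fixed-interval rule, covering both the histogram case and the fixed-date case once converted to millis:

``` java
final class RollupIntervalValidator {
    // The requested interval must be at least the configured one and a whole
    // multiple of it, otherwise agg buckets won't line up with rolled-up buckets.
    static void validateFixedInterval(long requestMillis, long configMillis) {
        if (requestMillis < configMillis) {
            throw new IllegalArgumentException("request interval [" + requestMillis
                + "ms] is smaller than the interval the rollup was configured with ["
                + configMillis + "ms]");
        }
        if (requestMillis % configMillis != 0) {
            throw new IllegalArgumentException("request interval [" + requestMillis
                + "ms] must be a whole multiple of the configured interval ["
                + configMillis + "ms]");
        }
    }
}
```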
When a node that carries a watcher shard dies, or a shard is relocated to
another node, then watcher needs to trigger a reload not only on the node
where the shard relocation happened, but also on other nodes that hold
copies of this shard, as different watches may need to be loaded.
This commit takes the change of remote nodes into account by not only
storing the local shard allocation ids in the WatcherLifeCycleService,
but storing a list of ShardRoutings based on the local active shards.
This also fixes some tests, which had a wrong assumption. Using
`TestShardRouting.newShardRouting` in our tests for cluster state
creation led to the issue of always creating new allocation ids which
implicitly led to a reload.
This extracts a superclass out of the rollup indexer called the AsyncTwoPhaseIterator.
The implementor can define the query, the transformation of the response,
the indexing, and the object that persists the position/state of the indexer.
The stats object used by the indexer to record progress is also now abstract, allowing
the implementation to provide custom stats beyond what the indexer provides. It also
allows the implementation to decide how the stats are presented (leaves toXContent()
up to the implementation).
This should allow new projects to reuse the search-then-index persistent task that Rollup
uses, but without the restrictions/baggage of how Rollup has to work internally to
satisfy time-based rollups.
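A sketch of the extracted abstraction, with names approximated from this description rather than taken from the actual class:

``` java
// Implementors supply the query, the transformation of each response into
// index requests, the persisted position/state, and their own stats object.
abstract class AsyncTwoPhaseIndexerSketch<Query, Response, Position, Stats> {
    abstract Query buildQuery(Position position);        // search phase

    abstract Iterable<?> transform(Response response);   // documents to index

    abstract Position nextPosition(Response response);   // where the next run resumes

    abstract void persistState(Position position);       // e.g. persistent task state

    abstract Stats stats();                              // presentation (toXContent) left to the impl
}
```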
* master:
Painless: Add Bindings (#33042)
Update version after client credentials backport
Fix forbidden apis on FIPS (#33202)
Remove 6.x transport BWC Layer for `_shrink` (#33236)
Test fix - Graph HLRC tests needed another field adding to randomisation exception list
HLRC: Add ML Get Records API (#33085)
[ML] Fix character set finder bug with unencodable charsets (#33234)
TESTS: Fix overly long lines (#33240)
Test fix - Graph HLRC test was missing field name to be excluded from randomisation logic
Remove unsupported group_shard_failures parameter (#33208)
Update BucketUtils#suggestShardSideQueueSize signature (#33210)
Parse PEM Key files leniently (#33173)
INGEST: Add Pipeline Processor (#32473)
Core: Add java time xcontent serializers (#33120)
Consider multi release jars when running third party audit (#33206)
Update MSI documentation (#31950)
HLRC: create base timed request class (#33216)
[DOCS] Fixes command page titles
HLRC: Move ML protocol classes into client ml package (#33203)
Scroll queries asking for rescore are considered invalid (#32918)
Painless: Fix Semicolon Regression (#33212)
ingest: minor - update test to include dissect (#33211)
Switch remaining LLREST usage to new style Requests (#33171)
HLREST: add reindex API (#32679)
This commit changes the serialization version from V_7_0_0_alpha1 to
V_6_5_0 for the create token request and response with a client
credentials grant type. The client credentials work has now been
backported to 6.x.
Relates #33106
- third party audit detects jar hell with JDK so we disable it
- the jdk-non-portable check in forbiddenapis detects classes being used from the
JDK (for FIPS) that are not portable; this is intended, so we don't
scan for it on FIPS.
- different exclusion rules for third party audit on fips
Closes #33179
Some character sets cannot be encoded and this was tripping
up the binary data check in the ML log structure character
set finder.
The fix is to assume that if ICU4J identifies that some bytes
correspond to a character set that cannot be encoded and those
bytes contain zeroes then the data is binary rather than text.
Fixes #33227
Exclude classes meant for newer versions than the one we are auditing against; those classes won't be found. There's no reason to exclude JDK classes from newer versions; with this PR, we will not extract them in the first place.