We've recently seen a number of test failures that tripped an assertion in IndexShard (see issues
linked below), leading to the discovery of a race between resetting a replica when it learns about a
higher term and when the same replica is promoted to primary. This commit fixes the race by
distinguishing between a cluster state primary term (called pendingPrimaryTerm) and a shard-level
operation term. The former is set during the cluster state update or when a replica learns about a
new primary. The latter is only incremented under the operation block, which can happen in a
delayed fashion. This commit also solves the issue where a replica that is still adjusting to the
new term receives a cluster state update that promotes it to primary, which can happen when
multiple nodes are shut down in short succession. In that case, the cluster state update thread
would call `asyncBlockOperations` in `updateShardState`, which in turn would throw an exception
as blocking permits is not allowed while an ongoing block is in place, subsequently failing the shard.
This commit therefore extends the IndexShardOperationPermits to allow it to queue multiple blocks
(which will all take precedence over operations acquiring permits). Finally, it also moves the primary
activation of the replication tracker under the operation block, so that the actual transition to
primary only happens under the operation block.
Relates to #32431, #32304 and #32118
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
Unmuting the test and adding some more debug output. Was not able to
reproduce the prior failure, but it seems possible that the
failure (mismatched counts) could be caused by partial search results
during the test.
The assertions check for shard failures first, because if one of the
two searches is partial the rest of the test will fail.
Next, instead of just checking respective hit counts, we emit the
difference in hits to help identify what went wrong.
Closes #32492
Since LUCENE-8263, testSeqNoAndCheckpoints might trigger merges because
of the updates and deletes in the test. Our merge scheduler will trigger
a flush if there is no pending merge. Those extra flushes will change
the last committed segmentInfos in the engine and fail the test.
This commit uses LogMergePolicy for the engine in the test to avoid
merges.
Closes #32430
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
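As a rough illustration of the duck-typing idea (a hedged sketch with illustrative names, not the actual doc-values code), the getter's static return type is `Object` and the concrete date class depends on the system property:
```
import java.time.Instant;
import java.time.ZoneOffset;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

class DateDocValueSketch {
    // mirrors the new system property; defaults to the old joda behavior
    private static final boolean USE_JAVA_TIME =
        Boolean.parseBoolean(System.getProperty("es.scripting.use_java_time", "false"));

    // the static return type is Object, so scripts "duck type" whichever date class comes back
    static Object dateValue(long millis) {
        return USE_JAVA_TIME
            ? Instant.ofEpochMilli(millis).atZone(ZoneOffset.UTC) // java.time flavor
            : new DateTime(millis, DateTimeZone.UTC);             // joda time flavor
    }
}
```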
This removes a constructor from `AbstractComponent` and
`AbstractLifecycleComponent` that we weren't using and it switches the
logger creation away from one of the `Settings` flavored methods which
are no longer needed.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same JVM, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
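A hypothetical sketch of the thread-name trick (the pattern and names are assumptions, not the real formatter): integration-test threads typically carry the node name in a bracketed segment, and anything else falls back to the raw thread name:
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class NodeNameFromThread {
    // e.g. "elasticsearch[node_s0][search][T#1]" -> "node_s0"
    private static final Pattern NODE_NAME = Pattern.compile("\\[([^\\]]+)\\]");

    static String nodeName(String threadName) {
        Matcher m = NODE_NAME.matcher(threadName);
        return m.find() ? m.group(1) : threadName; // fall back to whatever the thread name offers
    }
}
```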
When using cross-cluster search through the high-level REST client, the cluster alias from each search hit was not parsed correctly. It would initially be part of the index field, but was overridden just a few lines later when setting the shard target (in case we have enough info to build it from the response). In any case, getClusterAlias returned `null`, which is a bug.
With this change we instead parse the cluster alias back from the index name, set its corresponding field, and properly handle the two possible cases depending on whether or not we can build the shard target object.
The method for working out whether a polygon is clockwise or anticlockwise is
mostly correct but doesn't work in some rare cases such as the included test
case. This commit fixes that.
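For context, a hedged sketch of the usual signed-area (shoelace) orientation test; the real ShapeBuilder logic additionally has to cope with dateline crossing and the rare cases this commit fixes:
```
class PolygonOrientation {
    /** Returns true if the ring (x[i], y[i]) is ordered clockwise, judged by its signed area. */
    static boolean isClockwise(double[] x, double[] y) {
        double doubledSignedArea = 0;
        for (int i = 0; i < x.length; i++) {
            int j = (i + 1) % x.length; // next vertex, wrapping back to the first
            doubledSignedArea += (x[j] - x[i]) * (y[j] + y[i]);
        }
        return doubledSignedArea > 0;
    }
}
```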
Rollover should not swap aliases when `is_write_index` is set to `true`.
Instead, both the new and old indices should have the rollover alias,
with the newly created index as the new write index.
Updates Rollover to leverage the ability to preserve aliases and swap which index is the write index.
Historically, Rollover would swap which index had the designated alias for writing documents against. This required users to keep a separate read alias that enabled reading against both rolled-over and newly created indices, while the write alias was being re-assigned at every rollover.
With the ability for aliases to designate a write index, Rollover can be a bit more flexible with its use of aliases.
Updates include:
- Rollover validates that the target alias has a write index (the index that is being rolled over). This means that the restriction that aliases only point to one index is no longer necessary.
- Rollover explicitly (and atomically) swaps which index is the write index by assigning `is_write_index: false` to the existing index and `is_write_index: true` to the newly created index for the rollover alias. This is only done when `is_write_index: true` is set on the existing write index. The default behavior of removing the alias from the rolled-over index stays when `is_write_index` is not explicitly set.
Relevant things that are staying the same:
- Rollover is rejected if there exist any templates that match the newly-created index and configure the rollover-alias
- I think this existed to prevent the situation where an alias pointed to two indices for a short while. Although this can technically be relaxed, the specific cases that are safe are quite particular and difficult to reason about, so leaving the broad restriction in place sounds good.
* Ensure decryption related exceptions are handled
This commit ensures that all possible Exceptions in
KeyStoreWrapper#decrypt() are handled. More specifically, in the
case that a wrong password is used for secure settings, calling readX
on the DataInputStream that wraps the CipherInputStream can throw an
IOException. It also adds a test for loading a KeyStoreWrapper with
a wrong password.
Resolves #32411
In rare cases it is possible that a node gets an instruction to replace a replica
shard that's in `POST_RECOVERY` with a new initializing primary with the same allocation id.
This can happen by batching cluster states that include the starting of the replica with
closing of the indices, opening them up again and allocating the primary shard to the node in
question. The node should then clean its initializing replica and replace it with a new
initializing primary.
I'm not sure whether the test I added really adds enough value, as existing tests found this. The main reason I added it is to allow for simpler reproduction and to double-check that I fixed it. I'm open to discussing whether we should keep it.
Closes #32308
`GetResult` and `SearchHit` have been adjusted to parse back the `_ignored` meta field whenever it gets printed out. Expanded the existing tests to make sure this is covered. Fixed also a small problem around highlighted fields in `SearchHitTests`.
Due to the recent change in LUCENE-8263, we need to adjust the deletion
ratio to between 10% and 33% to preserve the current behavior of the
test. However, we may need another refinement if soft-deletes is enabled,
as the actual deletes differ because of delete tombstones.
This commit prefers to always execute forceMerge instead of adjusting
the deletion ratio so that this test can focus on testing docStats.
Closes #32449
Due to the recent change in LUCENE-8263, a merge can be triggered if the
deletion ratio is higher than 33%. An in-progress merge can prevent a
synced-flush from succeeding.
This commit avoids deletes by using different docIds.
Closes #32436
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The ES setting `index.merge.policy.reclaim_deletes_weight` is deprecated in this commit and its value is ignored. The new merge policy setting `setDeletesPctAllowed` should be added in a follow-up.
This commit changes the randomization to always create an index with a type.
It also adds a way to create a query shard context that maps to an index with
no type registered in order to explicitly test cases where there is no type.
* The short script form was normalized to a map that used 'inline' instead of 'source', so a short-form processor definition like:
```
{
"script": "ctx.foo= 'bar'"
}
```
would always warn about the following deprecation:
```
#! Deprecation: Deprecated field [inline] used, expected [source]
```
In testSyncedFlushSkipOutOfSyncReplicas, we reindex the extra documents
to all shards including the out-of-sync replica. However, reindexing to
that replica can trigger merges (due to the new deletes) which cause the
synced-flush to fail. This test started failing after LUCENE-8263, which
aggressively triggers merges of segments with a large number of deletes.
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.
Today we allow plugins to add index store implementations yet we are not
doing this in our new way of managing plugins as pull versus push. That
is, today we still allow plugins to push index store providers via an
`onIndexModule` call where they can turn around and add an index
store. Aside from being inconsistent with how we manage plugins today
where we would look to pull such implementations from plugins at node
creation time, it also means that we do not know at a top-level (for
example, in the indices service) which index stores are available. This
commit addresses this by adding a dedicated plugin type for index store
plugins, removing the index module hook for adding index stores, and by
aggregating these into the top-level of the indices service.
An upcoming [Lucene change](https://issues.apache.org/jira/browse/LUCENE-7976)
will make TieredMergePolicy respect the maximum merged segment size all the
time, meaning it will possibly not respect the `max_num_segments` parameter
anymore if the shard is larger than the maximum segment size.
This change makes sure that `max_num_segments` is respected for now in order
to give us time to think about how to integrate this change, and also to delay
it until 7.0 as this might be a big-enough change for us to wait for a new
major version.
* Introduce fips_mode setting and associated checks
Introduce the xpack.security.fips_mode.enabled setting (default false).
When it is set to true, a number of Bootstrap checks are performed:
- Check that Secure Settings are of the latest version (3)
- Check that no JKS keystores are configured
- Check that compliant algorithms (PBKDF2 family) are used for
password hashing
This commit introduces "Application Privileges" to the X-Pack security
model.
Application Privileges are managed within Elasticsearch, and can be
tested with the _has_privileges API, but do not grant access to any
actions or resources within Elasticsearch. Their purpose is to allow
applications outside of Elasticsearch to represent and store their own
privileges model within Elasticsearch roles.
Access to manage application privileges is handled in a new way that
grants permission to specific application names only. This lays the
foundation for more OLS on cluster privileges, which is implemented by
allowing a cluster permission to inspect not just the action being
executed, but also the request to which the action is applied.
To support this, a "conditional cluster privilege" is introduced, which
is like the existing cluster privilege, except that it has a Predicate
over the request as well as over the action name.
Specifically, this adds
- GET/PUT/DELETE actions for defining application level privileges
- application privileges in role definitions
- application privileges in the has_privileges API
- changes to the cluster permission class to support checking of request
objects
- a new "global" element on role definition to provide cluster object
level security (only for manage application privileges)
- changes to `kibana_user`, `kibana_dashboard_only_user` and
`kibana_system` roles to use and manage application privileges
Closes #29820, Closes #31559
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is `∑(value * weight) / ∑(weight)`.
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
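A minimal sketch of that formula (not the aggregation code itself), just to make the arithmetic concrete:
```
class WeightedAvg {
    // sum(value * weight) / sum(weight); with every weight set to 1 this degenerates to a plain average
    static double weightedAvg(double[] values, double[] weights) {
        double weightedSum = 0;
        double weightSum = 0;
        for (int i = 0; i < values.length; i++) {
            weightedSum += values[i] * weights[i];
            weightSum += weights[i];
        }
        return weightedSum / weightSum;
    }
}
```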
Closes #15731
ClassCastException can be thrown by callers of TransportActions.isShardNotAvailableException(e), as e is not always an instance of ElasticsearchException.
Fixes #32173
Currently we check that the query that QueryStringQueryBuilder#toQuery returns
is one out of a list of many Lucene query classes. This list has grown a lot over time,
since QueryStringQueryBuilder can build all sorts of queries. This makes the test hard to
maintain. The recent addition of alias fields which build a BlendedTermQuery shows how
easily this test breaks. Also, the current assertions don't add a lot in terms of catching
errors. This is why we decided to remove this check.
Closes #32234
The parent filter for nested sort should always match **all** parents regardless
of the child queries. It is used to find the boundaries of a single parent and we use
the child query to match all the filters set in the nested tree so there is no need to
repeat the nested filters.
With this change we ensure that we build bitset filters
only to find the root docs (or the docs at the level where the sort applies) that can be reused
among queries.
Closes #31554, Closes #32130, Closes #31783
Co-authored-by: Dominic Bevacqua <bev@treatwell.com>
* Enhance Parent circuit breaker error message
This adds information about either the current real usage (if tracking "real"
memory usage) or the child breaker usages to the exception message when the
parent circuit breaker trips.
The messages now look like:
```
[parent] Data too large, data for [my_request] would be [211288064/201.5mb], which is larger than the limit of [209715200/200mb], usages [request=157286400/150mb, fielddata=54001664/51.5mb, in_flight_requests=0/0b, accounting=0/0b]
```
Or when tracking real memory usage:
```
[parent] Data too large, data for [request] would be [251/251b], which is larger than the limit of [200/200b], real usage: [181/181b], new bytes reserved: [70/70b]
```
* Only call currentMemoryUsage once by returning a structured object
Resolving wildcards in an aliases expression is challenging as we may end
up with no aliases to replace the original expression with, but if we
replace it with an empty array, that means _all, which is quite the opposite.
Now that we support and serialize the originally requested aliases,
whenever aliases are replaced we will be able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything in case it gets empty aliases but the original aliases
were not empty. That means that empty aliases are interpreted as _all
only if they were originally requested that way.
Relates to #31516
Throw an exception for doc['field'].value
if this document is missing a value for the field.
After deprecation changes have been backported to 6.x,
make this a default behaviour in 7.0
Closes #29286
Now write operations like Index, Delete, Update rely on the write-index associated with
an alias to operate against. This means writes will be accepted even when an alias points to multiple indices, so long as one is the write index. Routing values will be used from the AliasMetaData for the alias in the write-index. All read operations are left untouched.
* Add basic support for field aliases in index mappings. (#31287)
* Allow for aliases when fetching stored fields. (#31411)
* Add tests around accessing field aliases in scripts. (#31417)
* Add documentation around field aliases. (#31538)
* Add validation for field alias mappings. (#31518)
* Return both concrete fields and aliases in DocumentFieldMappers#getMapper. (#31671)
* Make sure that field-level security is enforced when using field aliases. (#31807)
* Add more comprehensive tests for field aliases in queries + aggregations. (#31565)
* Remove the deprecated method DocumentFieldMappers#getFieldMapper. (#32148)
When building custom tokenfilters without an index in the _analyze endpoint,
we need to ensure that referring filters are correctly built by calling
their #setReferences() method.
Fixes #32154
When a replica is fully recovered (i.e., in `POST_RECOVERY` state) we send a request to the master
to start the shard. The master changes the state of the replica and publishes a cluster state to that
effect. In certain cases, that cluster state can be processed on the node hosting the replica
*together* with a cluster state that promotes that, now started, replica to a primary. This can
happen due to cluster state batched processing or if the master died after having committed the
cluster state that starts the shard but before publishing it to the node with the replica. If the master
also held the primary shard, the new master node will remove the primary (as it failed) and will also
immediately promote the replica (thinking it is started).
Sadly our code in IndexShard didn't allow for this, which caused [assertions](13917162ad/server/src/main/java/org/elasticsearch/index/seqno/ReplicationTracker.java (L482)) to be tripped in some of our test runs.
With the introduction of single types in 6.x, the `_type` field is no longer
indexed, which leads to certain queries that were working before throwing errors
now. One such query is the `range` query which, if performed on a single-type
index, currently throws an IAE since the field is not indexed.
This change adds special treatment for this case in the TypeFieldMapper,
comparing the range query's lower and upper bounds to the one existing type and
returning either a MatchAllDocs or a MatchNoDocs query.
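A hedged sketch of that special treatment (illustrative names, not the actual TypeFieldMapper code): compare the bounds against the single registered type and rewrite to a match-all or match-none query:
```
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

class TypeRangeSketch {
    static Query typeRangeQuery(String type, String lower, String upper,
                                boolean includeLower, boolean includeUpper) {
        boolean aboveLower = lower == null
            || (includeLower ? type.compareTo(lower) >= 0 : type.compareTo(lower) > 0);
        boolean belowUpper = upper == null
            || (includeUpper ? type.compareTo(upper) <= 0 : type.compareTo(upper) < 0);
        return aboveLower && belowUpper
            ? new MatchAllDocsQuery()                                // the single type falls inside the range
            : new MatchNoDocsQuery("[_type] is outside the range");  // no document can match
    }
}
```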
Relates to #31632, Closes #31476
With the introduction of sequence number, we no longer use versionType to
resolve out of order collision in replication and recovery requests.
This PR removes the versionType from translog. We can only remove
it in 7.0 because it is still required in a mixed cluster between 6.x and 5.x.
This commit moves additional unit test runners from being dependencies
of the test task to dependencies of check. Without this change,
reproduce lines are incorrect due to the additional test runner not
matching any of the reproduce class/method info.
Closes #31964
Previously we created a translog snapshot inside the resync method,
and that snapshot would be closed by the resync listener. However, if
the resync method throws an exception before the resync listener
is initialized, the translog snapshot won't be released.
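A hedged sketch of the general "release on early failure" pattern this fix implies (openSnapshot and handOffToListener are illustrative names, not the actual resync code):
```
import java.io.Closeable;
import java.io.IOException;

class ReleaseOnFailure {
    static void resync() throws IOException {
        Closeable snapshot = openSnapshot();
        boolean handedOff = false;
        try {
            handOffToListener(snapshot); // from here on the listener is responsible for closing it
            handedOff = true;
        } finally {
            if (handedOff == false) {
                snapshot.close(); // we failed before the hand-off, so release the snapshot ourselves
            }
        }
    }

    static Closeable openSnapshot() { return () -> {}; }
    static void handOffToListener(Closeable snapshot) { /* may throw before the listener exists */ }
}
```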
Closes #32030
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc. we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, as
we would control the testing infrastructure and so this would be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS Ciphers and the
SAMLAuthenticator class as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
reenabled with precomputed and signed SAML messages at a later stage.
IT will be covered in a subsequent PR
The current docs of the put-mapping Java API are broken. In their current
form, they create an index and use the whole mapping definition given as a JSON
string as the type name. Since we didn't check the index created in the
IndicesDocumentationIT so far, this went unnoticed.
This change adds a test to the documentation test to catch this error, changes the
documentation so it works correctly now, and adds input validation to
PutMappingRequest#buildFromSimplifiedDef(), which is used internally, to reject
calls where no mapping definition is given.
Closes #31906
Dealing with empty fields in the highlight phase can
slow down the query because the query terms extraction is done independently
on each field. This change shortcuts the highlighting performed by the unified highlighter
for fields that are not present in the document. In such cases there is nothing to highlight so
we don't need to visit the query to build the highlight builder.
With this commit we raise the limit of the child circuit breaker used in
the unit test for the circuit breaker service so it is high enough to trip
only the parent circuit breaker. The previous limit was 300 bytes but
theoretically (considering overhead) we could reach 346 bytes. Thus any
value larger than 300 bytes could trip the child circuit breaker leading
to spurious failures.
Relates #31767
* Replace Ingest ScriptContext with Custom Interface
* Make org.elasticsearch.ingest.common.ScriptProcessorTests#testScripting more precise
* Don't mock script factory in ScriptProcessorTests
* Adjust mock script plugin in IT for new API
Make SnapshotInfo and CreateSnapshotResponse parsers lenient for backwards compatibility. Remove extraneous fields from CreateSnapshotRequest toXContent.
* Adds a new auto-interval date histogram
This change adds a new type of histogram aggregation called `auto_date_histogram` where you can specify the target number of buckets you require and it will find an appropriate interval for the returned buckets. The aggregation works by first collecting documents in buckets at the second interval; when it has created more than the target number of buckets it merges these buckets into minute-interval buckets and continues collecting until it reaches the target number of buckets again. It will keep merging buckets when it exceeds the target until either collection is finished or the highest interval (currently years) is reached. A similar process happens at reduce time.
This aggregation intentionally does not support min_doc_count, offset and extended_bounds to keep the already complex logic from becoming more complex. The aggregation accepts sub-aggregations but will always operate in `breadth_first` mode, deferring the computation of sub-aggregations until the final buckets from the shard are known. min_doc_count is effectively hard-coded to zero, meaning that we will insert empty buckets where necessary.
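A hedged sketch of the collection strategy described above (the intervals and names are illustrative and only go up to days; the real aggregation goes up to years and also merges at reduce time):
```
import java.util.ArrayList;
import java.util.List;

class AutoIntervalSketch {
    // candidate roundings in millis: second, minute, hour, day
    private static final long[] ROUNDINGS = {1_000L, 60_000L, 3_600_000L, 86_400_000L};

    static List<Long> bucketKeys(long[] timestamps, int targetBuckets) {
        int rounding = 0;
        List<Long> keys = new ArrayList<>();
        for (long ts : timestamps) {
            long key = round(ts, rounding);
            if (keys.contains(key) == false) {
                keys.add(key);
            }
            // too many buckets: move to the next coarser rounding and merge what we already have
            while (keys.size() > targetBuckets && rounding < ROUNDINGS.length - 1) {
                rounding++;
                List<Long> merged = new ArrayList<>();
                for (long k : keys) {
                    long coarser = round(k, rounding);
                    if (merged.contains(coarser) == false) {
                        merged.add(coarser);
                    }
                }
                keys = merged;
            }
        }
        return keys;
    }

    private static long round(long millis, int rounding) {
        return (millis / ROUNDINGS[rounding]) * ROUNDINGS[rounding];
    }
}
```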
Closes #9572
* Adds documentation
* Added sub aggregator test
* Fixes failing docs test
* Brings branch up to date with master changes
* trying to get tests to pass again
* Fixes multiBucketConsumer accounting
* Collects more buckets than needed on shards
This gives us more options at reduce time in terms of how we do the
final merge of the buckets to produce the final result
* Revert "Collects more buckets than needed on shards"
This reverts commit 993c782d117892af9a3c86a51921cdee630a3ac5.
* Adds ability to merge within a rounding
* Fixes non-timezone doc test failure
* Fix time zone tests
* iterates on tests
* Adds test case and documentation changes
Added some notes in the documentation about the intervals that can be
returned.
Also added a test case that utilises the merging of consecutive buckets
* Fixes performance bug
The bug meant that working out the appropriate rounding took a huge amount of time
if the range of the data was large but also sparsely populated. In
these situations the rounding would be very low, so iterating through
the rounding values from the min key to the max key took a long time
(~120 seconds in one test).
The solution is to add a rough estimate first which chooses the
rounding based just on the long values of the min and max keys alone,
but selects the rounding one lower than the one it thinks is
appropriate, so the accurate method can choose the final rounding taking
into account the fact that intervals are not always fixed length.
The commit also adds more tests
* Changes to only do complex reduction on final reduce
* merge latest with master
* correct tests and add a new test case for 10k buckets
* refactor to perform bucket number check in innerBuild
* correctly derive bucket setting, update tests to increase bucket threshold
* fix checkstyle
* address code review comments
* add documentation for default buckets
* fix typo
Because this is a static method on a public API, and one that we encourage
plugin authors to use, the method with the typo is deprecated in 6.x
rather than just renamed.
With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting `indices.breaker.total.use_real_memory`.
Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.
Relates #31767
Forces fetch tasks to queue even in the event that the queue is
already full. The reasoning is that fetch tasks can only be follow-ups
to query tasks, so the number of additional fetch tasks that may enter
the threadpool is expected to be reasonable.
Closes #29442
This test produced different implementations of joda time classes,
depending on whether the data was serialized or not (DateTime vs
MutableDateTime). This now uses a common base class to extract the
milliseconds from the data.
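A hedged sketch of that idea: both `DateTime` and `MutableDateTime` implement joda's `ReadableInstant`, so the milliseconds can be read through the shared interface regardless of which concrete class survives serialization:
```
import org.joda.time.DateTime;
import org.joda.time.MutableDateTime;
import org.joda.time.ReadableInstant;

class MillisFromInstant {
    static long millis(ReadableInstant instant) {
        return instant.getMillis(); // same call for both concrete implementations
    }

    public static void main(String[] args) {
        System.out.println(millis(new DateTime(0L)));        // 0
        System.out.println(millis(new MutableDateTime(0L))); // 0
    }
}
```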
Closes #31992
* Added lenient flag for synonym-tokenfilter.
Relates to #30968
* added docs for synonym-graph-tokenfilter
-- Also made lenient final
-- changed from !lenient to lenient == false
* Changes after review (1)
-- Renamed to ElasticsearchSynonymParser
-- Added explanation for ElasticsearchSynonymParser::add method
-- Changed ElasticsearchSynonymParser::logger instance to static
* Added lenient option for WordnetSynonymParser
-- also added more documentation
* Added additional documentation
* Improved documentation
The initial check will never be true because of the special semantics of NaN,
where no value is equal to NaN, including NaN itself. Thus, x == Double.NaN always
evaluates to false. The method still works correctly because later computations
will also return NaN if the avg argument is NaN, but the intended shortcut
doesn't work.
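A tiny demonstration of the NaN semantics involved:
```
class NaNCheck {
    public static void main(String[] args) {
        double x = Double.NaN;
        System.out.println(x == Double.NaN);  // false: NaN compares unequal to everything, including itself
        System.out.println(Double.isNaN(x));  // true: this is the check the shortcut should use
    }
}
```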
A newly added class called DateFormatters now contains java.time based
builders for dates, which are also intended to be fully backwards compatible
when the name-based date formatters are picked. Also, a new class named
CompoundDateTimeFormatter has been added for being able to parse multiple
different formats.
A duelling test class has been added that ensures the same dates result when
parsing java time or joda time formatted dates for the name-based dates.
Note, that java.time and joda time are not fully backwards compatible,
which also means that old formats will currently not work with this
setup.
* add support for is_write_index in put-alias body parsing
The Rest Put-Alias Action does separate parsing of the alias body
to construct the IndicesAliasesRequest. This extra parsing
was missed in #30703.
* test that the flag is not just ignored by the parser
* disable backcompat tests
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
Closes #29286
If a get alias api call requests a specific alias pattern then
indices not having any matching aliases should not be included in the response.
This is a second attempt to fix this (first attempt was #28294).
The reason that the first attempt was reverted is that when xpack
security is enabled, index expressions (like * or _all) are resolved
before a request is processed in the get aliases transport action,
so `MetaData#findAliases` can't know whether all aliases were
requested, since the expression was already expanded into concrete alias names. This
change replaces the aliases(...) method with a replaceAliases(...) method on the AliasesRequest
class and leaves the aliases(...) method on subclasses. So there is a distinction
between when xpack security replaces aliases and a user setting aliases via
the transport or high level http client.
Closes #27763
This is a followup to #31537. It makes a number of changes requested by
a review that came after the PR was merged. These are mostly cleanups
and doc improvements.