The following "user privileges" objects have been created
on the client side:
* ApplicationResourcePrivileges
* IndicesPrivileges
* GlobalOperationPrivilege
* GlobalPrivileges
as well as the aggregating `Role` entity.
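A rough sketch of how these objects might be composed (the builder-style calls below are an assumption about the client API shape and may not match the actual method names):

```java
import org.elasticsearch.client.security.user.privileges.IndicesPrivileges;
import org.elasticsearch.client.security.user.privileges.Role;

public class RolePrivilegesSketch {
    public static void main(String[] args) {
        // Hypothetical index-level privileges: read access to a made-up "logs-*" pattern.
        IndicesPrivileges logsRead = IndicesPrivileges.builder()
            .indices("logs-*")
            .privileges("read")
            .build();

        // Aggregate them into a Role; the role name and privilege names are illustrative.
        Role role = Role.builder()
            .name("logs_reader")
            .clusterPrivileges("monitor")
            .indicesPrivileges(logsRead)
            .build();

        System.out.println(role);
    }
}
```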
This change adds a high level freeze API that allows marking an
index as frozen and vice versa. Indices must be closed in order to
become frozen, and an open but frozen index must be closed to be
defrosted. This change also adds an `index.frozen` setting to
mark frozen indices and integrates the frozen engine with the
SearchOperationListener, which resets the directory reader before
each search phase and releases it afterwards.
Relates to #34352
Depends on #34357
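A minimal sketch of the resulting workflow, using the low-level REST client; the index name is made up and the endpoint paths and call sequence are assumptions based on the description above:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class FreezeIndexSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // The index must be closed before it can be marked as frozen.
            client.performRequest(new Request("POST", "/my-index/_close"));
            // Mark the index as frozen (sets the index.frozen setting).
            client.performRequest(new Request("POST", "/my-index/_freeze"));
            // Reopen the now-frozen index so it can be searched again.
            client.performRequest(new Request("POST", "/my-index/_open"));
        }
    }
}
```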
The MockTcpTransport is not friendly in regard to memory usage: it must
allocate multiple byte arrays for every message. This change improves the
memory situation by failing fast if the message is improperly formatted.
Additionally, it uses reusable big arrays for at least half of the
allocated byte arrays.
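As a generic illustration of the fail-fast idea (not the actual MockTcpTransport code), a length-prefixed header can be validated before any payload buffer is allocated:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class FailFastFrameReader {
    // Illustrative cap; the real transport derives its limits differently.
    private static final int MAX_MESSAGE_SIZE = 10 * 1024 * 1024;

    /** Reads one length-prefixed message, rejecting malformed frames before allocating. */
    static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int length = data.readInt();
        // Fail fast on an improperly formatted frame instead of allocating a bogus array.
        if (length < 0 || length > MAX_MESSAGE_SIZE) {
            throw new IOException("invalid message length: " + length);
        }
        byte[] payload = new byte[length];
        data.readFully(payload);
        return payload;
    }
}
```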
Validate the remote cluster license as part of the put auto follow pattern API call,
in addition to the validation that happens when the auto follow coordinator starts auto
following indices in the leader cluster.
Also added a qa module that tests what happens to CCR after downgrading to a basic license.
Existing active follow indices should continue to follow,
but the auto follow feature should not pick up new leader indices.
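For context, the license check now happens when the pattern is created, i.e. during a call roughly like the following (cluster alias, pattern name, and body fields are placeholders):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class PutAutoFollowPatternSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_ccr/auto_follow/logs-pattern");
            request.setJsonEntity(
                "{ \"remote_cluster\": \"leader\", \"leader_index_patterns\": [\"logs-*\"] }");
            // The remote cluster license is now validated as part of this call,
            // not only when the auto follow coordinator later picks up leader indices.
            client.performRequest(request);
        }
    }
}
```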
Add an `IsNull` node to the parser to simplify expressions, so that `<value> IS NULL` is
no longer translated internally to `NOT(<value> IS NOT NULL)`.
Replace `IsNotNullProcessor` with `CheckNullProcessor` to encapsulate both the
isNull and isNotNull functionality.
Closes: #34876
Fixes: #35171
* DISCOVERY: Fix RollingUpgradeTests
* Don't manually manage min master nodes if not necessary
* Remove some dead code
* Allow for manually supplying list of seed nodes
* Closes #35178
This documents how to include the search queries in the audit log.
There is a catch: even when `emit_request_body` is enabled, which should
output the queries included in request bodies, search queries were not output
because, implicitly, no REST layer audit event type was included.
This folk knowledge is herein imprinted.
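A minimal sketch of the settings involved, expressed with the Settings builder for brevity (in practice they go into elasticsearch.yml; treating `authentication_success` as the REST layer event type to include is an assumption for illustration):

```java
import org.elasticsearch.common.settings.Settings;

public class AuditQuerySettingsSketch {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
            // Print request bodies (and therefore search queries) in audit events.
            .put("xpack.security.audit.logfile.events.emit_request_body", true)
            // Request bodies only show up on REST layer audit event types, so at
            // least one such type must be included as well.
            .putList("xpack.security.audit.logfile.events.include", "authentication_success")
            .build();
        System.out.println(settings);
    }
}
```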
This change adds a logger for the query and fetch phases that prints all requests
before their execution at the trace level. This will help debug cases where an issue
occurs during execution, since only completed queries are logged by the slow logs.
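To use it, the relevant logger can be switched to trace dynamically; the logger name below is a guess at the package that hosts the new logging and should be adjusted accordingly:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class QueryTraceLoggingSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_cluster/settings");
            // "logger.org.elasticsearch.search" is illustrative, not the confirmed logger name.
            request.setJsonEntity(
                "{ \"transient\": { \"logger.org.elasticsearch.search\": \"trace\" } }");
            client.performRequest(request);
        }
    }
}
```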
Many realm tests were written to use separate setting objects for
"global settings" and "realm settings".
Since #30241 there is no distinction between these settings, so these
tests can be cleaned up to use a single Settings object.
We changed the way realm settings are defined, and this affects custom
realms in SecurityExtensions. This change adds those details to the
breaking changes docs.
Relates: #30241
* [DOCS] ILM API Ref edits
* [DOCS] Fixed endpoint for DELETE policy.
* [DOCS] Removed comparison to setting index.lifecycle.name to null.
* [DOCS] Fixed xrefs to explain API.
Change the `nullable()` logic of AND and OR to false, since in the Optimizer
we cannot fold to null as we might be handling an expression in the
SELECT clause.
Introduce folding of null for AND and OR expressions in PruneFilter(),
since we now know that we are in a HAVING or WHERE clause and we
can fold `null` to `false`.
Fixes: #35088
Co-authored-by: Costin Leau <costin.leau@gmail.com>
This is related to #34483. It introduces a namespaced setting for
compression that allows users to configure compression on a per remote
cluster basis. The transport.tcp.compress setting remains as a fallback.
If transport.tcp.compress is set to true, then all requests
and responses are compressed. If it is set to false, only requests to
clusters that enable the cluster.remote.<cluster_name>.transport.compress
setting are compressed. However, after this change, regardless of any
local settings, responses will be compressed if the request that was
received was compressed.
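A sketch of the resulting configuration, using a made-up remote cluster alias `cluster_one`:

```java
import org.elasticsearch.common.settings.Settings;

public class RemoteCompressionSettingsSketch {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
            // Fallback: do not compress by default...
            .put("transport.tcp.compress", false)
            // ...but compress requests sent to this particular remote cluster.
            .put("cluster.remote.cluster_one.transport.compress", true)
            .build();
        System.out.println(settings);
    }
}
```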
Today the OS information returned in node stats only includes a
high-level name of the OS (e.g., "Linux"). Yet for some uses this is
too high-level, and knowing the underlying OS at a finer level of
granularity can be useful. This commit extracts the pretty name on
Linux from /etc/os-release. This pretty name usually includes the Linux
vendor and the Linux vendor version number (e.g., Fedora 28).
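Conceptually the extraction boils down to reading the PRETTY_NAME entry; a simplified stand-alone version (not the actual implementation) might look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Optional;

public class OsPrettyNameSketch {
    /** Returns the PRETTY_NAME value from /etc/os-release, e.g. "Fedora 28". */
    static Optional<String> prettyName() throws IOException {
        return Files.readAllLines(Paths.get("/etc/os-release")).stream()
            .filter(line -> line.startsWith("PRETTY_NAME="))
            .map(line -> line.substring("PRETTY_NAME=".length()).replace("\"", "").trim())
            .findFirst();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(prettyName().orElse("unknown"));
    }
}
```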
This follows #33552, where the `_authenticate` API added a new
`User` object for the API's response. This changes the `put_user`
API to also employ a `User` object in the request.
The `User` object changed slightly.
A bug where `put_user` only put/updated enabled (but not disabled)
users has been fixed.
In #26541 we introduced a hard limit of 1024 on the number of fields a query can
be expanded to. Instead of using a hard limit, we should make this
configurable. This change removes the hard limit check and uses the existing
`max_clause_count` setting instead.
Closes #34778
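A sketch of raising the limit through that setting (a static node setting, so in practice it belongs in elasticsearch.yml; the value is arbitrary):

```java
import org.elasticsearch.common.settings.Settings;

public class MaxClauseCountSketch {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
            // Field expansion now respects this limit instead of a hard-coded 1024.
            .put("indices.query.bool.max_clause_count", 2048)
            .build();
        System.out.println(settings.get("indices.query.bool.max_clause_count"));
    }
}
```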
Currently, when aggregating on an unmapped date field (e.g. using a
date_histogram) we don't preserve the aggregation's `format` setting but instead
use the default format. This can lead to losing the aggregation's `format` when
aggregating over several indices where some of them contain unmapped date fields
and are encountered first in the reduce phase.
Related to #31760
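For example, an aggregation built like the one below should keep its explicit format through the reduce phase even when the first responses come from indices where the field is unmapped (field name and format are illustrative):

```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;

public class DateHistogramFormatSketch {
    public static void main(String[] args) {
        // The explicit "yyyy-MM-dd" format should be preserved, not replaced by
        // the default format, when "timestamp" is unmapped in some indices.
        DateHistogramAggregationBuilder agg = AggregationBuilders.dateHistogram("per_day")
            .field("timestamp")
            .dateHistogramInterval(DateHistogramInterval.DAY)
            .format("yyyy-MM-dd");
        System.out.println(agg);
    }
}
```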
Today the `composite` aggregation throws an error if a source targets an
unmapped field and `missing_bucket` is set to false. Documents without a
value for a source cannot produce any bucket if `missing_bucket` is not
activated, so the error is a shortcut to say that the response will be empty.
However, this is not consistent with the `terms` aggregation, which accepts
unmapped fields by default even though the response is also guaranteed to be empty.
This commit removes this restriction: if a source contains an unmapped field,
we now return an empty response (no buckets).
Closes #35317
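A sketch of a composite source over a possibly unmapped field, which with this change yields an empty response rather than an error when `missing_bucket` is left at its default of false (names are illustrative):

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;

public class CompositeUnmappedSketch {
    public static void main(String[] args) {
        // missing_bucket is not set (defaults to false); if "category" is unmapped,
        // the aggregation now returns no buckets instead of failing.
        List<CompositeValuesSourceBuilder<?>> sources = Collections.singletonList(
            new TermsValuesSourceBuilder("by_category").field("category"));
        CompositeAggregationBuilder composite =
            new CompositeAggregationBuilder("categories", sources);
        System.out.println(composite);
    }
}
```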
The current implementation of asyncBlockOperations() can be used to
execute some code once all indexing operation permits have been acquired;
it then releases all permits immediately after the code execution. This
immediate release is not suitable for tasks that need to keep all
permits over multiple execution steps.
This commit adds a new asyncBlockOperations() that exposes a Releasable,
making it possible to acquire all permits and only release them all
when needed by closing the Releasable. The existing blockOperations()
method has been modified to delegate permit acquisition/releasing to this new
method.
Relates to #33888
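A simplified, generic illustration of the pattern (not the actual IndexShard permit implementation): acquire everything up front and hand the caller a Releasable that gives back all permits only when it is closed:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Consumer;

import org.elasticsearch.common.lease.Releasable;

public class PermitBlockSketch {
    private static final int TOTAL_PERMITS = Integer.MAX_VALUE;
    private final Semaphore permits = new Semaphore(TOTAL_PERMITS, true);

    /** Acquires all permits and passes a handle that releases them only when closed. */
    void asyncBlockOperations(Consumer<Releasable> onAllPermitsAcquired) throws InterruptedException {
        permits.acquire(TOTAL_PERMITS);
        // The caller decides when the block ends, possibly after several execution steps.
        onAllPermitsAcquired.accept(() -> permits.release(TOTAL_PERMITS));
    }
}
```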