As part of #30241 realm settings were changed to be true affix
settings. In the process of this change, the "ssl." prefix was lost
from the realm truststore password. It should be:
xpack.security.authc.realms.<type>.<name>.ssl.truststore.password
Due to a mismatch between the way we define SSL settings and load SSL
contexts, there was no way to define this legacy password setting in a
realm config.
The settings validation would reject "ssl.truststore.password" but the
SSL service would ignore "truststore.password".
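For illustration, a minimal sketch of the affix-style key that the validation should accept (the realm type/name `ldap`/`ldap1`, the path and the password value are hypothetical):

```java
import org.elasticsearch.common.settings.Settings;

// Hypothetical realm ("ldap"/"ldap1") showing the expected "ssl." prefixed affix keys.
Settings realmSettings = Settings.builder()
        .put("xpack.security.authc.realms.ldap.ldap1.ssl.truststore.path", "config/ldap-ca.jks")
        .put("xpack.security.authc.realms.ldap.ldap1.ssl.truststore.password", "changeme")
        .build();
```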
Backport of: #42336
This commit makes creators of GetField split the fields into document fields and metadata fields. It is part of larger refactoring that aims to remove the calls to static methods of MapperService related to metadata fields, as discussed in #24422.
Allow querying of FROZEN indices both through a dedicated SQL grammar
extension:
> SELECT field FROM FROZEN index
and also through a driver configuration parameter, namely:
> index.include.frozen: true/false
Fix #39390
Fix #39377
(cherry picked from commit 2445a933915f420c7f51e8505afa0a7978ce6b0f)
When asserting on the checkpoint value, if the DF has completed, the checkpoint will be 1, else 0.
Similarly, the state may be started or indexing.
Closes #42309
ShardId already implements Writeable so there is no need for it to implement Streamable too. Also the readShardId static method can be
easily replaced with direct usages of the constructor that takes a
StreamInput as an argument.
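A small before/after sketch of the deserialization change described above (the wrapper method names are illustrative only):

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.index.shard.ShardId;

// Old style: the static helper that this change removes.
static ShardId readViaHelper(StreamInput in) throws IOException {
    return ShardId.readShardId(in);
}

// Writeable style: a direct call to the constructor that takes a StreamInput.
static ShardId readViaConstructor(StreamInput in) throws IOException {
    return new ShardId(in);
}
```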
* Remove IndexShard dependency from Repository
In order to simplify repository testing, especially for BlobStoreRepository,
it's important to remove the dependency on IndexShard and reduce it to
Store and MapperService (in the snapshot case). This significantly reduces
the dependency footprint for Repository and allows unit testing without starting
nodes or instantiating entire shard instances. This change deprecates the old
method signatures and adds a unit test for FileRepository to show the advantage
of this change.
In addition, the unit testing surfaced a bug where the internal file names that
are private to the repository were used in the recovery stats instead of the
target file names, which made it impossible to relate them to the actual Lucene files
in the recovery stats.
* don't delegate deprecated methods
* apply comments
* test
rp.client_secret is a required secure setting. Make sure we fail with
a SettingsException and a clear, actionable message when building
the realm, if the setting is missing.
Enhance the handling of merging the claims sets of the
ID Token and the UserInfo response. JsonObject#merge would throw a
runtime exception when attempting to merge two objects with the
same key and different values. This could happen for an OP that
returns different values for the same claim in the ID Token and the
UserInfo response (Google does that for the profile claim).
If a claim is contained in both sets, we attempt to merge the
values if they are objects or arrays; otherwise the ID Token claim
value takes precedence and overwrites the UserInfo response.
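A rough sketch of that precedence rule, using plain Java collections rather than the actual JSON object types (the helper name and the exact array-merge semantics are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: combine a claim value that is present in both the ID Token and the UserInfo response.
@SuppressWarnings("unchecked")
static Object mergeClaim(Object idTokenValue, Object userInfoValue) {
    if (idTokenValue instanceof Map && userInfoValue instanceof Map) {
        // Merge objects, with the ID Token entries winning on conflicting keys.
        Map<String, Object> merged = new HashMap<>((Map<String, Object>) userInfoValue);
        merged.putAll((Map<String, Object>) idTokenValue);
        return merged;
    }
    if (idTokenValue instanceof List && userInfoValue instanceof List) {
        // Merge arrays by combining their elements (the real implementation may differ here).
        List<Object> merged = new ArrayList<>((List<Object>) idTokenValue);
        merged.addAll((List<Object>) userInfoValue);
        return merged;
    }
    // Otherwise the ID Token claim value takes precedence over the UserInfo one.
    return idTokenValue;
}
```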
This adds the node name to the message when we fail to start a process via the native
controller, to facilitate debugging, as otherwise it might not be known
to which node the job was allocated.
Moves the test infrastructure away from using node.max_local_storage_nodes, allowing us in a
follow-up PR to deprecate this setting in 7.x and to remove it in 8.0.
This also changes the behavior of InternalTestCluster so that starting up nodes will not automatically
reuse data folders of previously stopped nodes. If this behavior is desired, it needs to be explicitly
done by passing the data path from the stopped node to the new node that is started.
SHA256 was recently added to the Hasher class in order to be used
in the TokenService. A few tests were still using values() to get
the available algorithms from the enum, and it could happen that
SHA256 would be picked up by these.
This change adds an extra convenience method
(Hasher#getAvailableAlgoCacheHash) and ensures that only this and
Hasher#getAvailableAlgoStoredHash are used for getting the list of
available password hashing algorithms in our tests.
This performs a simple restart test to move a basic licensed
cluster from no security (the default) to security & transport TLS
enabled.
Backport of: #41933
Re-enable muted tests and accommodate recent backend changes
that result in higher memory usage being reported for a job
at the start of its life-cycle
This corrects what appears to have been a copy-paste error
where the logger for `MachineLearning` and `DataFrame` was wrongly
set to be that of `XPackPlugin`.
If there are no realms that depend on the native role mapping store,
then changes to role mappings should not cause it to perform any cache refresh.
A refresh with an empty realm array will refresh all realms.
This also fixes a spurious log warning that could occur if the
role mapping store was notified that the security index was recovered
before any realms were attached.
Backport of: #42169
Rollup jobs can define how long they should wait before rolling up new documents.
However, if the delay is smaller than the rollup interval or is not a multiple of it,
the job can create incomplete buckets, because the max boundary for a job is computed
from the time when the job started, rounded to the interval, minus the delay. This change
fixes this computation by applying the delay subtraction before the rounding, in order to ensure
that we never create a boundary that falls in the middle of a bucket.
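A minimal sketch of the boundary computation before and after the fix, assuming a fixed interval and delay in milliseconds (the numbers are hypothetical):

```java
// Hypothetical job config: a 1h rollup interval with a 15m delay that is not a multiple of it.
long intervalMillis = 3_600_000L;
long delayMillis = 900_000L;
long nowMillis = System.currentTimeMillis();

// Old computation: round down first, then subtract the delay -> the boundary can land mid-bucket.
long oldMaxBoundary = (nowMillis / intervalMillis) * intervalMillis - delayMillis;

// Fixed computation: subtract the delay first, then round down to the interval,
// so the boundary always falls on a bucket edge.
long newMaxBoundary = ((nowMillis - delayMillis) / intervalMillis) * intervalMillis;
```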
The date_histogram accepts an interval which can be either a calendar
interval (DST-aware, leap seconds, arbitrary length of months, etc) or
fixed interval (strict multiples of SI units). Unfortunately this is inferred
by first trying to parse as a calendar interval, then falling back to fixed
if that fails.
This leads to a confusing arrangement where `1d` == calendar, but
`2d` == fixed. And if you want a day of fixed time, you have to
specify `24h` (i.e. the next smallest unit). This arrangement is very
error-prone for users.
This PR adds `calendar_interval` and `fixed_interval` parameters to any
code that uses intervals (date_histogram, rollup, composite, datafeed, etc).
Calendar only accepts calendar intervals, fixed accepts any combination of
units (meaning `1d` can be used to specify `24h` in fixed time), and both
are mutually exclusive.
The old interval behavior is deprecated and will throw a deprecation warning.
It is also mutually exclusive with the two new parameters. In the future the
old dual-purpose interval will be removed.
The change applies to both REST and Java clients.
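For example, a sketch using the Java client builders, assuming the new builder methods are named `calendarInterval` and `fixedInterval`:

```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;

// Calendar interval: "1d" follows the calendar (DST shifts, variable month lengths, ...).
DateHistogramAggregationBuilder byCalendarDay = AggregationBuilders.dateHistogram("by_day")
        .field("timestamp")
        .calendarInterval(DateHistogramInterval.DAY);

// Fixed interval: a strict 24h bucket; multiples such as "2d" are only valid here.
DateHistogramAggregationBuilder byFixedDay = AggregationBuilders.dateHistogram("by_24h")
        .field("timestamp")
        .fixedInterval(new DateHistogramInterval("24h"));
```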
This commit changes how access tokens and refresh tokens are stored
in the tokens index.
Access token values are now hashed before being stored in the id
field of the `user_token` and before becoming part of the token
document id. Refresh token values are hashed before being stored
in the token field of the `refresh_token`. The tokens are hashed
without a salt value since these are v4 UUID values that have
enough entropy themselves. Both rainbow table attacks and offline
brute force attacks are impractical.
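For illustration only (this is a sketch of the idea, not the actual TokenService code), hashing an unsalted v4 UUID value could look like:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import java.util.UUID;

// Hash a v4 UUID token value with an unsalted SHA-256 digest; the UUID already carries
// enough entropy, so no salt is needed to defeat rainbow-table or brute force attacks.
static String hashTokenValue(String tokenValue) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(tokenValue.getBytes(StandardCharsets.UTF_8));
    return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
}

// e.g. the stored value would be hashTokenValue(UUID.randomUUID().toString()), never the raw token
```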
As a side effect of this change and in order to support multiple
concurrent refreshes as introduced in #39631, upon refreshing an
<access token, refresh token> pair, the superseding access token
and refresh token values are stored in the superseded token doc,
encrypted with a key that is derived from the superseded refresh
token. As such, subsequent requests to refresh the same token in
the predefined time window will return the same superseding access
token and refresh token values, without hitting the tokens index
(as this only stores hashes of the token values). AES in GCM
mode is used for encrypting the token values and the key
derivation from the superseded refresh token uses a small number
of iterations as it needs to be quick.
For backwards compatibility reasons, the new behavior is only
enabled when all nodes in a cluster are in the required version
so that old nodes can cope with the token values in a mixed
cluster during a rolling upgrade.
This commit updates the default ciphers and TLS protocols that are used
when the runtime JDK supports them. New cipher support has been
introduced in JDK 11 and 12 along with performance fixes for AES GCM.
The ciphers are ordered with PFS ciphers being most preferred, then
AEAD ciphers, and finally those with mainstream hardware support. When
available, stronger encryption is preferred for a given cipher.
This is a backport of #41385 and #41808. There are known JDK bugs with
TLSv1.3 that have been fixed in various versions. These are:
1. The JDK's bundled HttpsServer will endlessly loop under JDK 11 and JDK
12.0 (fixed in 12.0.1) based on the way the Apache HttpClient performs
a close (half close).
2. In all versions of JDK 11 and 12, the HttpsServer will endlessly loop
when certificates are not trusted or another handshake error occurs. An
email has been sent to the openjdk security-dev list and #38646 is open
to track this.
3. In JDK 11.0.2 and prior there is a race condition with session
resumption that leads to handshake errors when multiple concurrent
handshakes are going on between the same client and server. This bug
does not appear when client authentication is in use. This is
JDK-8213202, which was fixed in 11.0.3 and 12.0.
4. In JDK 11.0.2 and prior there is a bug where resumed TLS sessions do
not retain peer certificate information. This is JDK-8212885.
The way these issues are addressed is that the current java version is
checked and used to determine the supported protocols for tests that
provoke these issues.
Muting a number of AutoDetectMemoryLimitIT tests to give CI a chance to
settle before easing in required backend changes.
relates elastic/ml-cpp#486
relates #42086
Improve the hard_limit memory audit message by reporting how many bytes
over the configured memory limit the job was at the point of the last
allocation failure.
Previously the model memory usage was reported; however, this was
inaccurate and hence of limited use - primarily because the total
memory used by the model can decrease significantly after the model's
status is changed to hard_limit but before the model size stats are
reported from autodetect to ES.
While this PR contains the changes to the format of the hard_limit audit
message, it is dependent on modifications to the ml-cpp backend to
send additional data fields in the model size stats message. These
changes will follow in a subsequent PR. It is worth noting that this PR
must be merged prior to the ml-cpp one, to keep CI tests happy.
Due to a bug in the JTS WKT parser, JTS cannot parse most WKT shapes if
the shape type is written in lower case. For example, `point (1 2)`
causes JTS inside H2GIS to fail on the tr-TR locale as a result of
case-insensitive comparison.
This change replaces the extremely unfriendly message
"Number of messages analyzed must be positive" in the
case where the sample lines were incorrectly grouped
into just one message with an error that more helpfully
explains the likely root cause of the problem.
The migrate tool was added when the native realm was created, to aid
users in converting from per-node file realms to the cluster-managed
native realm. While this tool was useful at the time,
users should now be using the native realm directly. This commit
deprecates the tool, to be removed in a follow-up for 8.0.
Interval * integer number is a valid operation which previously was
only supported for foldables (literals) and not when a field was
involved. That was because:
1. There was no common type returned for that combination
2. The `BinaryArithmeticOperation` was permitting the multiplication
(called by fold()) but the BinaryArithmeticProcessor didn't allow it
Moreover, the error message for invalid arithmetic operations was wrong
because of an issue with the overloaded methods of
`LoggerMessageFormat.format`.
Fixes: #41239
Fixes: #41200
(cherry picked from commit 91039bab12d3ef27d6eac9cdc891a3b3ad0c694d)
If a basic license enables security, then we should also enforce TLS
on the transport interface.
This was already the case for Standard/Gold/Platinum licenses.
For Basic, security defaults to disabled, so some of the process
around checking whether security is actually enabled is more complex
now that we need to account for basic licenses.
Adds an initial, limited implementation of geo features to SQL. This implementation is based on the [OpenGIS® Implementation Standard for Geographic information - Simple feature access](http://www.opengeospatial.org/standards/sfs), which is the current standard for GIS system implementation. This effort concentrates on the SQL option, AKA ISO 19125-2.
Queries that are supported as a result of this initial implementation
Metadata commands
- `DESCRIBE table` - returns the correct column types `GEOMETRY` for geo shapes and geo points.
- `SHOW FUNCTIONS` - returns a list that includes supported `ST_` functions
- `SYS TYPES` and `SYS COLUMNS` display correct types `GEO_SHAPE` and `GEO_POINT` for geo shapes and geo points accordingly.
Returning geo shapes and geo points from Elasticsearch
- `SELECT geom FROM table` - returns the geo shapes and geo points as libs/geo objects in JDBC or as WKT strings in the console.
- `SELECT ST_AsWKT(geom) FROM table;` and `SELECT ST_AsText(geom) FROM table;` - return the geo shapes and geo points in their WKT representation;
Using geo points in Elasticsearch queries
- The following functions will be supported for geo points in queries, sorting and aggregations: `ST_GeomFromText`, `ST_X`, `ST_Y`, `ST_Z`, `ST_GeometryType`, and `ST_Distance`. In most cases when used in queries, sorting and aggregations, these functions are translated into scripts. These functions can be used in the SELECT clause for both geo points and geo shapes.
- `SELECT * FROM table WHERE ST_Distance(ST_GeomFromText('POINT (1 2)'), point) < 10;` - returns all records for which `point` is located within 10m of `POINT (1 2)`. In this case the WHERE clause is translated into a range query.
Limitations:
Geoshapes cannot be used in queries, sorting and aggregations as part of this initial effort. In order to fully take advantage of geoshapes we would need to have access to geoshape doc values, which is coming in #37206. `ST_Z` cannot be used on geopoints in queries, sorting and aggregations since we don't store altitude in geo_point doc values.
Relates to #29872
Backport of #42031
The `toStepKeys()` method was only called in its own test case. The real
list of StepKeys that's used in action execution is generated from the
list of actual step objects returned by `toSteps()`.
This commit removes that method.
* [ML] adding pivot.size option for setting paging size
* Changing field name to address PR comments
* fixing ctor usage
* adjust hlrc for field name change
* [ML] properly nesting objects in document source
* Throw exception on agg extraction failure, cause it to fail df
* throwing error to stop df if unsupported agg is found
Previously, TransportSingleShardAction required constructing a new
empty response object. This response object's Streamable readFrom
was used. As part of the migration to Writeable, the interface here
was updated to leverage Writeable.Reader.
relates to #34389.
- msearch exceptions should be thrown directly instead of wrapping
in a RuntimeException
- Do not allow partial results (where some indices are missing),
instead throw an exception if any index is missing
Direct the task request to the node executing the task and also refactor the task responses
so all errors are returned and set the HTTP status code based on presence of errors.
These tests are failing somewhat mysteriously, indicating that when we
renew retention leases during a restore, the retention leases that
we added before starting the restore suddenly do not exist. To make
sense of this, this commit enables trace logging.
The CircuitBreaker was introduced as means of preventing a
`StackOverflowException` during the build of the AST by the parser.
The ANTLR4 grammar causes a weird behaviour for a Parser Listener.
The `enterEveryRule()` method is often called with a different parsing
context than the respective `exitEveryRule()`. This makes it difficult
to keep track of the tree's depth, and a custom Map was used in an
attempt to match the contexts as they are encountered during `enter`
and during `exit` of the rules.
This approach had 2 important drawbacks:
1. It's hard to maintain this custom Map as the grammar changes.
2. The CircuitBreaker could often lead to false positives which caused
valid queries to return an Exception and prevent them from executing.
So, this completely removes the CircuitBreaker, which is replaced by
simple handling of the `StackOverflowException`.
Fixes: #41471
(cherry picked from commit 1559a8e2dbd729138b52e89b7e80264c9f4ad1e7)
Because realms are configured at node startup, but license levels can
change dynamically, it is possible to have a running node that has a
particular realm type configured, but that realm is not permitted under
the current license.
In this case the realm is silently ignored during authentication.
This commit adds a warning in the elasticsearch logs if authentication
fails, and there are realms that have been skipped due to licensing.
This message is not intended to imply that the realms could (or would)
have successfully authenticated the user, but it may help reduce
confusion about why authentication failed if the caller was expecting
the authentication to be handled by a particular realm that is in fact
unlicensed.
Backport of: #41778
The run task is supposed to run Elasticsearch with the given plugin or
module. However, for modules, this is most realistic if using the full
distribution. This commit changes the run setup to use the default or
oss distribution as appropriate.
This is related to #27260. Currently we have a single read buffer that
is no larger than a single TLS packet. This prevents us from reading
multiple TLS packets in a single socket read call. This commit modifies
our TLS work to support reading in a manner similar to the plaintext case. The data
will be copied to a (potentially) recycled TLS packet-sized buffer for
interaction with the SSLEngine.
When applying a license update, we provide "acknowledgement messages"
that indicate which features will be affected by the change in license.
This commit updates the messages that are provided when installing a
basic license, so that they reflect the changes made to the security
features that are included in that license type.
Backport of: #41776
This switches the strategy used to download machine learning artifacts
from a manual download through S3 to using an Ivy repository on top of
S3. This gives us all the benefits of Gradle dependency resolution
including local caching.
With this change, we will verify the consistency of version and source
(besides id, seq_no, and term) of live documents between shard copies
at the end of disruption tests.
This commit is a refactoring of how we filter addresses on
interfaces. In particular, we refactor all of these methods into a
common private method. We also change the order of logic to first check
if an address matches our filter and then check if the interface is
up. This is to possibly avoid problems we are seeing where devices are
flapping up and down while we are checking for loopback addresses. We do
not expect the loopback device to flap up and down so by reversing the
logic here we avoid that problem on CI machines. Finally, we expand the
error message when this does occur so that we know which device is
flapping.
This is related to #27260. Currently there is a setting
http.read_timeout that allows users to define a read timeout for the
http transport. This commit implements support for this functionality
with the transport-nio plugin. The behavior here is that a repeating
task will be scheduled for the interval defined. If there have been
no requests received since the last run and there are no inflight
requests, the channel will be closed.
* [ML] Refactor NativeStorageProvider to enable reuse
Moves `NativeStorageProvider` as a machine learning component
so that it can be reused for other job types. Also, we now
pass the persistent task description as a unique identifier, which
avoids conflicts between jobs of different types but with the same ids.
* Adding nativeStorageProvider as component
Since `TransportForecastJobAction` is expected to have a `NativeStorageProvider` injected, we need to make sure that it is a constructed component, as it does not have a zero-parameter public constructor.
This is modelled on the qa test for TLS on basic.
It starts a cluster on basic with security & performs a number of
security related checks.
It also performs those same checks on a trial license.
This adds support for using security on a basic license.
It includes:
- AllowedRealmType.NATIVE realms (reserved, native, file)
- Roles / RBAC
- TLS (already supported)
It does not support:
- Audit
- IP filters
- Token Service & API Keys
- Advanced realms (AD, LDAP, SAML, etc)
- Advanced roles (DLS, FLS)
- Pluggable security
As with trial licences, security is disabled by default.
This commit does not include any new automated tests, but existing tests have been updated.
This commit introduces the `.security-tokens` and `.security-tokens-7`
alias-index pair. Because index snapshotting is at index-level granularity
(i.e. you cannot snapshot a subset of an index), snapshotting `.security` had
the undesirable effect of storing ephemeral security tokens. The changes
herein address this issue by moving tokens "seamlessly" (without user
intervention) to another index, so that a "Security Backup" (i.e. a snapshot of
`.security`) would not be bloated by ephemeral data.
Today we allow adding entries from a file or from a string, yet we
internally maintain this distinction such that if you try to add a value
from a file for a setting that expects a string or add a value from a
string for a setting that expects a file, you will have a bad time. This
causes pain for operators, as for each setting they need to know
this difference. Yet, we do not need to maintain this distinction
internally as they are bytes after all. This commit removes that
distinction and includes logic to upgrade legacy keystores.
This commit removes the usage of the `BulkProcessor` to write history documents
and delete triggered watches on an `EsRejectedExecutionException`. Since the
exception could be handled on the write thread, the write thread can be blocked
waiting on watcher threads (due to a synchronous method). This is problematic
since those watcher threads can be blocked waiting on write threads.
This commit also moves the handling of the exception to the generic threadpool
to avoid submitting write requests from the write thread pool.
Fixes #41390
Today we choose to initialize max_seq_no_of_updates on primaries only so
we can deal with a situation where a primary is on an old node (before
6.5) which does not have MSU while replicas are on new nodes (6.5+).
However, this strategy is quite complex and can lead to bugs (for
example #40249) since we have to assign a correct value (not too low) to
MSU in all possible situations (before recovering from translog,
restoring history on promotion, and handing off relocation).
Fortunately, we don't have to deal with this BWC in 7.0+ since all nodes
in the cluster should have MSU. This change simplifies the
initialization of MSU by always assigning it a correct value in the
constructor of Engine regardless of whether it's a replica or primary.
Relates #33842
This is related to #27260. Currently for the SSLDriver we allocate a
dedicated network write buffer and encrypt the data into that buffer one
buffer at a time. This requires constantly switching between encrypting
and flushing. This commit adds a dedicated outbound buffer for SSL
operations that will internally allocate new packet sized buffers as
they are needed (for writing encrypted data). This allows us to totally
encrypt an operation before writing it to the network. Eventually it can
be hooked up to buffer recycling.
This commit also backports the following commit:
Handle WRAP ops during SSL read
It is possible that a WRAP operation can occur while decrypting
handshake data in TLS 1.3. The SSLDriver does not currently handle this
well as it does not have access to the outbound buffer during the read call.
This commit moves the buffer into the Driver to fix this issue. Data
wrapped during a read call will be queued for writing after the read
call is complete.
Motivated by slow snapshot deletes reported in e.g. #39656 and the fact that these likely are a contributing factor to repositories accumulating stale files over time when deletes fail to finish in time and are interrupted before they can complete.
* Makes snapshot deletion async and parallelizes some steps of the delete process that can be safely run concurrently via the snapshot thread pool
* I did not take the biggest potential speedup step here and parallelize the shard file deletion because that's probably better handled by moving to bulk deletes where possible (and can still be parallelized via the snapshot pool where it isn't). Also, I wanted to keep the size of the PR manageable.
* See https://github.com/elastic/elasticsearch/pull/39656#issuecomment-470492106
* Also, as a side effect this gives the `SnapshotResiliencyTests` a little more coverage for master failover scenarios (since parallel access to a blob store repository during deletes is now possible since a delete isn't a single task anymore).
* By adding a `ThreadPool` reference to the repository this also lays the groundwork to parallelizing shard snapshot uploads to improve the situation reported in #39657
Thanks to #34071, there is enough information in field caps to infer
the table structure and thus use the same API consistently across the
IndexResolver.
(cherry picked from commit f99946943a3350206b6bca774b2f060f41a787b3)
This is related to #27260. Currently for the SSLDriver we allocate a
dedicated network write buffer and encrypt the data into that buffer one
buffer at a time. This requires constantly switching between encrypting
and flushing. This commit adds a dedicated outbound buffer for SSL
operations that will internally allocate new packet sized buffers as
they are needed (for writing encrypted data). This allows us to totally
encrypt an operation before writing it to the network. Eventually it can
be hooked up to buffer recycling.
TLS 1.3 changes to the SSLEngine introduced a scenario where an UNWRAP
call during a handshake can consume a close notify alert without
throwing an exception. This means that we continue down a codepath where
we assert that we are still in handshaking mode. Transitioning to closed
from handshaking is a valid scenario. This commit removes this
assertion.
* [ML] Adds progress reporting for transforms
* fixing after master merge
* Addressing PR comments
* removing unused imports
* Adjusting afterKey handling and percentage to be 100*
* Making sure it is a linked hashmap for serialization
* removing unused import
* addressing PR comments
* removing unused import
* simplifying code, only storing total docs and decrementing
* adjusting for rewrite
* removing initial progress gathering from executor
Today the `_field_caps` API returns the list of indices where a field
is present only if this field has different types within the requested indices.
However if the request is an index pattern (or an alias, or both...) there
is no way to infer the indices if the response contains only fields that have
the same type in all indices. This commit changes the response to always return
the list of indices. It also adds a way to retrieve unmapped fields
in a specific section per field called `unmapped`. This section is created for each field
that is present in some indices but not in all, if the parameter `include_unmapped` is set to
true in the request (defaults to false).
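For example, a request using the low-level Java REST client might look like this (the `logs-*` pattern and the `restClient` instance are assumptions for the sketch):

```java
import java.io.IOException;

import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Field caps over an index pattern, asking for the new per-field "unmapped" section.
static Response fieldCapsWithUnmapped(RestClient restClient) throws IOException {
    Request request = new Request("GET", "/logs-*/_field_caps");
    request.addParameter("fields", "*");
    request.addParameter("include_unmapped", "true");
    return restClient.performRequest(request);
}
```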
* Introduce Delegating ActionListener Wrappers
* Dry up use cases of ActionListener that simply pass through the response or exception to another listener
The Has Privileges API allows one to tap into the authorization process, to validate
privileges without actually running the operations to be authorized. This commit
fixes a bug, in which the Has Privilege API returned spurious results when checking
for index privileges over restricted indices (currently .security, .security-6,
.security-7). The actual authorization process is not affected by the bug.
hamcrest has some improvements in newer versions, like FileMatchers,
that make assertions about file existence cleaner. This commit upgrades
to the latest version of hamcrest so we can start using new and improved
matchers.
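For instance, a file-existence assertion with the newer matchers might look like this (the path is hypothetical):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.io.FileMatchers.anExistingFile;

import java.io.File;

// Instead of assertTrue(configFile.exists()), the matcher produces a clearer failure message.
File configFile = new File("config/elasticsearch.yml");
assertThat(configFile, anExistingFile());
```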
Adds some validation to prevent duplicate source names from being
used in the composite agg.
Also refactored to use a ConstructingObjectParser and removed the
private ctor and setter for sources, making it mandatory.
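A sketch of the kind of composite definition that the new validation rejects, two sources sharing the same name (the names are hypothetical, and whether the duplicate is rejected at build or parse time is an implementation detail):

```java
import java.util.Arrays;
import java.util.List;

import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;

// Both sources are named "group": the duplicate name is now rejected up front
// instead of silently producing a confusing aggregation.
List<CompositeValuesSourceBuilder<?>> sources = Arrays.asList(
        new TermsValuesSourceBuilder("group").field("field_a"),
        new TermsValuesSourceBuilder("group").field("field_b"));

CompositeAggregationBuilder composite = new CompositeAggregationBuilder("my_buckets", sources);
```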
* fix #35262: define deprecations of APIs as a whole and urls
* document hot threads deprecated paths
* deprecate scroll_id as part of the URL, documented only as part of the body which is a safer behaviour as well
* use version numbers up to patch version
* rest spec parser picks up deprecated paths as paths too
(cherry picked from commit 7e06023e7603b7584bfd9ee4e8a1ccd82c208ce7)
Implement a more trivial case of the CASE expression which is
expressed as a traditional function with 2 or 3 arguments. e.g.:
IIF(a = 1, 'one', 'many')
IIF(a > 0, 'positive')
Closes: #40917
(cherry picked from commit add02f4f553ad472026dcc1eaa84245a0558a4b0)
Implement the ANSI SQL CASE expression which provides the if/else
functionality common to most programming languages.
The CASE expression can have multiple WHEN branches and becomes a
powerful tool for SQL queries as it can be used in SELECT, WHERE,
GROUP BY, HAVING and ORDER BY clauses.
Closes: #36200
(cherry picked from commit 8b2577406f47ae60d15803058921d128390af0b6)
The `DistinguishedNamePredicate`, used for matching users to role mapping
expressions, should handle users with null DNs. But it fails to do so (and this is
an NPE bug) if the role mapping expression contains a Lucene regexp or a wildcard.
The fix simplifies `DistinguishedNamePredicate` to not handle null DNs at all, and
instead use the `ExpressionModel#NULL_PREDICATE` for the DN field, just like
any other missing user field.
The X-Pack plugin implements ScriptEngine yet it does not actually
implement any of the methods on the interface, effectively making this a
no-op. This commit removes this interface from the X-Pack plugin.
It looks like these permissions were copy/pasted from another plugin yet
almost none of these permissions are needed for the CCR plugin. This
commit removes all these unneeded permissions from the CCR plugin.
The SimplifyConditional rule removes NULL literals from conditional
functions to simplify their evaluation. This happens in the Optimizer
and a new instance of the conditional function is generated. Previously,
the dataType was not set properly (it defaulted to DataType.NULL) for
those new instances, and since resolveType() wasn't called again
this resulted in always returning null.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar')
COALESCE(null, 'foo', null, 'bar')
-----------------
null
This issue was not visible before because the tests always used an alias
for the conditional function which caused the resolveType() to be
called which sets the dataType properly.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar') as c
c
-----------------
foo
(cherry picked from commit c39980a65dd593363f1d8d1b038b26cb0ce02aaf)
Fix bug in predicate subtraction that caused the evaluation to be
skipped on the first mismatch instead of evaluating the whole list. In
some cases this caused not only an incorrect result but one that kept on
growing, causing the engine to bail.
Fix #40835
(cherry picked from commit bd2b33d6eaca616a5acd846204e2d12f905854d4)
* Handle the scenario where assertLogs() is not called from a test method
but the audit rolling file rolls over.
* Use a local boolean variable instead of the static one to account for
assertBusy() code block possibly being called multiple times and having
different execution paths.
(cherry picked from commit 6f642196cbab90079c610097befc794746170df1)
This commit extracts the template management from Watcher into an
abstract class, so that templates and lifecycle policies can be managed
in the same way across multiple plugins. This will be useful for SLM, as
well as potentially ILM and any other plugins which need to manage index
templates.
The date_histogram internally converts obsolete timezones (such as
"Canada/Mountain") into their modern equivalent ("America/Edmonton").
But rollup just stored the TZ as provided by the user.
When checking the TZ for query validation we used a string comparison,
which would fail due to the date_histo's upgrading behavior.
Instead, we should convert both to a TimeZone object and check if their
rules are compatible.
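A minimal sketch of the idea using the JDK `TimeZone` API (not necessarily the exact code used by the validation):

```java
import java.util.TimeZone;

// The rollup job stored the TZ exactly as the user wrote it, while date_histogram
// normalizes obsolete IDs, so a plain string comparison fails even though the
// two zones share the same rules.
TimeZone jobTz = TimeZone.getTimeZone("Canada/Mountain");
TimeZone queryTz = TimeZone.getTimeZone("America/Edmonton");

boolean idsMatch = jobTz.getID().equals(queryTz.getID()); // false - the old, too-strict check
boolean rulesMatch = jobTz.hasSameRules(queryTz);         // true - the compatibility we actually care about
```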
CURRENT_DATE/CURRENT_TIME/CURRENT_TIMESTAMP can be used as SQL keywords
(without parentheses) and therefore there is a special rule in the
grammar to accommodate this.
Previously, this rule was also catching the parenthesised versions of those functions,
not allowing the {fn <functionName>()} form to be used. E.g.:
{fn current_time(2)} or {fn current_timestamp()}
Now, the grammar rule catches only the keyword versions and all the parenthesised
versions go through the normal function resolution. As a consequence, the validation
of the precision is moved from the parser level (ExpressionBuilder) to the function
implementations.
Fixes: #41240
(cherry picked from commit bfbc9f140144b5a35aa29008b58bf58074419853)
When the same alias points to multiple indices we can write to only one index
with `is_write_index` value `true`. The special handling in case of the put
mapping request (to resolve authorized indices) has a check on the number of indices
for a concrete index. If multiple indices existed, then it marked the request
as unauthorized.
The check has been modified to consider the write index flag; only when the
requested index matches the write index of the alias is the alias considered
for authorization.
Closes #40831
In `TransportRolloverAction`, before doing rollover, we resolve the
source index name (write index) from the alias in the rollover request.
Before evaluating the conditions and executing the rollover action, we
retrieve stats, but to do so we used the source index name
resolved from the alias instead of the alias from the request.
This fails when the user is assigned a role with index privileges on the
alias instead of the concrete index. This commit fixes this by using
the alias from the request.
After this change, we verified that when we retrieve all the stats (including write and read indices)
we consider only the source index.
Closes #40771
When specifying a limit over an agg sorting, the limit will be pushed
down to the grouping which affects the custom sorting. This commit fixes
that and restricts the limit only to sorting.
Fix #40984
(cherry picked from commit da3726528d9011b05c0677ece6d11558994eccd9)
Although the translation rule was implemented in the `Optimizer`,
the rule was not added to the list of rules to be executed.
Relates to #41195
Follows #37936
(cherry picked from commit f426a339b77af6008d41cc000c9199fe384e9269)
Yet another improvement to SYS TABLES on differentiating between table
types specified as '%' and '' while maintaining legacy support for null.
Fix #40775
(cherry picked from commit 6dbca5edd335eb1da8e7825389a15e5fe45397d4)
Traditionally we have [recommended](https://www.elastic.co/guide/en/beats/filebeat/current/monitoring.html) that Beats send their monitoring data to the **production** Elasticsearch cluster. Beats do this by calling the `POST _monitoring/bulk` API. When Security is enabled this API call requires the `cluster:admin/xpack/monitoring/bulk` privilege. The built-in `beats_system` role has this privilege.
[Going forward](https://github.com/elastic/beats/pull/9260), Beats will be able to send their monitoring data directly to the **monitoring** Elasticsearch cluster. Beats will do this by calling the regular `POST _bulk` API. When Security is enabled this API call requires the `indices:data/write/bulk` privilege. Further, the call has to be able to create any indices that don't exist.
This PR expands the built-in `beats_system` role's privileges. Specifically, it adds index-level `write` and `create_index` privileges for `.monitoring-beats-*` indices.
This will allow Beats users to continue using the `beats_system` role for the new direct monitoring route when Security is enabled.