This is related to #34405 and a follow-up to #34753. It makes a number
of changes to our current keepalive pings.
The ping interval configuration is moved to the ConnectionProfile.
The server channel now responds to pings. This makes the keepalive
pings bidirectional.
On the client-side, the pings can now be optimized away. What this
means is that if the channel has received a message or sent a message
since the last pinging round, the ping is not sent for this round.
Sometimes the test fixtures plugin did not correctly disable tasks
from the build plugin as it should.
The plugin manager and tasks both use named domain object collections, so
the previous code should have worked.
I did not have time to track it down, but suspect there's some race
condition in Gradle causing this. The plugin manager is still incubating.
Since the tasks are on the classpath even if the plugin is not applied, we
don't really need to involve the plugin at all.
Closes #36041
Today the default for USE_ZEN2 is false and it is overridden in many places. By
defaulting it to true we can be sure that the only places in which Zen2 does
not work are those in which it is explicitly set to false.
This commit changes the serialization version from V_7_0_0 to
V_6_6_0 for the authenticate API response now that the work to add
the realm info in the response has been backported to 6.x in
b515ec7c9b9074dfa2f5fd28bac68fd8a482209e
Relates #35648
Looks like some odd race condition causes failed builds by attempting to
run the task that should be disabled.
Disable the task explicitly until we figure it out.
This change adds an extra check that verifies that all primary shards
of an index that is about to be auto followed have been started.
If not all primary shards have been started for an index,
then the next auto follow run will try to auto follow
this index again.
Closes #35480
In #30241 Realm settings were changed, but the Kerberos realm settings
were not registered correctly. This change fixes the registration of
those Kerberos settings.
Also adds a new integration test that ensures every internal realm can
be configured in a test cluster.
Also fixes the QA test for Kerberos.
Resolves: #35942
Right now using the `GET /_tasks/<taskid>` API and causing a task to opt
in to saving its result after being completed requires permissions on
the `.tasks` index. When we built this we thought that that was fine,
but we've since moved towards not leaking details like "persisting task
results after the task is completed is done by saving them into an index
named `.tasks`." A more modern way of doing this would be to save the
tasks into the index "under the hood" and to have APIs to manage the
saved tasks. This is the first step down that road: it drops the
requirement to have permissions to interact with the `.tasks` index when
fetching task statuses and when persisting statuses beyond the lifetime
of the task.
In particular, this moves the concept of the "origin" of an action into
a more prominent place in the Elasticsearch server. The origin of an
action is ignored by the server, but the security plugin uses the origin
to make requests on behalf of a user in such a way that the user need
not have permissions to perform these actions. It *can* be made to be
fairly precise. More specifically, we can create an internal user just
for the tasks API that just has permission to interact with the `.tasks`
index. This change doesn't do that; instead, it uses the ubiquitous
"xpack" user which has most permissions because it is simpler. Adding
the tasks user is something I'd like to get to in a follow up change.
Instead, the majority of this change is about moving the "origin"
concept from the security portion of x-pack into the server. This should
allow any code to use the origin. To keep the change manageable I've also
opted to deprecate rather than remove the "origin" helpers in the
security code. Removing them is almost entirely mechanical and I'd like
to do that in a follow-up as well.
Relates to #35573
When the grouping key of a GROUP BY is a painless script (functions are
involved), the data type of the key was incorrect in certain cases
(Boolean, IP, Date). This resulted in returning the wrong data type for these
columns in the query results. E.g.:
```
SELECT COUNT(*), a > 10 AS a FROM t GROUP BY a
```
Fixes: #35662
Move classes under the same package to avoid internal classes being
exposed to the outside. Remove public visibility outside 3 classes:
EsDriver, EsDataSource and EsTypes.
The driver only has one package, namely org.elasticsearch.xpack.sql.jdbc.
Use the Es prefix for classes to ease name conflicts and indicate their
destination.
Fix #35437
This change removes the deprecated useDisMax() and useAllFields() methods from
the QueryStringQueryBuilder and related tests. The disMax parameter has already
been a no-op since 6.0, and useAllFields has likewise been deprecated since 6.0
with a direct replacement via defaultField.
Clients can use the Kerberos V5 security mechanism, but when it
was used to establish the security context, it failed to do so because
the Elasticsearch server only accepted the SPNEGO mechanism.
This commit adds support for accepting Kerberos V5 credentials
over SPNEGO.
Closes #34763
- Add the authentication realm and lookup realm name and type in the response for the _authenticate API
- The authentication realm is set as the lookup realm too (instead of setting the lookup realm to null or empty) when no lookup realm is used.
* [Rollup] Add more diagnostic stats to job
To help debug future performance issues, this adds the
min/max/avg/count/total latencies (in milliseconds) for the search
and bulk phases. This latency is the total service time including
transfer between nodes, not just the `took` time.
It also adds the count of search/bulk failures encountered during
runtime. This information is also in the log, but a runtime counter
will help expose problems faster.
* review cleanup
* Remove dead ParseFields
This changes the exporter code -- most notably the `http` exporter --
to use async operations throughout the resource management and bulk
initialization code (the bulk indexing of monitoring documents was
already async).
As part of this change, this does change one semi-core aspect of the
`HttpResource` class in that it will no longer block all concurrent calls
until the first call completes with
`HttpResource::checkAndPublishIfDirty`.
Now, any parallel attempts to check the resources will be skipped until
the first call completes (success or failure). While this is a technical
change, it has very little practical impact because the existing behavior
was either quick success (after which every blocked request was processed) or
each request timed out and failed anyway, thus being effectively
skipped (and a burden on the system).
The assumption was that step times are always set.
Tests passed, which led me to believe this was true. There is a case
where shrunk indices have their step phase/action/step details set,
but with no time information (in the CopyExecutionStateStep).
The Explain API fails for these.
This commit removes the use of AbstractComponent in xpack where it was
still being extended. It has been replaced with explicit logger
declarations.
See #34488
This commit is related to #32517. It allows an "sni_server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. Prior to this commit, this functionality
was only supported for the netty transport. This commit adds this
functionality to the security nio transport.
Added validation for complete information of step details.
Also changed the rendering of explain responses so null strings are not rendered.
Another thing that I changed is the format of the client-side response. I found it difficult to maintain the two subtly-different objects, so I migrated the fields from long to Long (just as it is on the server side).
The trigger engine always created a new schedule data structure when
the watcher indexing listener called an add. However the indexing
listener also called add when the watch status was updated. This means
that upon a watch status update the watch got retriggered, potentially
waiting a defined interval from the watch status update onwards, instead
of waiting from the last run.
This commit only updates the schedule in the trigger engine if it
has actually changed; otherwise the existing schedule will not be
touched. This has two results:
1. If a watch is updated by an execution, the existing interval will not
be touched (meaning the scheduled time will not move forward).
2. If a watch is updated by a user, but the schedule is not changed, it
will not be reset from the update (for example starting to count from 5
minutes again, if the interval was set to 5 minutes).
Furthermore some minor cleanups were applied, making variables final in
the ctor, preventing double creation of variables.
`SIGN` and `RADIANS` were wrongly overriding `mathFunction()`.
Made `mathFunction()` private in `MathFunction` since it
shouldn't be overridden, as it uses the assigned `MathOperation`
to get the function name for painless scripts.
Fixes: #35654
Add special verifier rule to check that the arguments of conditional
functions are of the same or compatible types. This way the user gets
a descriptive error message with line number and column indicating
where the offending argument is.
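For instance, a hypothetical query mixing incompatible argument types (table name invented) would now be rejected up front:
```sql
-- Fails verification with a message pointing at the line and column
-- of the mismatched second argument:
SELECT COALESCE(1, 'two') FROM t;
```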
Closes: #35907
This commit adds a test for correctly handling all the possible
`SamlPrepareAuthenticationRequest` parameter combinations that
we might get from Kibana or a custom web application talking to the
SAML APIs.
We can match the correct SAML realm based either on the realm name
or the ACS URL. If both are included in the request then both need to
match the realm configuration.
This generates a synthesized "id" for each incoming request that is
included in the audit logs (file only).
This id can be used to correlate events for the same request (e.g.
authentication success with access granted).
This request.id is specific to the audit logs and is not used for any
other purpose.
The request.id is consistent across nodes if a single request requires
execution on multiple nodes (e.g. a search across multiple shards).
When assertions are enabled, a Put User action that has no effect (a
noop update) would trigger an assertion failure and shut down the node.
This change accepts "noop" as an update result, and adds more
diagnostics to the assertion failure message.
This commit adds back bundling of all deps of the sql jdbc jar. This was
lost in a refactoring of how the shadow plugin is handled for the entire
elasticsearch project.
This removes the option to run a cluster without enforcing the
cluster-wide shard limit, making strict enforcement the default and only
behavior. The limit can still be adjusted as desired using the cluster
settings API.
Add GREATEST(expr1, expr2, ... exprN) and LEAST(expr1, expr2, ... exprN)
functions which are in the family of CONDITIONAL functions.
Implementation follows PostgreSQL behaviour, so the functions return
`NULL` when all of their arguments evaluate to `NULL`.
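As a sketch of the expected semantics, assuming PostgreSQL-style evaluation as described above:
```sql
SELECT GREATEST(10, 20, 30);   -- 30
SELECT LEAST(NULL, 10, 20);    -- 10, non-null arguments still win
SELECT GREATEST(NULL, NULL);   -- NULL, all arguments are NULL
```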
Renamed `CoalescePipe` and `CoalesceProcessor` to `ConditionalPipe` and
`ConditionalProcessor` respectively, to be able to reuse them for
`Greatest` and `Least` evaluations. To achieve that `ConditionalOperation`
has been added to differentiate between the functionalities at execution
time.
Closes: #35878
Due to some unresolvable type conflict between the expected definition
in JDBC vs ODBC, the driver mode is now passed over so that certain
commands can change their results accordingly (in this case SYS COLUMNS).
Fix #35376
This operator handles nulls in a different way than the normal `=`.
If one of the operands is `null` and the other is not, it returns `false`.
If both operands are `null`, it returns `true`. Therefore, contrary to
`=`, which returns `null` if at least one of the operands is `null`, this one
never returns `null` as a result.
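A minimal sketch of these semantics, assuming the operator is spelled `<=>`:
```sql
SELECT 1 = NULL;        -- null
SELECT 1 <=> NULL;      -- false
SELECT NULL <=> NULL;   -- true
SELECT 1 <=> 1;         -- true
```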
Closes: #35871
Code that operates on top of the engine requires all readers returned to be
unwrapped into ElasticsearchDirectoryReader. The special reader
the FrozenEngine uses wasn't wrapped.
We didn't check before that the ExplainLifecycleRequest was constructed with at
least one index; now that we do, we must also make sure the test's
mutateInstance() method used in equals/hashCode checks doesn't accidentally
create an empty index array.
Closes #35822
When there is no persistent tasks metadata we could hit a null pointer
exception when executing a follower stats request. This is because we
inspect the persistent tasks metadata. Yet, if no tasks have been
registered, this is null (as opposed to empty). We need to avoid
de-referencing the persistent tasks metadata in this case. That is what
this commit does, and we add a test for this situation.
By setting the cron to 2017, we ensure it won't trigger. This makes it
easier to test because we know the job will _only_ be in STARTED,
and we can ignore INDEXING states due to transient triggers.
Closes #35779
Today we have a way to atomically persist global MetaData and
IndexMetaData to disk when new ClusterState is received. All other
ClusterState fields are not persisted.
However, there are other parts of ClusterState that should be
persisted, namely:
- version
- term
- lastCommittedConfiguration
- lastAcceptedConfiguration
- votingTombstones
version is changed frequently, other fields are not. We decided
to group term, lastCommittedConfiguration,
lastAcceptedConfiguration and votingTombstones into
CoordinationMetaData class and make CoordinationMetaData a field
inside MetaData.
MetaData.toXContent and MetaData.fromXContent should take care of
CoordinationMetaData.
version stays as a top level field in ClusterState and will be
persisted as part of Manifest in a follow-up commit.
Also MetaData.isGlobalStateEquals should be extended to include
coordinationMetaData in comparison.
This commit favors exposing getters, such as getTerm directly in
ClusterState to avoid massive code changes.
An example of CoordinationMetaData.toXContent:
```
{
  "term": 1,
  "last_committed_config": [
    "TiIuBcbBtpuXyDDVHXeD",
    "ZIAoVbkjjLPLUuYLaTkw"
  ],
  "last_accepted_config": [
    "OwkXbXZNOZPJqccdFHdz",
    "LouzsGYwmQzpeQMrboZe",
    "fCKGRZdjLTqzXAqPUtGL",
    "pLoxshjpJXwDhbgjfYJy",
    "SjINLwFIlIEFZCbjrSFo",
    "MDkVncJEVyZLJktopWje"
  ]
}
```
Move away from performing eager, fail-fast validation of mismatched
mapping to a lazy evaluation based on the fields actually used in the
query. This allows queries to run on the parts of the indices that
"work" which is not just convenient but also a necessity for large
mappings (like logging) where alignment is hard/impossible to achieve.
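As a hypothetical illustration (index pattern and field names invented), a query that only touches fields mapped consistently across the matched indices now runs even if other, unused fields conflict:
```sql
-- Succeeds as long as client_ip and status agree across "logs-*",
-- regardless of mapping conflicts in fields the query never uses.
SELECT client_ip, status FROM "logs-*" WHERE status >= 500;
```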
Fix #35659
Creates the manage_token cluster privilege and adds it to the
kibana_system role. This is required if Kibana were to use the token
service for its authentication process.
Because kibana_system already has manage_saml this effectively
only adds the privilege to create tokens.
Introduce INTERVAL as a DataType
Add INTERVAL to the grammar which supports the standard SQL declaration
(without precision):
> INTERVAL '1 23:45:01.123456789' DAY TO SECOND
but also number for single unit intervals:
> INTERVAL 1 YEAR
as well as the plurals of the units:
> INTERVAL 2 YEARS
Intervals are internally supported as just another Literal, backed
by java.time.Period and java.time.Duration.
Move JDBC away from JDBCType enum to SQLType interface
Refactor DataType by moving it into server core and adding dedicated (and
much simpler) JDBC driver type
Improve internal JDBC conversion by normalizing on the DataType
Rename JDBC columnInfo to JdbcColumnInfo to differentiate between it and
the SQL ColumnInfo
Fix #29990
This commit removes the parsing code from the PutLicenseResponse server variant, and the toXContent portion from the corresponding client variant.
Relates to #35547
This commit removes the parsing code from the PostStartBasicResponse server variant. It also makes the server response implement StatusToXContent which allows us to save a couple of lines of code in the corresponding REST action.
Relates to #35547
The RestHasPrivilegesAction previously handled its own XContent
generation. This change moves that into HasPrivilegesResponse and
makes the response implement ToXContent.
This allows HasPrivilegesResponseTests to be used to test
compatibility between HLRC and X-Pack internals.
A serialization bug (cluster privs) was also fixed here.
* The port assigned to all loopback interfaces doesn't necessarily have to be the same for ipv4 and ipv6
=> use actual address from profile instead of just port + loopback in test
* Closes #35584
This parameter in the `query_string` query was deprecated in 6.0 and ignored
since then. Its API methods and remaining uses can be removed in the upcoming
major version.
Relates to #35734
This commit adds a rest endpoint for freezing and unfreezing an index.
Among other cleanups, mainly fixing an issue accessing package-private APIs
from a plugin that got caught by integration tests, this change also adds
documentation for frozen indices.
Note: frozen indices are marked as `beta` and available as a basic feature.
Relates to #34352
Currently there is a common NPE in the IndexFollowingIT that does not
indicate an actual test failure. It occurs when a cluster state listener is
called and certain index metadata is not yet available.
This commit checks that the metadata is not null before performing the
logic that depends on the metadata.
PR #35242 formalised support for the password_hash field in the body
of the Put User security API.
Since this field is now validated and tested, it can also be
documented.
The Put User API also supports a "refresh" query parameter that was
not documented. This commit adds it to the docs.
Zen2 is now feature-complete enough to run most ESIntegTestCase tests. The changes in this PR
are as follows:
- ClusterSettingsIT is adapted to not be Zen1 specific anymore (it was using Zen1 settings).
- Some of the integration tests require persistent storage of the cluster state, which is not fully
implemented yet (see #33958). These tests keep running with Zen1 for now but will be switched
over as soon as that is fully implemented.
- Some very few integration tests are not running yet with Zen2 for other reasons, depending on
some of the other open points in #32006.
* ML: Removing result_finalization_window && overlapping_buckets
* Reverting bad method deletions
* Setting to current before backport to try and get a green build
* fixing testBuildAutodetectCommand test
* disabling bwc tests for backport
Fix bug in Analyzer that caused it to report unsupported fields only
when declared in projections. The rule has been extended to all field
declarations.
Fix #35673
RolloverStep previously had a name of "attempt_rollover", which was
inconsistent with all other step names due to its use of an underscore
instead of a dash.
This commit switches from using java util's default timezone method to
using joda. The former can cause problems when the string representation
of the timezone is unknown to joda.
Closes #35518
Removed extending of AbstractComponent and changed logger usage to
explicit declaration. Abstract classes still have logger
declaration using this.getClass() in order to show implementation class
name in its logs.
See #34488
* Deprecate types in count requests.
* Move RestCountAction to the 'search' package.
* Deprecate types in multi search requests.
* Add tests for types deprecation in the _search endpoint.
RolloverAction will now periodically check the rollover conditions using
the Rollover API with the dry_run option as an AsyncWaitStep, then run
the rollover itself by calling the Rollover API with no conditions,
which will always roll over, as an AsyncActionStep. This will resolve
race condition issues in policies using RolloverAction.
Today, the bootstrapping of a Zen2 cluster is driven externally, requiring
something else to wait for discovery to converge and then to inject the initial
configuration. This is hard to use in some situations, such as REST tests.
This change introduces the `ClusterBootstrapService` which brings the bootstrap
retry logic within each node and allows it to be controlled via an (unsafe)
node setting.
* ML: Adding missing datacheck to datafeedjob
* Adding client side and docs
* Making adjustments to validations
* Making values default to on, having more sensible limits
* Intermittent commit, still need to figure out interval
* Adjusting delayed data check interval
* updating docs
* Making parameter Boolean, so it is nullable
* bumping bwc to 7 before backport
* changing to version current
* moving delayed data check config its own object
* Separation of duties for delayed data detection
* fixing checkstyles
* fixing checkstyles
* Adjusting default behavior so that null windows are allowed
* Mentioning the default value
* Fixing comments, syncing up validations
In the event that the target index does not exist when `CopyExecutionStateStep`
executes, this avoids a `NullPointerException` and provides a more helpful error
to the ILM user.
Resolves #35567
Kibana now uses the tasks API to manage automatic reindexing of the
.kibana index during upgrades.
The implementation of the tasks API requires that
1. the user executing the task can create & write to the ".tasks" index
2. the user checking on the status of the task can read (Get) the
relevant document from the ".tasks" index
Response classes in Elasticsearch (and xpack) only need to implement ToXContent, which is needed to print out their output in the REST layer and return the response in json (or other) format. On the other hand, response classes that are added to the high-level REST client need to do the opposite: parse xcontent and create a new object based on that.
This commit removes the parsing code from the XPackInfoResponse server variant, and the toXContent portion from the corresponding client variant. It also removes a client specific test class that looks redundant now that we have a single test class for both classes.
This pull request replaces some blocks of code that must be run once
and that are currently based on AtomicBoolean with the convenient RunOnce
class added in #35489.
The DefaultAuthenticationFailureHandler has a deprecated constructor
that was present to prevent a breaking change to custom realm plugin
authors in 6.x. This commit removes the constructor and its uses.
For some time, the PutUser REST API has supported storing a pre-hashed
password for a user. The change adds validation and tests around that
feature so that it can be documented & officially supported.
It also prevents the request from containing both a "password" and a "password_hash".
This adds a `wait_for_completion` flag which allows the user to block
the Stop API until the task has actually moved to a stopped state,
instead of returning immediately. If the flag is set, a `timeout` parameter
can be specified to determine how long (at max) to block the API
call. If unspecified, the timeout is 30s.
If the timeout is exceeded before the job moves to STOPPED, a
timeout exception is thrown. Note: this is just signifying that the API
call itself timed out. The job will remain in STOPPING and eventually
flip over to STOPPED in the background.
If the user asks the API to block, we move over to the generic
threadpool so that we don't hold up a networking thread.
This change adds a special caching reader that caches all relevant
values for a range query to rewrite correctly in a can_match phase
without actually opening the underlying directory reader. This
allows frozen indices to be filtered with can_match and in-turn
searched with wildcards in an efficient way since it allows us to
exclude shards that won't match based on their date-ranges without
opening their directory readers.
Relates to #34352
Depends on #34357
The NPE would occur if should_trim_field was overridden to
true and any field value was completely blank. This change
defends against this situation.
Fixes #35462
Before, moving to a failed step would only change the step info
to be that of the failed step. This means two things.
1. Async Steps would never be triggered to execute
2. If there are inherent problems with the action definition that can
be fixed with a policy update, these changes were not being reflected
by the new execution info.
Changes now:
1. Async steps are executed after the move to the failed step in cluster state.
2. The lifecycle execution info's phase definition is updated from the current
latest policy definition, even though the index isn't moving to a new phase.
Closes #35397.
Avoid the assertions that check the log files, because that does not work on Windows.
The rest of the test is still useful and should work on Windows CI.
Currently on Windows CI this qa module fails because there is just one test and
that test is ignored if the OS is Windows.
This change adds a high level freeze API that allows marking an
index as frozen and vice versa. Indices must be closed in order to
become frozen and an open but frozen index must be closed to be
defrosted. This change also adds an index.frozen setting to
mark frozen indices and integrates the frozen engine with the
SearchOperationListener that resets and releases the directory
reader after and before search phases.
Relates to #34352
Depends on #34357
Validate the remote cluster license as part of the put auto follow pattern API call,
in addition to the validation that happens when the auto follow coordinator starts
auto following indices in the leader cluster.
Also added a qa module that tests what happens to ccr after downgrading to a basic license.
Existing active follow indices should continue to follow,
but the auto follow feature should not pick up new leader indices.
Add `IsNull` node in parser to simplify expressions so that `<value> IS NULL` is
no longer translated internally to `NOT(<value> IS NOT NULL)`
Replace `IsNotNullProcessor` with `CheckNullProcessor` to encapsulate both
isNull and isNotNull functionality.
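For example, a predicate like the following (table and column invented) is now represented directly by the new `IsNull` node:
```sql
SELECT * FROM t WHERE c IS NULL;
```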
Closes: #34876
Fixes: #35171
* DISCOVERY: Fix RollingUpgradeTests
* Don't manually manage min master nodes if not necessary
* Remove some dead code
* Allow for manually supplying list of seed nodes
* Closes #35178
This documents how to include the search queries in the audit log.
There is a catch: even when enabling `emit_request_body`, which should
output queries included in request bodies, search queries were not output
because, implicitly, no REST layer audit event type was included.
This folk knowledge is herein imprinted.
Many realm tests were written to use separate setting objects for
"global settings" and "realm settings".
Since #30241 there is no distinction between these settings, so these
tests can be cleaned up to use a single Settings object.
Change `nullable()` logic of AND and OR to false since in the Optimizer
we cannot fold to null as we might be handling an expression in the
SELECT clause.
Introduce folding of null for AND and OR expressions in PruneFilter()
since we now know that we are in a HAVING or WHERE clause and we
can fold `null` to `false`.
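A hypothetical pair of queries showing why the two contexts differ (table and column invented):
```sql
-- Filter context: a null condition can safely fold to false,
-- so this matches no rows.
SELECT * FROM t WHERE c > 10 AND NULL;
-- SELECT clause: the expression must stay three-valued,
-- so null cannot be folded away here.
SELECT c > 10 AND NULL FROM t;
```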
Fixes: #35088
Co-authored-by: Costin Leau <costin.leau@gmail.com>
This is related to #34483. It introduces a namespaced setting for
compression that allows users to configure compression on a per remote
cluster basis. The transport.tcp.compress remains as a fallback
setting. If transport.tcp.compress is set to true, then all requests
and responses are compressed. If it is set to false, only requests to
clusters based on the cluster.remote.cluster_name.transport.compress
setting are compressed. However, after this change regardless of any
local settings, responses will be compressed if the request that is
received was compressed.
Today our OS information returned in node stats only returns a
high-level name of the OS (e.g., "Linux"). Yet, for some uses this is
too high-level and knowing at a finer level of granularity the
underlying OS can be useful. This commit extracts the pretty name on
Linux from /etc/os-release. This pretty name usually includes the Linux
vendor and the Linux vendor version number (e.g., Fedora 28).
- Introduces a transport API for bootstrapping a Zen2 cluster
- Introduces a transport API for requesting the set of nodes that a
master-eligible node has discovered and for waiting until this comprises the
expected number of nodes.
- Alters ESIntegTestCase to use these APIs when forming a cluster, rather than
injecting the initial configuration directly.