- Add the authentication realm and lookup realm name and type in the response for the _authenticate API.
- The authentication realm is also set as the lookup realm (instead of setting the lookup realm to null or empty) when no lookup realm is used.
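For illustration, a response carrying the new realm information might look like the following (the user details and realm names are made up; when no lookup realm is involved, `lookup_realm` simply mirrors `authentication_realm`):

```
{
  "username": "jdoe",
  "roles": ["superuser"],
  "full_name": "John Doe",
  "email": "jdoe@example.com",
  "metadata": {},
  "enabled": true,
  "authentication_realm": {
    "name": "native1",
    "type": "native"
  },
  "lookup_realm": {
    "name": "native1",
    "type": "native"
  }
}
```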
* [Rollup] Add more diagnostic stats to job
To help debug future performance issues, this adds the
min/max/avg/count/total latencies (in milliseconds) for the search
and bulk phases. This latency is the total service time, including
transfer between nodes, not just the `took` time.
It also adds the count of search/bulk failures encountered during
runtime. This information is also in the log, but a runtime counter
will help expose problems faster; see the illustrative stats sketch below.
* review cleanup
* Remove dead ParseFields
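For illustration, the new counters surface in the job's stats object along these lines (the numbers and the exact field names here are illustrative, shown alongside pre-existing counters such as `pages_processed`):

```
"stats": {
  "pages_processed": 100,
  "documents_processed": 10000,
  "rollups_indexed": 500,
  "trigger_count": 10,
  "search_failures": 0,
  "search_time_in_ms": 542,
  "search_total": 100,
  "bulk_failures": 0,
  "bulk_time_in_ms": 858,
  "bulk_total": 100
}
```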
`ScriptDocValues#getValues` was added for backwards compatibility but is no
longer needed. Scripts using the syntax `doc['foo'].values` when
`doc['foo']` is a list should use `doc['foo']` instead.
Closes #22919
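For example, a `script_fields` script that previously used `doc['foo'].values` can return `doc['foo']` directly (a minimal sketch; the index and field names are made up):

```
GET /my-index/_search
{
  "script_fields": {
    "foo_values": {
      "script": {
        "lang": "painless",
        "source": "doc['foo']"
      }
    }
  }
}
```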
This changes the exporter code -- most notably the `http` exporter --
to use async operations throughout the resource management and bulk
initialization code (the bulk indexing of monitoring documents was
already async).
As part of this change, one semi-core aspect of the
`HttpResource` class is different: it will no longer block all concurrent
calls to `HttpResource::checkAndPublishIfDirty` until the first call
completes.
Now, any parallel attempts to check the resources will be skipped until
the first call completes (success or failure). While this is a technical
change, it has very little practical impact, because the existing behavior
was either a quick success (after which every blocked request was
processed) or each request timed out and failed anyway, thus being
effectively skipped (and a burden on the system).
The Explain API failed when no step times were set. The assumption was
that these are always set. Tests passed, which led me to believe this
was true. However, there is a time when shrunk indices have their step
phase/action/step details set but with no time information (in the
CopyExecutionStateStep), and the Explain API fails for these.
This commit removes the use of AbstractComponent in xpack where it was
still being extended. It has been replaced with explicit logger
declarations.
See #34488
This PR adds deprecation warnings to the relevant `Rest*Action` classes, plus tests in `Rest*ActionTests`. No updates to REST tests, the Java HLRC, or documentation were necessary, since they didn't make use of types.
MultiSearchRequests issued through `_msearch` now validate all keys
in the metadata section. Previously, unknown keys were ignored;
now an exception is thrown.
Closes #35869
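For illustration, the metadata line of each request pair is now strictly validated; a body like the following (names made up) is rejected because of the unknown key, whereas known keys such as `index`, `search_type`, `preference`, and `routing` are still accepted:

```
GET /_msearch
{"index": "test", "unknown_key": "value"}
{"query": {"match_all": {}}}
```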
This commit removes the dedicated `setSoLinger` method. This simplifies
the `TcpChannel` interface. This method has very little effect, as
SO_LINGER is not set prior to the channels being closed in the abstract
transport test case. We will still set SO_LINGER on the
`MockNioTransport`; however, we can do this manually.
This commit is related to #32517. It allows an "sni_server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. Prior to this commit, this functionality
was only supported for the netty transport. This commit adds this
functionality to the security nio transport.
This pull request makes the `RestGetSourceAction` return a `ResourceNotFoundException` with a proper JSON response when source or document itself is missing (see issue #33384).
Below is a sample JSON output:
```
{
  "error": {
    "root_cause": [
      {
        "type": "resource_not_found_exception",
        "reason": "Source not found [index1]/[_doc]/[1]"
      }
    ],
    "type": "resource_not_found_exception",
    "reason": "Source not found [index1]/[_doc]/[1]"
  },
  "status": 404
}
```
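For reference, a request of the following shape produces the response above when the stored `_source` is missing (using the index and id from the sample):

```
GET /index1/_doc/1/_source
```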
Added validation for complete information of step details.
Also changed the rendering of explain responses so null strings are not rendered.
Another thing I changed is the format of the client-side response. I found it difficult to maintain the two subtly-different objects, so I migrated the fields from long to Long (just as they are on the server side).
The trigger engine always created a new schedule data structure when
the watcher indexing listener called add. However, the indexing
listener also called add when the watch status was updated. This meant
that upon a watch status update the watch got retriggered, potentially
waiting a defined interval from the watch status update onwards instead
of waiting from the last run.
This commit only updates the schedule in the trigger engine if it
has actually changed; otherwise the existing schedule is not
touched. This has two results:
1. If a watch is updated by an execution, the existing interval will not
be touched (meaning the scheduled time will not move forward).
2. If a watch is updated by a user, but the schedule is not changed, it
will not be reset by the update (for example, starting to count from 5
minutes again if the interval was set to 5 minutes; see the sketch below).
Furthermore, some minor cleanups were applied: making variables final in
the ctor and preventing double creation of variables.
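As context for that second point, consider a watch with an interval trigger like the following (a minimal sketch with a made-up watch id); updating such a watch without changing the `trigger` block no longer resets the five-minute countdown:

```
PUT _watcher/watch/my_watch
{
  "trigger": {
    "schedule": {
      "interval": "5m"
    }
  },
  "input": { "none": {} },
  "actions": {}
}
```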
In #35259 we switched the default number of VMs to fork for unit tests to
the number of physical CPU cores. But because we could only get an accurate
count on machines with a normal `/proc` filesystem, macOS machines did not
pick up the new default. Given that macOS is a huge portion of developer
machines, we'd like to get the right default there. This does that.
It also moves the default-finding process from happening once per testing
task to happening once at startup. This seems like a good choice in general,
but a very good choice for macOS because we have to run a command to get
the count.
`SIGN` and `RADIANS` were wrongly overriding `mathFunction()`.
Converted `mathFunction()` to private in `MathFunction`, since it
shouldn't be overridden, as it uses the assigned `MathOperation`
to get the function name for Painless scripts.
Fixes #35654
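For illustration, the fixed translation matters whenever these functions end up in a Painless script, for example in a `WHERE` clause (a sketch; the endpoint path varies by version, and the index and field names are made up):

```
POST /_sql?format=txt
{
  "query": "SELECT val FROM metrics WHERE SIGN(RADIANS(val)) = 1"
}
```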
If `dpkg` fails, try to look for who has `/var/lib/dpkg/lock` open. If
it exists and is open, then return a failure with information about who
has the file open. This should help us debug #33762.
Closes #34309
Today GatewayMetaState is capable of atomically storing MetaData to
disk. We've also moved fields that are needed to be persisted in Zen2
from ClusterState to ClusterState.MetaData.CoordinationMetaData.
This commit implements the PersistedState interface.
version and currentTerm are persisted as part of the Manifest.
GatewayMetaState now implements both ClusterStateApplier and
PersistedState interfaces. We started with two descendants,
Zen1GatewayMetaState and Zen2GatewayMetaState, but it turned
out not to be easy to glue them together.
GatewayMetaState now constructs previousClusterState (including
MetaData) and previousManifest inside the constructor so that all
PersistedState methods are usable as soon as the GatewayMetaState
instance is constructed. Also, loadMetaData is renamed to
getMetaData, because it just returns
previousClusterState.metaData().
Sadly, we don't have access to localNode (obtained from
TransportService) in the constructor, so getLastAcceptedState
should only be called after the setLocalNode method is invoked.
Currently, when deciding whether to write IndexMetaData to disk,
we compare the current IndexMetaData version with the received
IndexMetaData version. This is not safe in Zen2 if the term has changed.
So updateClusterState now accepts an additional "incremental write"
method parameter. When it's set to false, we always write
IndexMetaData to disk.
Things that are not covered by GatewayMetaStateTests are covered
by GatewayMetaStatePersistedStateTests.
This commit also adds an option to use GatewayMetaState instead of
InMemoryPersistedState in TestZenDiscovery. However, by default
InMemoryPersistedState is used, and only one test in PersistedStateIT
uses GatewayMetaState. In order to use it for other tests, proper
state recovery should be implemented.
Add a special verifier rule to check that the arguments of conditional
functions are of the same or compatible types. This way the user gets
a descriptive error message, with line number and column indicating
where the offending argument is.
Closes #35907
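For illustration, a conditional function mixing incompatible argument types is now rejected during verification with an error pointing at the offending argument's line and column (a sketch; the exact error wording is not reproduced here):

```
POST /_sql?format=txt
{
  "query": "SELECT COALESCE(1, 'two') FROM test"
}
```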
This commit adds documentation for the authorization_realms
setting for the Kerberos realm and also corrects a typo in
the existing documentation.
Co-authored-by: @A-Hall
This commit adds a test for correctly handling all the possible
`SamlPrepareAuthenticationRequest` parameter combinations that
we might get from Kibana or a custom web application talking to the
SAML APIs.
We can match the correct SAML realm based either on the realm name
or the ACS URL. If both are included in the request then both need to
match the realm configuration.
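For illustration, the prepare request can carry either identifier, or both, in which case both must match the realm configuration (the realm name, ACS URL, and endpoint path are made up for this sketch and vary by deployment and version):

```
POST /_security/saml/prepare
{
  "realm": "saml1",
  "acs": "https://kibana.example.com/api/security/v1/saml"
}
```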
This generates a synthesized "id" for each incoming request that is
included in the audit logs (file only).
This id can be used to correlate events for the same request (e.g.
authentication success with access granted).
The request.id is specific to the audit logs and is not used for any
other purpose.
The request.id is consistent across nodes if a single request requires
execution on multiple nodes (e.g. a search across multiple shards).
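For illustration, two correlated audit events might then share the same id along these lines (all field names and values here are illustrative, not the exact audit log schema):

```
{"event.action": "authentication_success", "user.name": "jdoe", "request.id": "WzL_kb6VSvOhAq0twPvHOQ"}
{"event.action": "access_granted", "user.name": "jdoe", "request.id": "WzL_kb6VSvOhAq0twPvHOQ"}
```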
When assertions are enabled, a Put User action that has no effect (a
noop update) would trigger an assertion failure and shut down the node.
This change accepts "noop" as an update result and adds more
diagnostics to the assertion failure message.
Update PutUserRequest to support password_hash (see #35242).
This also updates the documentation to bring it in line with our more
recent approach to HLRC docs.
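For illustration, the REST equivalent lets a client supply a pre-computed hash instead of a plaintext password (the user name, roles, and truncated hash are made up; the hash must match the configured password hashing algorithm, bcrypt by default):

```
PUT /_security/user/example_user
{
  "password_hash": "$2a$10$dlX6Hc...",
  "roles": ["monitoring_user"]
}
```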
Watcher still exposes some dates as joda DateTime objects. This commit
adds back joda to the Painless whitelist so they can still be accessed.
Closes #35913
This commit adds back bundling of all deps of the sql jdbc jar. This was
lost in a refactoring of how the shadow plugin is handled for the entire
elasticsearch project.
This removes the option to run a cluster without enforcing the
cluster-wide shard limit, making strict enforcement the default and only
behavior. The limit can still be adjusted as desired using the cluster
settings API.
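For illustration, the limit itself stays adjustable through the cluster settings API via the `cluster.max_shards_per_node` setting (the value shown is just an example):

```
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1000
  }
}
```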
This commit adds support for the `exists` query in the sorted execution mode
of the `composite` aggregation. We'll execute the aggregation from the sorted
points and use early termination if the main query is an `exists` query over the
first source of the `composite` aggregation.
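For illustration, a search shaped like the following can now early-terminate, because the `exists` query targets the field used by the first `composite` source (the index and field names are made up):

```
GET /my-index/_search
{
  "size": 0,
  "query": {
    "exists": { "field": "timestamp" }
  },
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          { "ts": { "terms": { "field": "timestamp" } } }
        ]
      }
    }
  }
}
```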
Add GREATEST(expr1, expr2, ... exprN) and LEAST(expr1, expr2, ... exprN)
functions, which are in the family of CONDITIONAL functions.
The implementation follows PostgreSQL behaviour, so the functions return
`NULL` only when all of their arguments evaluate to `NULL`.
Renamed `CoalescePipe` and `CoalesceProcessor` to `ConditionalPipe` and
`ConditionalProcessor` respectively, to be able to reuse them for
`Greatest` and `Least` evaluations. To achieve that, `ConditionalOperation`
has been added to differentiate between the functionalities at execution
time.
Closes #35878
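For illustration, following the PostgreSQL semantics described above, `LEAST(null, 2, 4)` evaluates to 2 because `NULL` arguments are ignored unless all arguments are `NULL` (a sketch; the endpoint path varies by version):

```
POST /_sql?format=txt
{
  "query": "SELECT GREATEST(1, 5, 3) AS g, LEAST(null, 2, 4) AS l"
}
```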