* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current version does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare-friendly
* Clear the manifest in compare-friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle behaves inconsistently and
the task is not always replaced.
* Include the number of non-conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in the comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the standalone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730, replace the task taking
the task providers into account.
Closes #31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
This creates YAML test "features" that indicate whether the cluster being
tested has xpack installed (`xpack`) or does *not* have xpack
installed (`no_xpack`). It uses those features to centralize skipping
a few tests that fail if xpack is installed.
The plan is to use this in a follow-up to skip docs tests that require
xpack when xpack is not installed. We *plan* to use the declaration
of required license level on the docs page to generate the required
`skip`.
Closes #30933.
TransportAction currently contains two doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
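To make the consolidation concrete, a minimal sketch of the resulting shape (types are from the server module; action bodies are elided):

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.tasks.Task;

// After this change subclasses implement a single doExecute that always
// receives the task, instead of a second overload that silently dropped it.
public abstract class TransportAction<Request extends ActionRequest, Response extends ActionResponse> {
    protected abstract void doExecute(Task task, Request request, ActionListener<Response> listener);
}
```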
This adds an api to allow updating a filter:
POST _xpack/ml/filters/{filter_id}/_update
The request body may have:
- description: setting a new description
- add_items: a list of the items to add
- remove_items: a list of the items to remove
This commit also changes the PUT filter api to
error when the filter_id is already used. As
now there is an api for updating filters, the
put api should only be used to create new ones.
Also, updating a filter results in a notification
message auditing the change for every job that is
using that filter.
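For illustration, a call against the new endpoint might look like the following, using the low-level REST client; the filter id and items are invented for the example:

```java
import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.RestClient;

public class UpdateFilterExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            // Body uses the fields described above: description, add_items, remove_items.
            String body = "{"
                + "\"description\": \"Known safe IP addresses\","
                + "\"add_items\": [\"192.168.0.1\"],"
                + "\"remove_items\": [\"10.0.0.1\"]"
                + "}";
            client.performRequest("POST", "/_xpack/ml/filters/safe_ips/_update",
                Collections.emptyMap(), new NStringEntity(body, ContentType.APPLICATION_JSON));
        }
    }
}
```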
According to RFC 7617, the Basic authentication scheme name
should not be case-sensitive.
Case-insensitive comparisons are also applicable to bearer
tokens, where the Bearer authentication scheme is used, as per
RFC 6750 and RFC 7235.
Some HTTP clients may send authentication scheme names in
different cases, e.g. Basic, basic, BASIC, BEARER, etc., so
the lack of a case-insensitive check is an issue when these
clients try to authenticate with Elasticsearch.
This commit adds case-insensitive checks for Basic and Bearer
authentication schemes.
Closes #31486
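A minimal sketch of such a check (the helper name is invented; the JDK's regionMatches does the case-insensitive prefix comparison without locale pitfalls):

```java
// Matches "Basic ", "basic ", "BASIC ", etc. at the start of the
// Authorization header value; scheme should include the trailing space.
static boolean usesScheme(String authorizationHeader, String scheme) {
    return authorizationHeader != null
        && authorizationHeader.regionMatches(true, 0, scheme, 0, scheme.length());
}
```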
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool now keep it
as a member, set from their own constructor.
Historically in TcpTransport server channels were represented by the
same channel interface as socket channels. This was necessary as
TcpTransport was parameterized by the channel type. This commit
introduces TcpServerChannel and HttpServerChannel classes. Additionally,
it adds the implementations for the various transports. This allows
server channels to have unique functionality and not implement the
methods they do not support (such as send and getRemoteAddress).
Additionally, with the introduction of HttpServerChannel this commit
extracts some of the storing and closing channel work to the abstract
http server transport.
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver now keep it
as a member, set from their own constructor.
The changes made to disable security for trial licenses unless security
is explicitly enabled caused issues when a 6.3 node attempts to join a
cluster that already has a production license installed. The new node
starts off with a trial license and `xpack.security.enabled` is not
set for the node, which causes the security code to skip attaching the
user to the request. The existing cluster has security enabled and the
lack of a user attached to the requests causes the request to be
rejected.
This commit changes the security code to check if the state has been
recovered yet when making the decision on whether or not to attach a
user. If the state has not yet been recovered, the code will attach
the user to the request in case security is enabled on the cluster
being joined.
Closes #31332
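A sketch of the resulting decision, assuming the security code can consult the local cluster state (the helper name is invented):

```java
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.gateway.GatewayService;

// Attach the user if security is enabled locally, or if the state has not
// been recovered yet and the cluster being joined may have it enabled.
static boolean shouldAttachUser(ClusterState state, boolean securityEnabled) {
    boolean stateNotRecovered = state.blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK);
    return securityEnabled || stateNotRecovered;
}
```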
This is a general cleanup of channels and exception handling in http.
This commit introduces a CloseableChannel that is a superclass of
TcpChannel and HttpChannel. This allows us to unify the closing logic
between tcp and http transports. Additionally, the normal http channels
are extracted to the abstract server transport.
Finally, this commit (mostly) unifies the exception handling between nio
and netty4 http server transports.
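A rough sketch of the resulting hierarchy (method names are an assumption based on this description, not the exact interface):

```java
import org.elasticsearch.action.ActionListener;

// Shared closing contract, so tcp and http closing logic can be unified.
interface CloseableChannel {
    void close();
    boolean isOpen();
    void addCloseListener(ActionListener<Void> listener);
}

interface TcpChannel extends CloseableChannel { /* tcp-specific methods */ }
interface HttpChannel extends CloseableChannel { /* http-specific methods */ }
```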
Since #30966, Action no longer has anything but a call to the
GenericAction super constructor. This commit renames GenericAction
into Action, thus eliminating the old Action class. Additionally, this
commit removes the Request generic parameter of the class, since
it was unused.
This commit makes it so that cluster state update tasks always run under the system context, only
restoring the original context when the listener that was provided with the task is called. A notable
exception is the clusterStatePublished(...) callback which will still run under system context,
because it's defined at the executor level rather than the task level, is only called once for the
combined batch of tasks, and can therefore not be uniquely identified with a task / thread context.
Relates #30603
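A hedged sketch of the pattern, using ThreadContext from the server module (the wiring and helper are invented for illustration):

```java
import java.util.function.Supplier;

import org.elasticsearch.common.util.concurrent.ThreadContext;

// Run the update under the system context, then restore the submitter's
// context before invoking its listener.
static void runUnderSystemContext(ThreadContext threadContext, Runnable task, Runnable listener) {
    Supplier<ThreadContext.StoredContext> restoreCaller = threadContext.newRestorableContext(false);
    try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
        threadContext.markAsSystemContext();
        task.run(); // cluster state update task executes under the system context
    }
    try (ThreadContext.StoredContext ignore = restoreCaller.get()) {
        listener.run(); // listener observes the original caller context
    }
}
```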
This is related to #28898. This PR implements pooling of bytes arrays
when reading from the wire in the http server transport. In order to do
this, we must integrate with netty reference counting. The manner in
which this PR implements this is by making Pages in InboundChannelBuffer
reference counted. When we access the underlying page to pass to
netty, we retain the page. When netty releases its bytebuf, it releases
the underlying pages we have passed to it.
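A simplified illustration of the counting (this is not the actual Page class, just the retain/release idea):

```java
import java.util.concurrent.atomic.AtomicInteger;

// The buffer retains a page before handing its memory to netty; netty's
// ByteBuf release hook calls release(). The backing bytes return to the
// pool only once both sides are done with them.
final class Page {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private final Runnable recycler; // returns the bytes to the pool

    Page(Runnable recycler) {
        this.recycler = recycler;
    }

    Page retain() {
        refCount.incrementAndGet();
        return this;
    }

    void release() {
        if (refCount.decrementAndGet() == 0) {
            recycler.run();
        }
    }
}
```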
This pull request removes the relationship between the state
of persistent task (as stored in the cluster state) and the status
of the task (as reported by the Task APIs and used in various
places), which has been confusing for some time (#29608).
In order to do that, a new PersistentTaskState interface is added.
This interface represents the persisted state of a persistent task.
The methods used to update the state of persistent tasks are
renamed: updatePersistentStatus() becomes updatePersistentTaskState()
and now takes a PersistentTaskState as a parameter. The
Task.Status type has been changed to PersistentTaskState in all
places where it makes sense (in persistent task customs in cluster
state and all other methods that deal with the state of an allocated
persistent task).
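A sketch of the new abstraction (this follows the shape described above; treat the exact supertypes as an assumption):

```java
import org.elasticsearch.common.io.stream.NamedWriteable;
import org.elasticsearch.common.xcontent.ToXContentObject;

// The persisted state of a persistent task is now its own type, no longer
// conflated with Task.Status. Callers use updatePersistentTaskState(...)
// with an instance of this interface.
public interface PersistentTaskState extends ToXContentObject, NamedWriteable {
}
```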
This is related to #28898. With the addition of the http nio transport,
we now have two different modules that provide http transports.
Currently most of the http logic lives at the module level. However,
some of this logic can live in server. In particular, some of the
setting of headers, cors, and pipelining. This commit begins moving
in that direction by introducing lower level abstractions (HttpChannel,
HttpRequest, and HttpResponse) that are implemented by the modules. The
higher level rest request and rest channel work can live entirely in
server.
This adds a `description` to ML filters in order
to allow users to describe their filters in a human
readable form which is also editable (filter updates
to be added shortly).
The parser for the Metric config was directly instantiating
the config object, rather than using the builder. That means it was
bypassing the validation logic built into the builder, and would allow
users to create invalid metric configs (like using unsupported metrics).
The job would later blow up and abort due to bad configs, but this isn't
immediately obvious to the user since the PutJob API succeeded.
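A hedged sketch of the fix (MetricConfig.Builder and its setters are assumed names for illustration):

```java
import java.util.List;

// Route parsed values through the builder so its validation runs; direct
// instantiation used to bypass it and let unsupported metrics through.
static MetricConfig parseMetricConfig(String field, List<String> metrics) {
    MetricConfig.Builder builder = new MetricConfig.Builder();
    builder.setField(field);
    builder.setMetrics(metrics);
    return builder.build(); // throws here instead of blowing up the job later
}
```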
This change prevents a datafeed using cross cluster search from starting if the remote cluster
does not have x-pack installed and a sufficient license. The check is made only when starting a
datafeed.
Rules allow users to supply a detector with domain
knowledge that can improve the quality of the results.
The model detects statistically anomalous results but it
has no knowledge of the meaning of the values being modelled.
For example, a detector that performs a population analysis
over IP addresses could benefit from a list of IP addresses
that the user knows to be safe. Then anomalous results for
those IP addresses will not be created and will not affect
the quantiles either.
Another example would be a detector looking for anomalies
in the median value of CPU utilization. A user might want
to inform the detector that any results where the actual
value is less than 5 are not interesting.
This commit introduces a `custom_rules` field to the `Detector`.
A detector may have multiple rules which are combined with `or`.
A rule has 3 fields: `actions`, `scope` and `conditions`.
Actions is a list of what should happen when the rule applies.
The current options include `skip_result` and `skip_model_update`.
The default value for `actions` is the `skip_result` action.
Scope is optional and allows for applying filters on any of the
partition/over/by fields. When not defined, the rule applies to
all series. The `filter_id` needs to be specified to match the id
of the filter to be used. Optionally, the `filter_type` can be specified
as either `include` (default) or `exclude`. When set to `include`
the rule applies to entities that are in the filter. When set to
`exclude` the rule only applies to entities not in the filter.
There may be zero or more conditions. A condition requires `applies_to`,
`operator` and `value` to be specified. The `applies_to` value can be
either `actual`, `typical` or `diff_from_typical` and it specifies
the numerical value to which the condition applies. The `operator`
(`lt`, `lte`, `gt`, `gte`) and `value` complete the definition.
Conditions are combined with `and` and allow specifying numerical
conditions for when a rule applies.
A rule must either have a scope or one or more conditions. Finally,
a rule with scope and conditions applies when all of them apply.
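Pulling the pieces together, an illustrative detector matching the CPU example above (the field values are invented):

```java
// Skip results for the median CPU detector when the actual value is
// below 5; conditions within a rule are combined with `and`.
String detectorWithRule =
      "{"
    + "  \"function\": \"median\","
    + "  \"field_name\": \"cpu_utilization\","
    + "  \"custom_rules\": [{"
    + "    \"actions\": [\"skip_result\"],"
    + "    \"conditions\": [{"
    + "      \"applies_to\": \"actual\","
    + "      \"operator\": \"lt\","
    + "      \"value\": 5.0"
    + "    }]"
    + "  }]"
    + "}";
```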
* Support RequestedAuthnContext
This implements limited support for RequestedAuthnContext by:
- Allowing SP administrators to define a list of authnContextClassRef
to be included in the RequestedAuthnContext of a SAML Authn Request
- Verifying that the authnContext in the incoming SAML Assertion's
AuthnStatement contains one of the requested authnContextClassRef
- Only EXACT comparison is supported as the semantics of validating
the incoming authnContextClassRef are deployment dependent and
require pre-established rules for MINIMUM, MAXIMUM and BETTER
Also adds necessary AuthnStatement validation as indicated by [1] and
[2]
[1] https://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf
3.4.1.4, line 2250-2253
[2] https://kantarainitiative.github.io/SAMLprofiles/saml2int.html
[SDP-IDP10]
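A minimal sketch of the EXACT comparison (the helper name is invented):

```java
import java.util.List;

// The class ref in the incoming AuthnStatement must match one of the
// configured values verbatim; an empty configuration accepts anything.
static boolean authnContextAcceptable(List<String> requestedClassRefs, String assertionClassRef) {
    return requestedClassRefs.isEmpty() || requestedClassRefs.contains(assertionClassRef);
}
```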
Trying to post a new watch without any body currently results in a
NullPointerException. This change fixes that by validating that
POST and PUT requests always have a body.
Closes #30057
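A sketch of the validation (the helper and exception choice are assumptions):

```java
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.rest.RestRequest;

// Reject watch PUT/POST requests without a body up front instead of
// failing later with a NullPointerException.
static void validateHasBody(RestRequest request) {
    if (request.hasContent() == false) {
        throw new ElasticsearchParseException("request body is required");
    }
}
```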
This commit upgrades us to Netty 4.1.25. This upgrade is more
challenging than past upgrades, all because of a new object cleaner
thread that they have added. This thread requires an additional security
permission (set context class loader, needed to avoid leaks in certain
scenarios). Additionally, there is not a clean way to shutdown this
thread which means that the thread can fail thread leak control during
tests. As such, we have to filter this thread from thread leak control.
* Remove DocumentFieldMappers#simpleMatchToFullName, as it is duplicative of MapperService#simpleMatchToIndexNames.
* Rename MapperService#simpleMatchToIndexNames -> simpleMatchToFullName for consistency.
* Simplify EsIntegTestCase#assertConcreteMappingsOnAll to accept concrete fields instead of wildcard patterns.
The native realm's usage stats were previously pulled from the cache,
which only contains the number of users that had authenticated in the
past 20 minutes. This commit changes this so that we pull the current
value from the security index by executing a search request. In order
to support this, the usage stats for realms is now asynchronous so that
we do not block while waiting on the search to complete.
We should not allow the user to configure index patterns that also match
the index which stores the rollup data.
For example, it is quite natural for a user to specify `metricbeat-*`
as the index pattern, and then store the rollups in `metricbeat-rolled`.
This will start throwing errors as soon as the rollup index is created
because the indexer will try to search it.
Note: this does not prevent the user from matching against existing
rollup indices. That should be prevented by the field-level validation
during job creation.
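A sketch of the guard, using the wildcard matcher from the server module:

```java
import org.elasticsearch.common.regex.Regex;

// Fail job validation if the configured source pattern would match the
// index that stores the rollups, e.g. ("metricbeat-*", "metricbeat-rolled").
static void validateIndexPattern(String indexPattern, String rollupIndex) {
    if (Regex.simpleMatch(indexPattern, rollupIndex)) {
        throw new IllegalArgumentException("Rollup index [" + rollupIndex
            + "] must not match the index pattern [" + indexPattern + "]");
    }
}
```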
ObjectParser should throw XContentParseExceptions, not IAE. A dedicated parsing
exception can include the place where the error occurred.
Closes #30605
Extends ActionRequestValidationException with a rollup-specific version
to make it easier to handle mapping validation issues on the client
side.
The type will now be `rollup_action_request_validation_exception`
instead of `action_request_validation_exception`
The majority of Responses inheriting from AcknowledgedResponse implement
the readFrom and writeTo serialization methods in the same way. Moving this
as a default into AcknowledgedResponse and letting the few exceptions that
need a slightly different implementation handle this themselves saves a lot
of duplication.
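A minimal sketch of the shared default (field and method shapes follow this description, not the exact class):

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Most acknowledged responses serialize nothing but the acknowledged flag,
// so the base class can provide this once.
public class AcknowledgedResponseSketch {
    protected boolean acknowledged;

    public void readFrom(StreamInput in) throws IOException {
        acknowledged = in.readBoolean();
    }

    public void writeTo(StreamOutput out) throws IOException {
        out.writeBoolean(acknowledged);
    }
}
```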
With #31020 we introduced the ability for transport clients to indicate what features they support
in order to make sure we don't serialize objects to them that they don't support. This PR adapts the
serialization logic of persistent tasks to be aware of those features and not serialize tasks that
aren't supported.
Also, a version check is added for the future where we may add new task implementations and
need to be able to indicate they shouldn't be serialized both to nodes and clients.
As the implementation relies on the interface of `PersistentTaskParams`, these are no longer
optional. That's acceptable as all current implementations have them and we plan to make
`PersistentTaskParams` more central in the future.
Relates to #30731
This commit introduces the ability for a client to communicate to the
server features that it can support and for these features to be used in
influencing the decisions that the server makes when communicating with
the client. To this end we carry the features from the client to the
underlying stream as we carry the version of the client today. This
enables us to enhance the logic where we make protocol decisions on the
basis of the version on the stream to also make protocol decisions on
the basis of the features on the stream. With such functionality, the
client can communicate to the server if it is a transport client, or if
it has, for example, X-Pack installed. This enables us to support
rolling upgrades from the OSS distribution to the default distribution
without breaking client connectivity as we can now elect to serialize
customs in the cluster state depending on whether or not the client
reports to us using the feature capabilities that it can under these
customs. This means that we would avoid sending a client pieces of the
cluster state that it can not understand. However, we want to take care
and always send the full cluster state during node-to-node communication
as otherwise we would end up with different understanding of what is in
the cluster state across nodes depending on which features they reported
to have. This is why when deciding whether or not to write out a custom
we always send the custom if the client is not a transport client and
otherwise do not send the custom if the client is a transport client that
does not report to have the feature required by the custom.
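The decision boils down to something like the following sketch (inputs are abstracted away from the actual stream API):

```java
import java.util.Set;

// Node-to-node traffic always gets the full cluster state; a transport
// client only receives a custom if it advertised the required feature.
static boolean shouldWriteCustom(boolean clientIsTransportClient,
                                 Set<String> clientFeatures,
                                 String requiredFeature) {
    if (clientIsTransportClient == false) {
        return true;
    }
    return clientFeatures.contains(requiredFeature);
}
```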
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
* Retain the expiryDate for trial licenses
While updating the license signature to the new license spec, keep
the trial license expiration date equal to that of the existing license.
Resolves #30882
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
ML has dedicated APIs for datafeeds and jobs yet base test classes and
some tests were relying on the cluster state for this state. This commit
removes this usage in favor of using the dedicated endpoints.
Limits the scope of the runtime dependency on
BouncyCastle so that it can eventually be removed.
* Splits functionality related to reading and generating certificates
and keys into two utility classes so that reading certificates and
keys doesn't require BouncyCastle.
* Implements a class for parsing PEM-encoded key material (which also
adds support for reading PKCS8 encoded encrypted private keys).
* Removes BouncyCastle dependency for all of our test suites (except
for the tests that explicitly test certificate generation) by using
pre-generated keys/certificates/keystores.
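For illustration only: the JDK can already read PEM-encoded certificates without BouncyCastle, which is the kind of functionality the reading-side utility class keeps dependency-free:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.cert.Certificate;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.util.Collection;

// CertificateFactory understands Base64/PEM-encoded X.509 certificates,
// so no third-party provider is needed for the reading path.
static Collection<? extends Certificate> readCertificates(Path path)
        throws IOException, CertificateException {
    CertificateFactory factory = CertificateFactory.getInstance("X.509");
    try (InputStream in = Files.newInputStream(path)) {
        return factory.generateCertificates(in);
    }
}
```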
This is a bug that was identified by the kibana team. Currently on a
get-license call we do not serialize the hard-coded expiration for basic
licenses. However, the kibana team calls the x-pack info route which
still does serialize the expiration date. This commit removes that
serialization in the rest response.
The .watcher-history-* template is currently using a plugin-custom index setting xpack.watcher.template.version,
which prevents this template from being installed in a mixed OSS / X-Pack cluster, ultimately
leading to the situation where an X-Pack node is constantly spamming an OSS master with (failed)
template updates. Other X-Pack templates (e.g. security-index-template or security_audit_log)
achieve the same versioning functionality by using a custom _meta field in the mapping instead.
This commit switches the .watcher-history-* template to use the _meta field instead.
Persistent tasks was moved from X-Pack to core in #28455.
However, registration of the named writables and named
X-content was left in X-Pack.
This change moves the registration of the named writables
and named X-content into core. Additionally, the persistent
task actions are no longer registered in the X-Pack client
plugin, as they are already registered in ActionModule.
This change adds a simple header to the transport client
that is present on the server's thread context and ensures
we can detect if a transport client talks to the server in a
specific request. This change also adds a header for xpack
to detect if the client has xpack installed.
Enables a rolling restart from the OSS distribution to the x-pack based distribution by preventing
x-pack code from installing custom metadata into the cluster state until all nodes are capable of
deserializing this metadata.