We can leverage the composite agg's new `missing_bucket` feature on
terms groupings. This means the aggregation criteria used in the indexer
will now return null buckets for missing keys.
Because all buckets are now returned (even if a key is null),
we can guarantee correct doc counts with
"combined" jobs (where a job rolls up multiple schemas). This was
previously impossible since composite would ignore documents that
didn't have _all_ the keys, meaning non-overlapping schemas would
cause composite to return no buckets.
Note: the `date_histogram` grouping does not use `missing_bucket`, since a
timestamp is always required.
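As a rough sketch of what this looks like when building the composite sources (the field names here are illustrative, not taken from the change itself):
```java
import java.util.Arrays;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.DateHistogramValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;

CompositeAggregationBuilder rollupComposite() {
    // Terms source: documents missing the "node" field now land in a null-keyed
    // bucket instead of being silently dropped.
    TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("node")
        .field("node")
        .missingBucket(true);
    // Date histogram source: no missing_bucket here, a timestamp is always required.
    DateHistogramValuesSourceBuilder dateHisto = new DateHistogramValuesSourceBuilder("timestamp")
        .field("timestamp")
        .dateHistogramInterval(DateHistogramInterval.HOUR);
    return new CompositeAggregationBuilder("rollup", Arrays.asList(terms, dateHisto));
}
```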
The docs have been adjusted to recommend a single, combined job. They
also reference the previous issue to help users who are upgrading
(rather than just deleting the affected sections).
Historically we have loaded SSL objects (such as SSLContext,
SSLIOSessionStrategy) by passing in the SSL settings, constructing a
new SSL configuration from those settings and then looking for a
cached object that matches those settings.
The primary issue with this approach is that it requires a fully
configured Settings object to be available any time the SSL context
needs to be loaded. If the Settings include SecureSettings (such as
passwords for keys or keystores), which are only readable at node
startup, that is no longer true and the cached SSL object cannot be
loaded at runtime.
This commit introduces an alternative approach of naming every cached
ssl configuration, so that it is possible to load the SSL context for
a named configuration (such as "xpack.http.ssl"). This means that the
calling code does not need to have ongoing access to the secure
settings that were used to load the configuration.
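A rough sketch of what calling code can now look like (the lookup method name on `SSLService` is an assumption for illustration, not necessarily the exact API introduced here):
```java
import javax.net.ssl.SSLContext;
import org.elasticsearch.xpack.core.ssl.SSLService;

// Hypothetical usage: resolve the SSL context for the named configuration
// "xpack.http.ssl" without re-reading (possibly secure) settings at runtime.
SSLContext httpSslContext(SSLService sslService) {
    return sslService.sslContext("xpack.http.ssl"); // assumed lookup-by-name method
}
```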
This change also allows monitoring exporters to use SSL passwords
from secure settings; however, an exporter that uses a secure SSL setting
(e.g. truststore.secure_password) may not have its SSL settings updated
dynamically (this is prevented by a settings validator).
Exporters without secure settings can continue to be defined and updated
dynamically.
This is related to #27260. It adds the SecurityNioTransport to the
security plugin, adds support for IP filtering, and randomly uses the
nio transport in security integration tests.
The test failure in #31916 revealed that updating
rules on a job was modifying the detectors list
in-place. That meant the old cluster state and the
updated cluster state had no difference and thus the
change was not propagated to non-master nodes.
This commit fixes that and also reviews all of the ML
metadata in order to ensure immutability.
Closes #31916
Currently Role.Builder keeps a reference to the FieldPermissionsCache that is
passed into its constructors. This seems to be unused except for passing it on
to convertFromIndicesPrivileges() in the second ctor itself; we don't need
to keep the internal reference for that, so it can be removed.
Relates to #31876
* Adding Beats x-pack plugin + index templates
* Adding built-in roles for Beats central management
* Fixing typo
* Refactoring: extract common code into method
* More refactoring for more code reuse
* Use a single index for Beats management
* Rename "fragment" to "block"
* Adding configuration block type
* Expand kibana_system role to include Beats management index privileges
* Fixing syntax
* Adding test
* Adding assertion for reserved role
* Fixing privileges
* Updating template
* Removing beats plugin
* Fixing tests
* Fixing role variable name
* Fixing assertions
* Switching to preferred syntax for boolean false checks
* Making class final
* Making variables final
* Updating Basic license message to be more accurate
Originally I put the X-Pack info object into the top level rest client
object. I did that because we thought we'd like to squash `xpack` from
the name of the X-Pack APIs now that it is part of the default
distribution. We still kind of want to do that, but at least for now we
feel like it is better to keep the high level rest client aligned with
the other language clients like C# and Python. This shifts the X-Pack
info API to align with its json spec file.
Relates to #31870
With https://github.com/elastic/beats/pull/7075 Beats introduces state reporting for X-Pack Monitoring. The data sent up to Elasticsearch ends up stored in the following format.
```
"beats_state": {
"timestamp": "2018-07-05T07:21:03.581Z",
"state": {
"module": {
"count": 1,
"names": [
"http"
]
}
},
"beat": {
"uuid": "594039b5-6353-4d78-9bad-778ecc0fe83f",
"type": "metricbeat",
"version": "7.0.0-alpha1",
"name": "ruflin",
"host": "ruflin"
}
}
```
This PR adds the new fields to the template.
This is the first x-pack API we're adding to the high level REST client
so there is a lot to talk about here!
= Open source
The *client* for these APIs is open source. We're taking the previously
Elastic licensed files used for the `Request` and `Response` objects and
relicensing them under the Apache 2 license.
The implementation of these features is staying under the Elastic
license. This lines up with how the rest of the Elasticsearch language
clients work.
= Location of the new files
We're moving all of the `Request` and `Response` objects that we're
relicensing to the `x-pack/protocol` directory. We're adding a copy of
the Apache 2 license to the root of the `x-pack/protocol` directory to
line up with the language in the root `LICENSE.txt` file. All files in
this directory will have the Apache 2 license header as well. We don't
want there to be any confusion. Even though the files are under the
`x-pack` directory, they are Apache 2 licensed.
We chose this particular directory layout because it keeps the X-Pack
code together and makes it easier to reason about.
= Location of the API in the REST client
We've been following the layout of the rest-api-spec files for other
APIs and we plan to do this for the X-Pack APIs with one exception:
we're dropping the `xpack` from the name of most of the APIs. So
`xpack.graph.explore` will become `graph().explore()` and
`xpack.license.get` will become `license().get()`.
`xpack.info` and `xpack.usage` are special here though because they
don't belong to any proper category. For now I'm just calling
`xpack.info` `xPackInfo()` and intend to call usage `xPackUsage` though
I'm not convinced that this is the final name for them. But it does get
us started.
= Jars, jars everywhere!
This change makes the `xpack:protocol` project a `compile` scoped
dependency of the `x-pack:plugin:core` and `client:rest-high-level`
projects. I intend to keep it a compile scoped dependency of
`x-pack:plugin:core` but I intend to bundle the contents of the protocol
jar into the `client:rest-high-level` jar in a follow up. This change
has grown large enough at this point.
In that followup I'll address javadoc issues as well.
= Breaking-Java
This breaks the transport client by moving a few classes around. We've
traditionally been OK with doing this to the transport client.
It seems that Java 11 tightened some validations with regard to
time formats. The random instance creator was setting an odd
time format on the data description, which is invalid when run
with Java 11. This commit changes it to a valid format.
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format `
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix Java version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* fix painless array tests
closes #31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on the compile java version, not the runtime
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
Today TransportService is tightly coupled with Transport since it
requires an instance of TransportService in order to receive responses
and send requests. This is mainly due to the Request and Response handlers
being maintained in TransportService but also because of the lack of a proper
callback interface.
This change moves request handler registry and response handler registration into
Transport and adds all necessary methods to `TransportConnectionListener` in order
to remove the `TransportService` dependency from `Transport`.
Transport now accepts one or more `TransportConnectionListener` instances that are
executed sequentially in a blocking fashion.
Add a hard limit to the number of items
a filter may have. This serves to protect
against excessive overhead due to filters
taking too much memory or lookups becoming
too expensive.
This change adds stats about forecasts to the job stats API as well as _xpack/usage. The following
information is collected:
_xpack/ml/anomaly_detectors/{jobid|_all}/_stats:
- total number of forecasts
- memory statistics (mean/min/max)
- runtime statistics
- record statistics
- counts by status
_xpack/usage
- collected by job status as well as overall (_all):
- total number of forecasts
- number of jobs that have at least 1 forecast
- memory, runtime, record statistics
- counts by status
Fixes #31395
* Default resolveFromHash to Hasher.NOOP
This changes the default behavior when resolving the hashing
algorithm from unrecognised hash strings, which was introduced in
#31234
A hash string that doesn't start with an algorithm identifier can
either be a malformed/corrupted hash or a plaintext password stored
while Hasher.NOOP was in use (despite the warnings against it).
Do not make assumptions about which of the two is true for such
strings and default to Hasher.NOOP. Hash verification will subsequently
fail for malformed hashes.
Finally, do not log the potentially malformed hash as it can very
well be a plaintext password.
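A simplified sketch of the resolution behaviour described above (the prefixes and the method are illustrative, not the actual Hasher implementation):
```java
// Unrecognised prefixes fall back to a no-op hasher; verification later fails
// for genuinely malformed hashes instead of guessing here.
static String resolveAlgorithm(String hash) {
    if (hash.startsWith("$2")) {          // bcrypt hashes carry a "$2..." identifier
        return "bcrypt";
    }
    if (hash.startsWith("{PBKDF2}")) {    // assumed prefix for stored pbkdf2 hashes
        return "pbkdf2";
    }
    return "noop";                        // could be plaintext or corrupted: do not guess, do not log it
}
```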
Resolves #31697
Reverts 58cf95a06f1defd31b16c831708ca32a5b445f98
Support multiple system store types
When falling back to using the system keystore and - most usually -
truststore, do not assume that it will be a JKS store, but deduce
its type from {@code KeyStore#getDefaultKeyStoreType}. This allows
the use of any store type the Security Provider supports by setting
the keystore.type java security property.
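As a minimal sketch of the fallback, the default store type can be taken from the JRE (which honours the `keystore.type` security property) instead of hard-coding JKS:
```java
import java.security.KeyStore;
import java.security.KeyStoreException;

KeyStore emptySystemStore() throws KeyStoreException {
    // e.g. "jks" on older JDKs, "pkcs12" where the default has been changed
    String type = KeyStore.getDefaultType();
    return KeyStore.getInstance(type);
}
```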
Make the password hashing algorithm/cost configurable for the
stored passwords of users in the realms to which this applies
(native, reserved). Replaces the predefined choice of bcrypt with
cost factor 10.
This also introduces PBKDF2 with configurable cost
(number of iterations) as an algorithm option for password hashing,
both for storing passwords and for the user cache.
Password hash validation algorithm selection takes the stored hash
prefix into consideration, and only a specific set of algorithm and
cost factor options for bcrypt and pbkdf2 are whitelisted and can be
selected in the relevant setting.
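A hedged example of how this might be configured (the setting name and pbkdf2 value format follow the description above; treat them as an illustration rather than authoritative documentation):
```java
import org.elasticsearch.common.settings.Settings;

// Store user passwords with PBKDF2 at 10000 iterations instead of the old fixed bcrypt cost 10.
Settings hashingSettings = Settings.builder()
    .put("xpack.security.authc.password_hashing.algorithm", "pbkdf2_10000")
    .build();
```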
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include number of non conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in the comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730 replace the task taking
into account the task providres.
Closes#31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
This creates YAML test "features" that indicate if the cluster being
tested has xpack installed (`xpack`) or if it does *not* have xpack
installed (`no_xpack`). It uses those features to centralize skipping
a few tests that fail if xpack is installed.
The plan is to use this in a followup to skip docs tests that require
xpack when xpack is not installed. We *plan* to use the declaration
of required license level on the docs page to generate the required
`skip`.
Closes #30933.
TransportAction currently contains 2 doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
This adds an api to allow updating a filter:
POST _xpack/ml/filters/{filter_id}/_update
The request body may have:
- description: setting a new description
- add_items: a list of the items to add
- remove_items: a list of the items to remove
This commit also changes the PUT filter API to
error when the filter_id is already in use. Now that
there is an API for updating filters, the PUT API
should only be used to create new ones.
Also, updating a filter results in a notification
message auditing the change for every job that is
using that filter.
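A sketch of what an update call could look like, assuming the low-level REST client's `Request` API (the filter id and items are made-up examples; the endpoint and body fields come from the description above):
```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

Response updateFilter(RestClient client) throws IOException {
    Request request = new Request("POST", "/_xpack/ml/filters/safe_domains/_update");
    request.setJsonEntity(
        "{"
        + "\"description\": \"Known safe domains\","
        + "\"add_items\": [\"*.elastic.co\"],"
        + "\"remove_items\": [\"www.example.com\"]"
        + "}");
    return client.performRequest(request);
}
```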
According to RFC 7617, the Basic authentication scheme name
should not be case sensitive.
Case-insensitive comparisons are also applicable for bearer
tokens, where the Bearer authentication scheme is used as per
RFC 6750 and RFC 7235.
Some HTTP clients may send authentication scheme names in
different case types, e.g. Basic, basic, BASIC, BEARER etc.,
so the lack of a case-insensitive check is an issue when these
clients try to authenticate with Elasticsearch.
This commit adds case-insensitive checks for the Basic and Bearer
authentication schemes.
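A simplified sketch of the kind of check this implies (illustrative only, not the actual implementation):
```java
// Compare the scheme portion of an Authorization header value without regard to
// case, per RFC 7617 (Basic) and RFC 6750 (Bearer).
static boolean usesScheme(String authorizationHeader, String scheme) {
    String prefix = scheme + " ";
    return authorizationHeader != null
        && authorizationHeader.regionMatches(true, 0, prefix, 0, prefix.length());
}
// usesScheme("basic dXNlcjpwYXNz", "Basic")       -> true
// usesScheme("BEARER some.token.value", "Bearer") -> true
```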
Closes #31486
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool now keep it
in a member field, set from their own constructor.
Historically in TcpTransport server channels were represented by the
same channel interface as socket channels. This was necessary as
TcpTransport was parameterized by the channel type. This commit
introduces TcpServerChannel and HttpServerChannel classes. Additionally,
it adds the implementations for the various transports. This allows
server channels to have unique functionality and not implement the
methods they do not support (such as send and getRemoteAddress).
Additionally, with the introduction of HttpServerChannel this commit
extracts some of the storing and closing channel work to the abstract
http server transport.
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver now keep it in
a member field, set from their own constructor.
The changes made to disable security for trial licenses unless security
is explicitly enabled caused issues when a 6.3 node attempts to join a
cluster that already has a production license installed. The new node
starts off with a trial license and `xpack.security.enabled` is not
set for the node, which causes the security code to skip attaching the
user to the request. The existing cluster has security enabled and the
lack of a user attached to the requests causes the request to be
rejected.
This commit changes the security code to check if the state has been
recovered yet when making the decision on whether or not to attach a
user. If the state has not yet been recovered, the code will attach
the user to the request in case security is enabled on the cluster
being joined.
Closes #31332
This is a general cleanup of channels and exception handling in http.
This commit introduces a CloseableChannel that is a superclass of
TcpChannel and HttpChannel. This allows us to unify the closing logic
between tcp and http transports. Additionally, the normal http channels
are extracted to the abstract server transport.
Finally, this commit (mostly) unifies the exception handling between nio
and netty4 http server transports.
Since #30966, Action no longer has anything but a call to the
GenericAction super constructor. This commit renames GenericAction
into Action, thus eliminating the old Action class. Additionally, this
commit removes the Request generic parameter of the class, since
it was unused.
This commit makes it so that cluster state update tasks always run under the system context, only
restoring the original context when the listener that was provided with the task is called. A notable
exception is the clusterStatePublished(...) callback which will still run under system context,
because it's defined on the executor-level, and not the task level, and only called once for the
combined batch of tasks and can therefore not be uniquely identified with a task / thread context.
Relates #30603
This is related to #28898. This PR implements pooling of byte arrays
when reading from the wire in the http server transport. In order to do
this, we must integrate with netty reference counting. The manner in
which this PR implements this is by making Pages in InboundChannelBuffer
reference counted. When we access the underlying page to pass to
netty, we retain the page. When netty releases its bytebuf, it releases
the underlying pages we have passed to it.
This pull request removes the relationship between the state
of a persistent task (as stored in the cluster state) and the status
of the task (as reported by the Task APIs and used in various
places), which has been confusing for some time (#29608).
In order to do that, a new PersistentTaskState interface is added.
This interface represents the persisted state of a persistent task.
The methods used to update the state of persistent tasks are
renamed: updatePersistentStatus() becomes updatePersistentTaskState()
and now takes a PersistentTaskState as a parameter. The
Task.Status type has been changed to PersistentTaskState in all
places where it makes sense (in persistent task customs in the cluster
state and all other methods that deal with the state of an allocated
persistent task).
This is related to #28898. With the addition of the http nio transport,
we now have two different modules that provide http transports.
Currently most of the http logic lives at the module level. However,
some of this logic can live in server. In particular, some of the
setting of headers, cors, and pipelining. This commit begins moving
in that direction by introducing lower level abstractions (HttpChannel,
HttpRequest, and HttpResponse) that are implemented by the modules. The
higher level rest request and rest channel work can live entirely in
server.
This adds a `description` to ML filters in order
to allow users to describe their filters in a human
readable form which is also editable (filter updates
to be added shortly).
The parser for the Metric config was directly instantiating
the config object, rather than using the builder. That means it was
bypassing the validation logic built into the builder, and would allow
users to create invalid metric configs (like using unsupported metrics).
The job would later blow up and abort due to bad configs, but this isn't
immediately obvious to the user since the PutJob API succeeded.
This change prevents a datafeed using cross cluster search from starting if the remote cluster
does not have x-pack installed and a sufficient license. The check is made only when starting a
datafeed.
Rules allow users to supply a detector with domain
knowledge that can improve the quality of the results.
The model detects statistically anomalous results but it
has no knowledge of the meaning of the values being modelled.
For example, a detector that performs a population analysis
over IP addresses could benefit from a list of IP addresses
that the user knows to be safe. Then anomalous results for
those IP addresses will not be created and will not affect
the quantiles either.
Another example would be a detector looking for anomalies
in the median value of CPU utilization. A user might want
to inform the detector that any results where the actual
value is less than 5 are not interesting.
This commit introduces a `custom_rules` field to the `Detector`.
A detector may have multiple rules which are combined with `or`.
A rule has 3 fields: `actions`, `scope` and `conditions`.
Actions is a list of what should happen when the rule applies.
The current options include `skip_result` and `skip_model_update`.
The default value for `actions` is the `skip_result` action.
Scope is optional and allows for applying filters on any of the
partition/over/by fields. When not defined, the rule applies to
all series. The `filter_id` needs to be specified to match the id
of the filter to be used. Optionally, the `filter_type` can be specified
as either `include` (default) or `exclude`. When set to `include`
the rule applies to entities that are in the filter. When set to
`exclude` the rule only applies to entities not in the filter.
There may be zero or more conditions. A condition requires `applies_to`,
`operator` and `value` to be specified. The `applies_to` value can be
either `actual`, `typical` or `diff_from_typical` and it specifies
the numerical value to which the condition applies. The `operator`
(`lt`, `lte`, `gt`, `gte`) and `value` complete the definition.
Conditions are combined with `and` and allow specifying numerical
conditions for when a rule applies.
A rule must either have a scope or one or more conditions. Finally,
a rule with scope and conditions applies when all of them apply.
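A hedged example of a detector carrying the CPU rule from above, assuming the low-level REST client's `Request` API (the job id and surrounding job configuration are made up; only the `custom_rules` fields come from this change):
```java
import org.elasticsearch.client.Request;

Request cpuJobWithRule() {
    Request request = new Request("PUT", "/_xpack/ml/anomaly_detectors/cpu_median");
    request.setJsonEntity(
        "{"
        + "\"analysis_config\": {"
        +   "\"bucket_span\": \"15m\","
        +   "\"detectors\": [{"
        +     "\"function\": \"median\", \"field_name\": \"cpu_utilization\","
        // skip results when the actual value is below 5
        +     "\"custom_rules\": [{"
        +       "\"actions\": [\"skip_result\"],"
        +       "\"conditions\": [{\"applies_to\": \"actual\", \"operator\": \"lt\", \"value\": 5}]"
        +     "}]"
        +   "}]"
        + "},"
        + "\"data_description\": {\"time_field\": \"timestamp\"}"
        + "}");
    return request;
}
```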
* Support RequestedAuthnContext
This implements limited support for RequestedAuthnContext by:
- Allowing SP administrators to define a list of authnContextClassRef
to be included in the RequestedAuthnContext of a SAML Authn Request
- Verifying that the authnContext in the incoming SAML Assertion's
AuthnStatement contains one of the requested authnContextClassRef
- Only EXACT comparison is supported as the semantics of validating
the incoming authnContextClassRef are deployment dependent and
require pre-established rules for MINIMUM, MAXIMUM and BETTER
Also adds necessary AuthnStatement validation as indicated by [1] and
[2]
[1] https://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf
3.4.1.4, line 2250-2253
[2] https://kantarainitiative.github.io/SAMLprofiles/saml2int.html
[SDP-IDP10]
Trying to post a new watch without any body currently results in a
NullPointerException. This change fixes that by validating that
POST and PUT requests always have a body.
Closes #30057