Removed extension of AbstractComponent and changed logger usage to
explicit declaration. Abstract classes still declare a logger
using this.getClass() so that the implementation class
name shows up in their logs.
See #34488
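For illustration, a minimal sketch of the two patterns (class names hypothetical):
```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// concrete classes declare their own logger explicitly
class SomeService {
    private static final Logger logger = LogManager.getLogger(SomeService.class);
}

// abstract classes keep a this.getClass()-based logger so the
// implementation class name shows up in their logs
abstract class SomeAbstractService {
    protected final Logger logger = LogManager.getLogger(this.getClass());
}
```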
* ML: Adding missing data check to datafeed job
* Adding client side and docs
* Making adjustments to validations
* Making values default to on, having more sensible limits
* Intermittent commit, still need to figure out interval
* Adjusting delayed data check interval
* updating docs
* Making parameter Boolean, so it is nullable
* bumping bwc to 7 before backport
* changing to version current
* moving delayed data check config into its own object
* Separation of duties for delayed data detection
* fixing checkstyles
* fixing checkstyles
* Adjusting default behavior so that null windows are allowed
* Mentioning the default value
* Fixing comments, syncing up validations
This pull request replaces some blocks of code that must be run once
and that are currently based on AtomicBoolean with the convenient
RunOnce class added in #35489.
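A rough sketch of the pattern, assuming `RunOnce` lives in its usual
`org.elasticsearch.common.util.concurrent` package and wraps a delegate `Runnable`:
```
import org.elasticsearch.common.util.concurrent.RunOnce;

public class RunOnceExample {
    public static void main(String[] args) {
        // before: an AtomicBoolean guard such as
        //   if (executed.compareAndSet(false, true)) { release(); }
        RunOnce release = new RunOnce(() -> System.out.println("released"));
        release.run(); // prints "released"
        release.run(); // no-op on every subsequent call
    }
}
```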
The NPE would occur if should_trim_field was overridden to
true and any field value was completely blank. This change
defends against this situation.
Fixes #35462
This commit uses the index settings version so that a follower can
replicate index settings changes as needed from the leader.
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
* Adding rollup support for datafeeds
* Fixing tests and adjusting formatting
* minor formatting change
* fixing some syntax and removing redundancies
* Refactoring and fixing failing test
* Refactoring, adding paranoid null check
* Moving rollup into the aggregation package
* making AggregationToJsonProcessor package private again
* Addressing test failure
* Fixing validations, chunking
* Addressing failing test
* rolling back RollupJobCaps changes
* Adding comment and cleaning up test
* Addressing review comments and test failures
* Moving builder logic into separate methods
* Addressing PR comments, adding test for rollup permissions
* Fixing test failure
* Adding rollup priv check on datafeed put
* Handling missing index when getting caps
* Fixing unused import
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now unused `settings` instances and method arguments.
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
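Steps 1 and 2 amount to edits like this hedged before/after (names invented):
```
import org.elasticsearch.client.Client;

class SomeService { // no longer extends AbstractComponent
    private final Client client;

    // before:
    //   SomeService(Settings settings, Client client) {
    //       super(settings);          // step 1: dropped
    //       this.client = client;
    //   }
    SomeService(Client client) {       // step 2: unused settings argument dropped
        this.client = client;
    }
}
```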
Today when ESIntegTestCase starts some nodes it writes out the unicast hosts
files each time a node starts its transport service. This does mean that a
number of nodes can start and perform their first pinging round without any
unicast hosts which, if the timing is unlucky and a lot of nodes are all
started at the same time, can lead to a split brain as in #35052.
Prior to #33554 this was unlikely to happen since the MockUncasedHostsProvider
would always have yielded the existing hosts, so the timing would have to have
been implausibly unlucky. Since #33554, however, it's more likely because the
race occurs between the start of the first round of pinging and the writing of
the unicast hosts file. It is realistic that new nodes will be configured with
the existing nodes from startup, so this change reinstates that behaviour.
Closes #35052.
Drops the `Settings` member from `AbstractComponent`, moving it from the
base class on to the classes that use it. For the most part this is a
mechanical change that doesn't drop `Settings` accesses. The one
exception to this is naming threads where it switches from an invocation
that passes `Settings` and extracts the node name to one that explicitly
passes the node name.
This change doesn't drop the `Settings` argument from
`AbstractComponent`'s ctor because this change is big enough as is.
We'll do that in a follow up change.
Sometimes the cluster forming here will split-brain when it grows up to 5
nodes. This could be a timing issue or could be something going wrong in
discovery, so this asks for more logs. Relates #35052
The file structure finder endpoint can find the NDJSON
(newline-delimited JSON) file format, but called it
`json`. This change renames the `format` for this file
structure to `ndjson`, which is more precise and will
hopefully avoid confusion.
`AbstractComponent` is trouble because its name implies that
*everything* should extend from it. It *is* useful, but maybe too
broadly useful. The things it offers access to, the `Settings` instance
for the entire server and a logger, are nice to have around, but not
really needed *everywhere*. The `Settings` instance especially adds a
fair bit of ceremony to testing without any value.
This removes the `nodeName` method from `AbstractComponent` so it is
more clear where we actually need the node name.
We currently have two different native processes:
autodetect & normalizer. There are plans for introducing
a new process. All these share many things in common.
This commit refactors the processes to extend an
`AbstractNativeProcess` class that encapsulates those
commonalities with the purpose of reusing the code
for new processes in the future.
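A simplified sketch of the resulting hierarchy (method set illustrative, not the real interface):
```
// shared plumbing (pipes, lifecycle, state persistence) moves into one base class
abstract class AbstractNativeProcess {
    abstract String getName();

    void start() {
        // startup logic common to all native processes
    }
}

class AutodetectProcess extends AbstractNativeProcess {
    @Override
    String getName() {
        return "autodetect";
    }
}

class NormalizerProcess extends AbstractNativeProcess {
    @Override
    String getName() {
        return "normalizer";
    }
}
```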
This change ensures the `message` field is always included
in the `field_stats` for the semi-structured text log file
structure. Previously it was not, as it will almost
certainly contain all distinct values. However, for
consistency in the UI it's useful to include it.
After discussing on the team's FixItFriday, we concluded that
static final instance variables that are mutable should be lowercased.
Historically, DeprecationLogger was uppercased more frequently than lowercased.
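For example, under the agreed convention (class name hypothetical; constructor per the 6.x `DeprecationLogger`):
```
import org.apache.logging.log4j.LogManager;
import org.elasticsearch.common.logging.DeprecationLogger;

class SomeClass {
    // a true constant stays uppercase
    private static final int MAX_RETRIES = 3;

    // a static final instance with mutable state is now lowercased
    private static final DeprecationLogger deprecationLogger =
            new DeprecationLogger(LogManager.getLogger(SomeClass.class));
}
```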
With this change, we apply the common test config automatically to all
newly created tasks instead of opting in specifically.
For plugin authors using the plugin externally this means that the
configuration will be applied to their RandomizedTestingTasks as well.
The purpose of the task is to simplify setup and make it easier to
change projects that use the `test` task but actually run integration
tests to use a task called `integTest` for clarity, but also because
we may want to configure and run them differently.
E.g. using different levels of concurrency.
#33708 introduced a strict deprecation mode that makes a REST request
fail if there is a warning header in the response returned by
Elasticsearch (usually a deprecation message signaling that a feature
or a field has been deprecated).
This change adds the strict deprecation mode into the REST integration
tests, and makes the tests fail if a deprecated feature is used. Also
any test using a deprecated feature has been modified to pass the build.
The YAML integration tests already analyzed HTTP warnings so they do
not use this mode, keeping their "expected vs actual" behavior.
All of the tests in PainlessDomainSplitIT have an awaitsfix, which
causes the build to fail since no tests are run. This adds an empty
test to get the build going again.
Relates #34683
Relates #32966
In some of our X-Pack REST tests we have to wait for pending tasks to
complete. We are now needing this functionality in ESRestTestCase for
the docs tests where we run against X-Pack features. This commit moves
the helper method that we have in X-Pack to ESRestTestCase, and removes
duplicate logic from waiting for rollup tasks to complete.
The setting that reduces the disk space requirement
for the forecasting integration tests was accidentally
removed in #31757 when files were moved around. This
change simply adds back the setting that existed before
that.
This commit moves the definition of domainSplit into java and exposes it
as a painless whitelist extension. The method also no longer needs
params, and a version which ignores params is added and deprecated.
The ingest pipeline that is produced is very simple. It
contains a grok processor if the format is semi-structured
text, a date processor if the format contains a timestamp,
and a remove processor if required to remove the interim
timestamp field parsed out of semi-structured text.
Eventually the UI should offer the option to customize the
pipeline with additional processors to perform other data
preparation steps before ingesting data to an index.
In ccb9ab5717 we changed how we deal with time
fields to support the `DateTime`-format fields added in 6.0, but dropped
support for pre-6.x `Long`-format fields. This change reinstates this support
for cases where pre-6.x data is made available to ML (e.g. in a mixed-version
CCS setup or after an upgrade).
This changes the delete job API by adding
the choice to delete a job asynchronously.
The commit adds a `wait_for_completion` parameter
to the delete job request. When set to `false`,
the action returns immediately and the response
contains the task id.
This also changes the handling of subsequent
delete requests for a job that is already being
deleted. It now uses the task framework to check
if the job is being deleted instead of the cluster
state. This is beneficial as it will keep working
once the job configs are moved out of the
cluster state and into an index. Also, force delete
requests that are waiting for the job to be deleted
will not proceed with the deletion if the first task
fails. This will prevent overloading the cluster. Instead,
the failure is communicated better via notifications
so that the user may retry.
Finally, this makes the `deleting` property of the job
visible (also it was renamed from `deleted`). This allows
a client to render a deleting job differently.
Closes #32836
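With the low-level REST client the new parameter might be used like this (job name made up):
```
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class DeleteJobAsyncExample {
    static Response deleteJob(RestClient client) throws Exception {
        Request request = new Request("DELETE", "/_xpack/ml/anomaly_detectors/my-job");
        request.addParameter("wait_for_completion", "false"); // return immediately
        return client.performRequest(request);                // body contains the task id
    }
}
```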
This change fixes a potential deadlock problem in the unit
test introduced in #34117.
It also removes a piece of debug code and corrects a docs
formatting problem that were both added in that same PR.
This can be used to restrict the amount of CPU a single
structure finder request can use.
The timeout is not implemented precisely, so requests
may run for slightly longer than the timeout before
aborting.
The default is 25 seconds, which is a little below
Kibana's default timeout of 30 seconds for calls to
Elasticsearch APIs.
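A hypothetical request lowering the cap below the 25 second default:
```
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class FindStructureWithTimeout {
    static Response find(RestClient client, String sampleFile) throws Exception {
        Request request = new Request("POST", "/_xpack/ml/find_file_structure");
        request.addParameter("timeout", "10s"); // abort (approximately) after 10 seconds
        request.setJsonEntity(sampleFile);      // sample file (NDJSON here) as the body
        return client.performRequest(request);
    }
}
```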
This change cleans up "unused variable" warnings. There are several cases where we
most likely want to suppress the warnings (especially in the client documentation test
where the snippets contain many unused variables). In a lot of cases the unused
variables can just be deleted though.
[ML] Modify thresholds for normalization triggers
The (arbitrary) threshold factors used to judge if scores have
changed significantly enough to trigger a look-back renormalization have
been changed to values that reduce the frequency of such
renormalizations.
Added a clause to treat changes in scores as a 'big change' if it would
result in a change of severity reported in the UI.
Also altered the clause affecting small scores so that a change should
be considered big if scores have changed by at least 1.5.
Relates https://github.com/elastic/machine-learning-qa/issues/263
Previously the timestamp_formats field in the response
from the find_file_structure endpoint contained Joda
timestamp formats. This change makes that clear by
renaming the field to joda_timestamp_formats, and also
adds a java_timestamp_formats field containing the
equivalent Java time format strings.
* Make certain ML node settings dynamic (#33565)
* Changing to pull in updating settings and pass to constructor
* adding note about only newly opened jobs getting updated value
Previously numeric values in the field_stats created by the
find_file_structure endpoint were always output with a
decimal point. This looked unfriendly and unnatural for
fields that clearly store integer values. This change
converts integer values to type Integer before output in
the file structure field stats.
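The conversion could look roughly like this (helper name hypothetical):
```
class FieldStatsFormatting {
    // render integral values without a trailing ".0" in field_stats
    static Object toDisplayValue(double value) {
        if (Double.isFinite(value) && value == Math.floor(value)) {
            return (int) value; // e.g. 3.0 -> 3 (sketch; ignores values beyond int range)
        }
        return value;           // e.g. 3.5 stays 3.5
    }
}
```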
This change adds the OneStatementPerLineCheck to our checkstyle precommit
checks. This rule restricts the number of statements per line to one. The
reasoning behind this is that it is very difficult to read multiple statements on
one line. People seem to mostly use it in short lambdas and switch statements in
our code base, but just going through the changes already uncovered some actual
problems in randomization in test code, so I think it's worth it.
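What the rule flags, for example:
```
class OneStatementPerLineExample {
    void example() {
        int a = 1; int b = 2; // violation: two statements on one line
        int c = 1;            // fine
        int d = 2;            // fine
    }
}
```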
The job deletion logic was scattered around a few places:
the transport action, the job manager and the deletion task.
Overloading the task with deletion logic also meant extra
dependencies in the core package which should be unnecessary.
This commit consolidates all this logic into the transport action
and replaces the deletion task with a plain one that need not be
aware of deletion logic.
We currently special-case SynonymFilterFactory and SynonymGraphFilterFactory, which need to
know their predecessors in the analysis chain in order to correctly analyze their synonym lists. This
special-casing doesn't work with Referring filter factories, such as the Multiplexer or Conditional
filters. We also have a number of filters (eg the Multiplexer) that will break synonyms when they
appear before them in a chain, because they produce multiple tokens at the same position.
This commit adds two methods to the TokenFilterFactory interface.
* `getChainAwareTokenFilterFactory()` allows a filter factory to rewrite itself against its preceding
filter chain, or to resolve references to other filters. It replaces `ReferringFilterFactory` and
`CustomAnalyzerProvider.checkAndApplySynonymFilter`, and by default returns `this`.
* `getSynonymFilter()` defines whether or not a filter should be applied when building a synonym
list `Analyzer`. By default it returns `true`.
Fixes #33609
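A simplified sketch of the two new default methods (parameter lists abridged relative to the real signatures):
```
import java.util.List;

interface TokenFilterFactorySketch {
    // rewrite this factory against the preceding filter chain, or resolve
    // references to other filters; by default there is nothing to rewrite
    default TokenFilterFactorySketch getChainAwareTokenFilterFactory(
            List<TokenFilterFactorySketch> previousFilters) {
        return this;
    }

    // whether this filter should be applied when building a synonym list Analyzer
    default boolean getSynonymFilter() {
        return true;
    }
}
```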
This change modifies the file structure detection functionality
such that some of the decisions can be overridden with user
supplied values.
The fields that can be overridden are:
- charset
- format
- has_header_row
- column_names
- delimiter
- quote
- should_trim_fields
- grok_pattern
- timestamp_field
- timestamp_format
If an override makes finding the file structure impossible then
the endpoint will return an exception.
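For instance, a caller might pin the format and delimiter while letting everything else be detected (values illustrative):
```
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class FindStructureWithOverrides {
    static Response find(RestClient client, String sampleFile) throws Exception {
        Request request = new Request("POST", "/_xpack/ml/find_file_structure");
        request.addParameter("format", "delimited"); // override the detected format
        request.addParameter("delimiter", "|");      // override the detected delimiter
        request.setJsonEntity(sampleFile);           // sample file as the request body
        return client.performRequest(request);
    }
}
```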
This change addresses some issues regarding thread safety around
updates and method calls on the XPackLicenseState object. There exists
a possibility that there could be a concurrent update to the
XPackLicenseState when there is a scheduled check to see if the license
is expired and a cluster state update. In order to address this, the
update method now has a synchronized block where member variables are
updated. Each method that reads these variables is now also
synchronized.
Along with the above change, there was a consistency issue around
security calls to the license state. The majority of security checks
make two calls to the license state, which could result in incorrect
behavior due to the checks being made against different license states.
The majority of this behavior was introduced for 6.3 with the inclusion
of x-pack in the default distribution. In order to resolve the majority
of these cases, the `isSecurityEnabled` method is no longer public and
the logic is also included in individual methods about security such as
`isAuthAllowed`. There were a few cases where this did not remove
multiple calls on the license state, so a new method has been added
which creates a copy of the current license state that will not change.
Callers can use this copy of the license state to make decisions based
on a consistent view of the license state.
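In outline the pattern is (fields and method names condensed from the description):
```
class LicenseStateSketch {
    enum OperationMode { BASIC, GOLD, PLATINUM, TRIAL }

    private OperationMode mode; // guarded by this
    private boolean active;     // guarded by this

    synchronized void update(OperationMode mode, boolean active) {
        this.mode = mode;
        this.active = active;
    }

    synchronized boolean isAuthAllowed() {
        return active; // each read sees a consistent mode/active pair
    }

    // callers making several checks take a frozen copy for a consistent view
    synchronized LicenseStateSketch copyCurrentLicenseState() {
        LicenseStateSketch copy = new LicenseStateSketch();
        copy.update(mode, active);
        return copy;
    }
}
```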
This change tightens up the meaning of the "input_fields" field
in the file structure finder output. Previously it was permitted
but not calculated for JSON and XML files. Following this change
the field is called "column_names" and is only permitted for
delimited files.
Additionally the way the column names are set for headerless
delimited files is refactored to encapsulate the way they're
named to one line of the code rather than having the same
logic in two places.
When requesting job stats for `_all`, all ES tasks are accepted,
resulting in loads of cluster traffic and a memory overhead.
This commit correctly filters out non-ML job tasks.
Closes #33515
This commit ensures that we bootstrap a new history_uuid when force
allocating a stale primary. A stale primary should never be the source
of an operation-based recovery to another shard which exists before the
forced-allocation.
Closes #26712
This endpoint accepts an arbitrary file in the request body and
attempts to determine the structure. If successful it also
proposes mappings that could be used when indexing the file's
contents, and calculates simple statistics for each of the fields
that are useful in the data preparation step prior to configuring
machine learning jobs.
This change collapses all metrics aggregations classes into a single package `org.elasticsearch.aggregations.metrics`.
It also restricts the visibility of some classes (aggregators and factories) that should not be used outside of the package.
Relates #22868
Many files supplied to the upcoming ML data preparation
functionality will not be "log" files. For example,
CSV files are generally not "log" files. Therefore it
makes sense to rename library that determines the
structure of these files.
Although "file structure" could be considered too broad,
as the library currently only works with a few text
formats, in the future it may be extended to work with
more formats.
The log structure endpoint will return these in addition to
pure structure information so that it can be used to drive
pre-import data visualizer functionality.
The statistics for every field are count, cardinality
(distinct count) and top hits (most common values). Extra
statistics are calculated if the field is numeric: min, max,
mean and median.
This is not changing the behaviour: when the sort field was set
to `influencer_score` the secondary sort would be used, and that
gave `record_score` the highest priority.
1. The TOMCAT_DATESTAMP format needs to be checked before
TIMESTAMP_ISO8601, otherwise TIMESTAMP_ISO8601 will
match the start of the Tomcat datestamp.
2. Exclude more characters before and after numbers. For
example, in 1.2.3 we don't want to match 1.2 as a float.
* master:
Mute test watcher usage stats output
[Rollup] Fix FullClusterRestart test
Adjust soft-deletes version after backport into 6.5
completely drop `index.shard.check_on_startup: fix` for 7.0 (#33194)
Fix AwaitsFix issue number
Mute SmokeTestWatcherWithSecurityIT tests
drop `index.shard.check_on_startup: fix` (#32279)
[DOCS] Moves ml folder from x-pack/docs to docs (#33248)
[DOCS] Move rollup APIs to docs (#31450)
[DOCS] Rename X-Pack Commands section (#33005)
TEST: Disable soft-deletes in ParentChildTestCase
Fixes SecurityIntegTestCase so it always adds at least one alias (#33296)
Fix pom for build-tools (#33300)
Lazy evaluate java9home (#33301)
SQL: test coverage for JdbcResultSet (#32813)
Work around to be able to generate eclipse projects (#33295)
Highlight that index_phrases only works if no slop is used (#33303)
Different handling for security specific errors in the CLI. Fix for https://github.com/elastic/elasticsearch/issues/33230 (#33255)
[ML] Refactor delimited file structure detection (#33233)
SQL: Support multi-index format as table identifier (#33278)
MINOR: Remove Dead Code from PathTrie (#33280)
Enable forbiddenapis server java9 (#33245)
1. Use the term "delimited" rather than "separated values"
2. Use a single factory class with arguments to specify the
delimiter and identification constraints
This change makes it easier to add support for other
delimiter characters.
* master:
Painless: Add Bindings (#33042)
Update version after client credentials backport
Fix forbidden apis on FIPS (#33202)
Remote 6.x transport BWC Layer for `_shrink` (#33236)
Test fix - Graph HLRC tests needed another field adding to randomisation exception list
HLRC: Add ML Get Records API (#33085)
[ML] Fix character set finder bug with unencodable charsets (#33234)
TESTS: Fix overly long lines (#33240)
Test fix - Graph HLRC test was missing field name to be excluded from randomisation logic
Remove unsupported group_shard_failures parameter (#33208)
Update BucketUtils#suggestShardSideQueueSize signature (#33210)
Parse PEM Key files leniently (#33173)
INGEST: Add Pipeline Processor (#32473)
Core: Add java time xcontent serializers (#33120)
Consider multi release jars when running third party audit (#33206)
Update MSI documentation (#31950)
HLRC: create base timed request class (#33216)
[DOCS] Fixes command page titles
HLRC: Move ML protocol classes into client ml package (#33203)
Scroll queries asking for rescore are considered invalid (#32918)
Painless: Fix Semicolon Regression (#33212)
ingest: minor - update test to include dissect (#33211)
Switch remaining LLREST usage to new style Requests (#33171)
HLREST: add reindex API (#32679)
Some character sets cannot be encoded and this was tripping
up the binary data check in the ML log structure character
set finder.
The fix is to assume that if ICU4J identifies that some bytes
correspond to a character set that cannot be encoded and those
bytes contain zeroes then the data is binary rather than text.
Fixes #33227
* master:
Add proxy support to RemoteClusterConnection (#33062)
TEST: Skip assertSeqNos for closed shards (#33130)
TEST: resync operation on replica should acquire shard permit (#33103)
Switch remaining x-pack tests to new style Requests (#33108)
Switch remaining tests to new style Requests (#33109)
Switch remaining ml tests to new style Requests (#33107)
Build: Line up IDE detection logic
Security index expands to a single replica (#33131)
HLRC: request/response homogeneity and JavaDoc improvements (#33133)
Checkstyle!
[Test] Fix sporadic failure in MembershipActionTests
Revert "Do NOT allow termvectors on nested fields (#32728)"
[Rollup] Move toAggCap() methods out of rollup config objects (#32583)
Fix race condition in scheduler engine test
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin/ml/qa/native-multi-node-tests`,
`x-pack/plugin/ml/qa/single-node-tests` projects to use the new
versions.
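The change is mechanical; a hedged before/after:
```
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class NewStyleRequests {
    static Response clusterHealth(RestClient client) throws Exception {
        // old style, deprecated in #30315:
        //   client.performRequest("GET", "/_cluster/health");
        // new Request-object style from #29623:
        Request request = new Request("GET", "/_cluster/health");
        request.addParameter("wait_for_status", "yellow");
        return client.performRequest(request);
    }
}
```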
* es/master: (62 commits)
[DOCS] Add docs for Application Privileges (#32635)
Add versions 5.6.12 and 6.4.1
Do NOT allow termvectors on nested fields (#32728)
[Rollup] Return empty response when aggs are missing (#32796)
[TEST] Add some ACL yaml tests for Rollup (#33035)
Move non duplicated actions back into xpack core (#32952)
Test fix - GraphExploreResponseTests should not randomise array elements Closes #33086
Use `addIfAbsent` instead of checking if an element is contained
TESTS: Fix Random Fail in MockTcpTransportTests (#33061)
HLRC: Fix Compile Error From Missing Throws (#33083)
[DOCS] Remove reload password from docs cf. #32889
HLRC: Add ML Get Buckets API (#33056)
Watcher: Improve error messages for CronEvalTool (#32800)
Search: Support of wildcard on docvalue_fields (#32980)
Change query field expansion (#33020)
INGEST: Cleanup Redundant Put Method (#33034)
SQL: skip uppercasing/lowercasing function tests for AZ locales as well (#32910)
Fix the default pom file name (#33063)
Switch ml basic tests to new style Requests (#32483)
Switch some watcher tests to new style Requests (#33044)
...
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/ml-basic-multi-node` project to use
the new versions.
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/audit-tests`,
`x-pack/qa/ml-disabled`, and `x-pack/qa/multi-node` projects to use the
new versions.
This commit moves the ML QA tests to be a sub-project of ML. The purpose
of this refactoring is to enable ML developers to run
:x-pack:plugin:ml:check and run the vast majority of ML tests with a
single command (this still does not contain the ML REST tests, nor the
upgrade tests). This simplifies local development for faster iteration.
This commit implements licensing for CCR. CCR will require a platinum
license, and administrative endpoints will be disabled when a license is
non-compliant.
Machine learning has baked a remote license checker for use in checking
license compatibility of a remote license. This remote license checker
has general usage for any feature that relies on a remote cluster. For
example, cross-cluster replication will pull changes from a remote
cluster and require that the local and remote clusters have platinum
licenses. This commit generalizes the remote cluster license check for
use in cross-cluster replication.
* ML: fix updating opened jobs scheduled events (#31651)
* Adding UpdateParamsTests license header
* Adding integration test and addressing PR comments
* addressing test and job names
This change adds a library to ML that can be used to deduce a log
file's structure given only a sample of the log file.
Eventually this will be used to add an endpoint to ML to make the
functionality available to end users, but this will follow in a
separate change.
The functionality is split into a library so that it can also be
used by a command line tool without requiring the command line
tool to include all server code.
This removes custom Response classes that extend `AcknowledgedResponse` and do nothing, these classes are not needed and we can directly use the non-abstract super-class instead.
While this appears to be a large PR, no code has actually changed, only class names have been changed and entire classes removed.
[ML] Removing old per-partition normalization code
Per-partition normalization is an old, undocumented feature that was
never used by clients. It has been superseded by per-partition maximum
scoring.
To maintain communication compatibility with nodes prior to 6.5 it is
necessary to maintain/cope with the old wire format.
Added infrastructure to push through the 'person name field value' to
the normalizer process. This is required by the normalizer to retrieve
the maximum scores for individual partitions.
* Clear Job#finished_time when it is opened (#32605)
* not returning failure when Job#finished_time is not reset
* Changing error log string and source string
The upcoming ML log structure finder functionality will use these
libraries, and it makes sense to use the same versions that are
being used elsewhere in Elasticsearch. This is especially true
with icu4j, which is pretty big.
This commit removes the never released multiple_bucket_spans
configuration parameter. This is now replaced with the new
multibucket feature that requires no configuration.
* Upgrade to `4.1.28` since the problem reported in #32487 is a bug in Netty itself (see https://github.com/netty/netty/issues/7337)
* Fixed other leaks in test code that now showed up due to improvements in leak reporting in the newer version
* Needed to extend permissions for netty common package because it now sets a classloader at runtime after changes in 63bae0956a
* Adjusted forbidden APIs check accordingly
* Closes #32487
Previously we had two patterns for naming of strict
and lenient parsers.
Some classes had CONFIG_PARSER and METADATA_PARSER,
and used an enum to pass the parser type to nested
parsers.
Other classes had STRICT_PARSER and LENIENT_PARSER
and used ternary operators to pass the parser type
to nested parsers.
This change makes all ML classes use the second of
the patterns described above.
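The standardized pattern looks roughly like this (class name hypothetical):
```
import org.elasticsearch.common.xcontent.ObjectParser;

class Foo {
    public static final ObjectParser<Foo, Void> LENIENT_PARSER = createParser(true);
    public static final ObjectParser<Foo, Void> STRICT_PARSER = createParser(false);

    private static ObjectParser<Foo, Void> createParser(boolean ignoreUnknownFields) {
        ObjectParser<Foo, Void> parser = new ObjectParser<>("foo", ignoreUnknownFields, Foo::new);
        // field declarations go here; nested parsers pick the matching leniency, e.g.
        //   ignoreUnknownFields ? Bar.LENIENT_PARSER : Bar.STRICT_PARSER
        return parser;
    }
}
```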
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.
This commit introduces "Application Privileges" to the X-Pack security
model.
Application Privileges are managed within Elasticsearch, and can be
tested with the _has_privileges API, but do not grant access to any
actions or resources within Elasticsearch. Their purpose is to allow
applications outside of Elasticsearch to represent and store their own
privileges model within Elasticsearch roles.
Access to manage application privileges is handled in a new way that
grants permission to specific application names only. This lays the
foundation for more OLS on cluster privileges, which is implemented by
allowing a cluster permission to inspect not just the action being
executed, but also the request to which the action is applied.
To support this, a "conditional cluster privilege" is introduced, which
is like the existing cluster privilege, except that it has a Predicate
over the request as well as over the action name.
Specifically, this adds
- GET/PUT/DELETE actions for defining application level privileges
- application privileges in role definitions
- application privileges in the has_privileges API
- changes to the cluster permission class to support checking of request
objects
- a new "global" element on role definition to provide cluster object
level security (only for manage application privileges)
- changes to `kibana_user`, `kibana_dashboard_only_user` and
`kibana_system` roles to use and manage application privileges
Closes #29820
Closes #31559
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
The initial decision to use async durability was made a long time ago
for performance reasons. That argument no longer applies and we
prefer the safety of request durability.
Prior to 6.3 trial licenses defaulted to security enabled. Since 6.3
they default to security disabled. If a cluster is upgraded from <6.3
to >6.3, then we detect this and mimic the old behaviour with respect
to security.
The ML config classes will shortly be moved to the X-Pack protocol
library to allow the ML APIs to be moved to the high level REST
client. Dependencies on server functionality should be removed
from the config classes before this is done.
This change is entirely about moving code between packages. It
does not add or remove any functionality or tests.
When an ML job cannot be allocated to a node the exception
contained an explanation of why the job couldn't be
allocated to each node in the cluster. For large clusters
this was not particularly easy to read and made the error
displayed in the UI look very scary.
This commit changes the structure of the error to an outer
ElasticsearchException with a high level message and an
inner IllegalStateException containing the detailed
explanation. Because the definition of root cause is the
innermost ElasticsearchException the detailed explanation
will not be the root cause (which is what Kibana displays).
Fixes #29950
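A sketch of the new error shape (message text illustrative):
```
import org.elasticsearch.ElasticsearchException;

class JobAllocationError {
    static ElasticsearchException build(String perNodeExplanation) {
        // the long per-node detail goes into a plain IllegalStateException, so the
        // outer ElasticsearchException stays the root cause that Kibana displays
        return new ElasticsearchException(
                "Could not open job because no suitable nodes were found",
                new IllegalStateException(perNodeExplanation));
    }
}
```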
Originally I put the X-Pack info object into the top level rest client
object. I did that because we thought we'd like to squash `xpack` from
the name of the X-Pack APIs now that it is part of the default
distribution. We still kind of want to do that, but at least for now we
feel like it is better to keep the high level rest client aligned with
the other language clients like C# and Python. This shifts the X-Pack
info API to align with its json spec file.
Relates to #31870
This is the first x-pack API we're adding to the high level REST client
so there is a lot to talk about here!
= Open source
The *client* for these APIs is open source. We're taking the previously
Elastic licensed files used for the `Request` and `Response` objects and
relicensing them under the Apache 2 license.
The implementation of these features is staying under the Elastic
license. This lines up with how the rest of the Elasticsearch language
clients work.
= Location of the new files
We're moving all of the `Request` and `Response` objects that we're
relicensing to the `x-pack/protocol` directory. We're adding a copy of
the Apache 2 license to the root of the `x-pack/protocol` directory to
line up with the language in the root `LICENSE.txt` file. All files in
this directory will have the Apache 2 license header as well. We don't
want there to be any confusion. Even though the files are under the
`x-pack` directory, they are Apache 2 licensed.
We chose this particular directory layout because it keeps the X-Pack
stuff together and easier to think about.
= Location of the API in the REST client
We've been following the layout of the rest-api-spec files for other
APIs and we plan to do this for the X-Pack APIs with one exception:
we're dropping the `xpack` from the name of most of the APIs. So
`xpack.graph.explore` will become `graph().explore()` and
`xpack.license.get` will become `license().get()`.
`xpack.info` and `xpack.usage` are special here though because they
don't belong to any proper category. For now I'm just calling
`xpack.info` `xPackInfo()` and intend to call usage `xPackUsage` though
I'm not convinced that this is the final name for them. But it does get
us started.
= Jars, jars everywhere!
This change makes the `xpack:protocol` project a `compile` scoped
dependency of the `x-pack:plugin:core` and `client:rest-high-level`
projects. I intend to keep it a compile scoped dependency of
`x-pack:plugin:core` but I intend to bundle the contents of the protocol
jar into the `client:rest-high-level` jar in a follow up. This change
has grown large enough at this point.
In that followup I'll address javadoc issues as well.
= Breaking-Java
This breaks the transport client by moving a few classes around. We've
traditionally been ok with doing this to the transport client.
Job persistent tasks with stale allocation IDs used to always be
considered as OPENING jobs in the ML job node allocation decision.
However, FAILED jobs are not relocated to other nodes, which leads
to them blocking up the nodes they failed on after node restarts.
FAILED jobs should not restrict how many other jobs can open on a
node, regardless of whether they are stale or not.
Closes #31794
Job updates or changes to calendars or filters may
result in updating the job process if it has been
running. To preserve the order of updates, process
updates are queued through the UpdateJobProcessNotifier
which is only running on the master node. All actions
performing such updates must run on the master node.
However, the CRUD actions for calendars and filters
are not master node actions. They have been submitting
the updates to the UpdateJobProcessNotifier even though
it might have not been running (given the action was
run on a non-master node). When that happens, the update
never reaches the process.
This commit fixes this problem by ensuring the notifier
runs on all nodes and by ensuring the process update action
gets the resources again before updating the process
(instead of having those resources passed in the request).
This ensures that even if the order of the updates
gets messed up, the latest update will read the latest
state of those resource and the process will get back
in sync.
This leaves us with 2 types of updates:
1. updates to the job config should happen on the master
node. This is because we cannot refetch the entire job
and update it. We need to know the parts that have been changed.
2. updates to resources the job uses. Those can be handled
on non-master nodes but they should be re-fetched by the
update process action.
Closes #31803
There is at most one model size stats document per bucket, but
during lookback a job can churn through many buckets very quickly.
This can lead to many cluster state updates if established model
memory needs to be updated for a given model size stats document.
This change rate limits established model memory updates to one
per job per 5 seconds. This is done by scheduling the updates 5
seconds in the future, but replacing the value to be written if
another model size stats document is received during the waiting
period. Updating the values in arrears like this means that the
last value received will be the one associated with the job in the
long term, whereas alternative approaches such as not updating the
value if a new value was close to the old value would not.
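A condensed sketch of that coalescing idea (scheduler and persistence assumed):
```
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class EstablishedModelMemoryUpdater {
    private final AtomicReference<Long> pending = new AtomicReference<>();
    private final ScheduledExecutorService scheduler;

    EstablishedModelMemoryUpdater(ScheduledExecutorService scheduler) {
        this.scheduler = scheduler;
    }

    void onNewValue(long establishedModelMemory) {
        // later values replace the pending one; only the first arms the timer,
        // so at most one update per job is written every 5 seconds
        if (pending.getAndSet(establishedModelMemory) == null) {
            scheduler.schedule(() -> persist(pending.getAndSet(null)), 5, TimeUnit.SECONDS);
        }
    }

    void persist(Long value) {
        // the cluster state update would happen here
    }
}
```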
This change adds stats about forecasts to the jobstats api as well as xpack/_usage. The following
information is collected:
_xpack/ml/anomaly_detectors/{jobid|_all}/_stats:
- total number of forecasts
- memory statistics (mean/min/max)
- runtime statistics
- record statistics
- counts by status
_xpack/usage
- collected by job status as well as overall (_all):
- total number of forecasts
- number of jobs that have at least 1 forecast
- memory, runtime, record statistics
- counts by status
Fixes #31395
* Remove deprecation warnings to prepare for Gradle 5
Gradle replaced `project.sourceSets.main.output.classesDir` of type
`File` with `project.sourceSets.main.output.classesDirs` of type
`FileCollection`
(see [SourceSetOutput](https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/java/org/gradle/api/tasks/SourceSetOutput.java))
Build output is now stored on a per language folder.
There are a few places where we use that, here's these and how it's
fixed:
- Randomized Test execution
- look in all test folders ( pass the multi dir configuration to the
ant runner )
- DRY the task configuration by introducing `basedOn` for
`RandomizedTestingTask` DSL
- Extend the naming convention test to support passing in multiple
directories
- Fix the standalone test plugin: the dirs were not passed through;
checked with a debugger, the statement had no effect due to a
missing `=`.
Closes #30354
* Only check Java tests, PR feedback
- Name checker was run for Groovy tests that don't adhere to the same
conventions, causing the check to fail
- implement PR feedback
* Replace `add` with `addAll`
This worked because the list is passed to `project.files` that does the
right thing.
* Revert "Only check Java tests, PR feedback"
This reverts commit 9bd9389875d8b88aadb50df57a45cd0d2b073241.
* Remove `basedOn` helper
* Bring some changes back
Previous revert accidentally reverted too much
* Fix negation
* add back public
* revert name check changes
* Revert "revert name check changes"
This reverts commit a2800c0b363168339ea65e2a79ec8256e5883e6d.
* Pass all dirs to name check
Only run on Java for build-tools, this is safe because it's a self test.
It needs more work before we could pass in the Groovy classes as well as
these inherit from `GroovyTestCase`
* remove self tests from name check
The self tests complicate the task setup and disable real checks on
build-tools.
With this change there are no more self tests, and the build-tools tests
adhere to the conventions.
The self test will be replaced by gradle test kit, thus the addition of
the Gradle plugin builder plugin.
* First test to run a Gradle build
* Add tests that replace the name check self test
* Clean up integ test base class
* Always run tests
* Align with test naming conventions
* Make integ. test case inherit from unit test case
The check requires this
* Remove `import static org.junit.Assert.*`
TransportAction currently contains 2 doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
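In effect (generics and imports abridged):
```
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.tasks.Task;

abstract class TransportActionSketch<Request, Response> {
    // before: a second overload doExecute(Request, ActionListener<Response>)
    // existed and silently dropped the task
    protected abstract void doExecute(Task task, Request request,
                                      ActionListener<Response> listener);
}
```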
This adds an api to allow updating a filter:
POST _xpack/ml/filters/{filter_id}/_update
The request body may have:
- description: setting a new description
- add_items: a list of the items to add
- remove_items: a list of the items to remove
This commit also changes the PUT filter api to
error when the filter_id is already used. As
now there is an api for updating filters, the
put api should only be used to create new ones.
Also, updating a filter results in a notification
message auditing the change for every job that is
using that filter.
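A hypothetical call (filter id and items made up):
```
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class UpdateFilterExample {
    static Response updateFilter(RestClient client) throws Exception {
        Request request = new Request("POST", "/_xpack/ml/filters/safe_domains/_update");
        request.setJsonEntity("{"
                + "\"description\": \"Known safe domains\","
                + "\"add_items\": [\"*.elastic.co\"],"
                + "\"remove_items\": [\"old-domain.example\"]"
                + "}");
        return client.performRequest(request);
    }
}
```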
In #29639 we added a `format` option to doc-value fields and deprecated usage
of doc-value fields without a format so that we could migrate doc-value fields
to use the format that comes with the mappings by default. However I forgot to
fix the machine-learning datafeed extractor.
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool then have a
member added to keep it from their own constructor.
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver then have a
member added to keep the resolver from their own constructor.
This commit makes it so that cluster state update tasks always run under the system context, only
restoring the original context when the listener that was provided with the task is called. A notable
exception is the clusterStatePublished(...) callback which will still run under system context,
because it's defined on the executor-level, and not the task level, and only called once for the
combined batch of tasks and can therefore not be uniquely identified with a task / thread context.
Relates #30603
This pull request removes the relationship between the state
of persistent task (as stored in the cluster state) and the status
of the task (as reported by the Task APIs and used in various
places) that have been confusing for some time (#29608).
In order to do that, a new PersistentTaskState interface is added.
This interface represents the persisted state of a persistent task.
The methods used to update the state of persistent tasks are
renamed: updatePersistentStatus() becomes updatePersistentTaskState()
and now takes a PersistentTaskState as a parameter. The
Task.Status type has been changed to PersistentTaskState in all
places where it makes sense (in persistent task customs in cluster
state and all other methods that deal with the state of an allocated
persistent task).
This adds a `description` to ML filters in order
to allow users to describe their filters in a human
readable form which is also editable (filter updates
to be added shortly).
This change prevents a datafeed using cross cluster search from starting if the remote cluster
does not have x-pack installed and a sufficient license. The check is made only when starting a
datafeed.
Rules allow users to supply a detector with domain
knowledge that can improve the quality of the results.
The model detects statistically anomalous results but it
has no knowledge of the meaning of the values being modelled.
For example, a detector that performs a population analysis
over IP addresses could benefit from a list of IP addresses
that the user knows to be safe. Then anomalous results for
those IP addresses will not be created and will not affect
the quantiles either.
Another example would be a detector looking for anomalies
in the median value of CPU utilization. A user might want
to inform the detector that any results where the actual
value is less than 5 is not interesting.
This commit introduces a `custom_rules` field to the `Detector`.
A detector may have multiple rules which are combined with `or`.
A rule has 3 fields: `actions`, `scope` and `conditions`.
Actions is a list of what should happen when the rule applies.
The current options include `skip_result` and `skip_model_update`.
The default value for `actions` is the `skip_result` action.
Scope is optional and allows for applying filters on any of the
partition/over/by field. When not defined the rule applies to
all series. The `filter_id` needs to be specified to match the id
of the filter to be used. Optionally, the `filter_type` can be specified
as either `include` (default) or `exclude`. When set to `include`
the rule applies to entities that are in the filter. When set to
`exclude` the rule only applies to entities not in the filter.
There may be zero or more conditions. A condition requires `applies_to`,
`operator` and `value` to be specified. The `applies_to` value can be
either `actual`, `typical` or `diff_from_typical` and it specifies
the numerical value to which the condition applies. The `operator`
(`lt`, `lte`, `gt`, `gte`) and `value` complete the definition.
Conditions are combined with `and` and allow specifying numerical
conditions for when a rule applies.
A rule must either have a scope or one or more conditions. Finally,
a rule with scope and conditions applies when all of them apply.
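Putting that together, a detector with one rule might look like this (field and filter names invented):
```
class CustomRuleExample {
    // a median CPU detector that skips results for IPs in the "safe_ips"
    // filter, but only when the actual value is below 5
    static final String DETECTOR_JSON = "{"
            + "\"function\": \"median\","
            + "\"field_name\": \"cpu_utilization\","
            + "\"over_field_name\": \"client_ip\","
            + "\"custom_rules\": [{"
            + "  \"actions\": [\"skip_result\"],"
            + "  \"scope\": {\"client_ip\": {\"filter_id\": \"safe_ips\", \"filter_type\": \"include\"}},"
            + "  \"conditions\": [{\"applies_to\": \"actual\", \"operator\": \"lt\", \"value\": 5.0}]"
            + "}]"
            + "}";
}
```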
This commit upgrades us to Netty 4.1.25. This upgrade is more
challenging than past upgrades, all because of a new object cleaner
thread that they have added. This thread requires an additional security
permission (set context class loader, needed to avoid leaks in certain
scenarios). Additionally, there is not a clean way to shutdown this
thread which means that the thread can fail thread leak control during
tests. As such, we have to filter this thread from thread leak control.
ObjectParser should throw XContentParseExceptions, not IAE. A dedicated parsing
exception can include the place where the error occurred.
Closes #30605
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
This commit renames methods in the PersistentTasksService, to
make obvious that the methods send requests in order to change
the state of persistent tasks.
Relates to #29608.
ML has dedicated APIs for datafeeds and jobs yet base test classes and
some tests were relying on the cluster state for this state. This commit
removes this usage in favor of using the dedicated endpoints.
This commit adds the ability to configure how a docvalue field should be
formatted, so that it would be possible eg. to return a date field
formatted as the number of milliseconds since Epoch.
Closes #27740
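For example, on a search source (field name hypothetical):
```
import org.elasticsearch.search.builder.SearchSourceBuilder;

class DocValueFormatExample {
    static SearchSourceBuilder source() {
        // return the date field as milliseconds since Epoch
        return new SearchSourceBuilder().docValueField("@timestamp", "epoch_millis");
    }
}
```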
Enables a rolling restart from the OSS distribution to the x-pack based distribution by preventing
x-pack code from installing custom metadata into the cluster state until all nodes are capable of
deserializing this metadata.
When doing a node restart using the test framework, the restarted node does not only use the
settings provided to the original node, but also additional settings provided by plugin extensions,
which does not correspond to the settings that a node would have on a true restart.
This change is to support rolling upgrade from a pre-6.3 default
distribution (i.e. without X-Pack) to a 6.3+ default distribution
(i.e. with X-Pack).
The ML metadata is no longer eagerly added to the cluster state
as soon as the master node has X-Pack available. Instead, it
is added when the first ML job is created.
As a result all methods that get the ML metadata need to be able
to handle the situation where there is no ML metadata in the
current cluster state. They do this by behaving as though an
empty ML metadata was present. This logic is encapsulated by
always asking for the current ML metadata using a static method
on the MlMetadata class.
Relates #30731
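The accessor described is essentially this sketch (the MlMetadata import and the empty-metadata constant name are assumed):
```
import org.elasticsearch.cluster.ClusterState;

class MlMetadataAccess {
    static MlMetadata getMlMetadata(ClusterState state) {
        MlMetadata mlMetadata = state.metaData().custom(MlMetadata.TYPE);
        // behave as though empty ML metadata were present when none exists yet
        return mlMetadata == null ? MlMetadata.EMPTY_METADATA : mlMetadata;
    }
}
```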
This change adds temporary storage for forecasting: it checks available
disk space and creates a subfolder for storing data outside of Lucene
indexes, but as part of the ES data paths.
Details:
- tmp storage is managed and does not allow allocation if disk space is
below a threshold (5GB at the moment)
- tmp storage is supposed to be managed by the native component but in
case this fails cleanup is provided:
- on job close
- on process crash
- after node crash, on restart
- available space is re-checked for every forecast call (the native
component has to check again before writing)
Note: The first path that has enough space is chosen on job open (job
close/reopen triggers a new search)
This change adds version information in case a native ML process crashes; the version is important for choosing the right symbol files when analyzing the crash. Adding the version puts all necessary information on one line.
relates elastic/ml-cpp#94
It is possible for state documents to be
left behind in the state index. This may be
because of bugs or uncontrollable scenarios.
In any case, those documents may take up quite
some disk space when they add up. This commit
adds a step in the expired data deletion that
is part of the daily maintenance service. The
new step searches for state documents that
do not belong to any of the current jobs and
deletes them.
Closes #30551
* Refactors ClientHelper to combine header logic
This change removes all the `*ClientHelper` classes which were
repeating logic between plugins and instead adds
`ClientHelper.executeWithHeaders()` and
`ClientHelper.executeWithHeadersAsync()` methods to centralise the
logic for executing requests with stored security headers.
* Removes Watcher headers constant
This change adds a grok_pattern field to the GET categories API
output in ML. It's calculated using the regex and examples in the
categorization result, and applying a list of candidate Grok
patterns to the bits in between the tokens that are considered to
define the category.
This can currently be considered a prototype, as the Grok patterns
it produces are not optimal. However, enough people have said it
would be useful for it to be worthwhile exposing it as experimental
functionality for interested parties to try out.
This commit fixes an issue with the data diagnostics where
empty buckets were not reported even though they should be. Once
a job is reopened, the diagnostics do not get initialized from
the current data counts (especially the latest record timestamp).
The result is that if the data that is sent has a time gap compared
to the previous ones, that gap is not accounted for in the empty bucket
count.
This commit fixes that by initializing the diagnostics with the current
data counts.
Closes #30080
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
This commit refactors the DataStreamDiagnostics class
achieving the following advantages:
- simpler code; by encapsulating the moving bucket histogram
into its own class
- better performance; by using an array to store the buckets
instead of a map
- explicit handling of gap buckets; in preparation of fixing #30080
The overall NOTICE file for the ML X-Pack module should
include the notices from the 3rd party C++ components as
well as the 3rd party Java components.
Tests need to wait for changes to the job's established memory usage to
propagate, and an over-enthusiastic optimisation meant jobs were updated
from stale state, causing recent changes to be lost.