This change cleans up "unused variable" warnings. There are several cases where we
most likely want to suppress the warnings (especially in the client documentation tests,
where the snippets contain many unused variables). In many cases, though, the unused
variables can simply be deleted.
Two issues are resolved:
1. The `value_type` should be either long or double in the case of numeric fields.
2. The key label for the aggregate filter (having) was a duplicate of an aggregation key label.
Fixes: #33520
* Changed the format of the String functions documentation page.
* Adopted the same format for Math functions, but completely changed the examples.
* Added missing documentation for Math functions.
This commit reverts most of #33157 as it introduces another race
condition and breaks a common watcher case, where the first watch is
added to the system and the index does not exist yet.
This means that the index will be created, which triggers a reload, but
during this time the put watch operation that triggered it has not yet been
indexed, so both processes finish at roughly the same time and
should not overwrite each other but complement each other.
This commit reverts the logic of cleaning out the ticker engine watches
on startup, as this is already done when execution is paused - which
also happens in the cluster state listener, as we can be sure here
that the watches index has not yet been created.
This also adds a new test that starts a one-node cluster and emulates
the case of a non-existing watches index and a watch being added, which
should result in proper execution.
Closes #33320
* Added TRUNCATE function, modified ROUND to accept two parameters instead of one. Made the second parameter optional for both functions.
* Added documentation for both functions.
Previously, multiple comma separated lists of options were not
recognized correctly, which resulted in only the last of them
being taken into account, e.g.:
For the following query:
SELECT * FROM test WHERE QUERY('search', 'default_field=foo', 'default_operator=and')
only the `default_operator=and` was finally passed to the ES query.
Fixes: #32602
Changes the format of log events in the audit logfile.
It also changes the filename suffix from `_access` to `_audit`.
The new entry format is consistent with Elastic Common Schema.
Entries are formatted as JSON with no nested objects and field
names have a dotted syntax. Moreover, log entries themselves
are not spaced by commas and there is exactly one entry per line.
In addition, entry fields are ordered, unlike a typical JSON doc,
such that a human would not strain their eyes over jumbled
fields from one line to the other; the order is defined in the log4j2
properties file.
The implementation utilizes log4j2's `StringMapMessage`.
This means that the application builds the log event as a map
and the log4j logic (the appender's layout) handles the formatting
internally. The layout, such as the set of printed fields and their
order, can be changed at runtime without restarting the node.
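For illustration, a minimal sketch of how a log event can be built as a map with log4j2's `StringMapMessage` (the field names here are illustrative, not the audit trail's exact set):

```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class AuditEventSketch {
    private static final Logger logger = LogManager.getLogger("xpack.security.audit.logfile");

    public static void main(String[] args) {
        // build the event as a flat key/value map; the appender's layout
        // decides field order and renders one JSON entry per line
        StringMapMessage event = new StringMapMessage()
            .with("event.type", "authentication_success")
            .with("user.name", "jdoe")
            .with("origin.address", "127.0.0.1");
        logger.info(event);
    }
}
```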
We have a test dependency on Apache Mina when using SimpleKdcServer
for testing Kerberos. When checking for LDAP backend connectivity,
the code checks for deadlocks, which requires the additional security
permission accessClassInPackage.sun.reflect. As this is only for
tests and we do not want to add security permissions to production,
this commit moves these tests and related classes to
x-pack evil-tests, where they can run with the security manager disabled.
The plan is to handle the security manager exception in the upstream issue
DIRMINA-1093
and then, once the release is available, to run these tests with the
security manager enabled.
Closes #32739
This change removes the wrapping of the created field in the put user
response. The created field was added as a top level field in #32332,
while also still being wrapped within the `user` object of the
response. Since the value is available in both formats in 6.x, we can
remove the wrapped version for 7.0.
Previously, when a non-pruned cast (casting to a different
data type) was applied to a table column in the `SELECT` clause,
the name of the result column didn't contain the target data type
of the cast, e.g.:
SELECT CAST(MAX(salary) AS DOUBLE) FROM "test_emp"
returned as column name:
CAST(MAX(salary))
instead of:
CAST(MAX(salary) AS DOUBLE)
Closes #33571
* Added more tests for trivial casts that are pruned
This commit adds settings upgraders for the search.remote.* settings
that can be in the cluster state to automatically upgrade these settings
to cluster.remote.*. Because of the infrastructure that we have here,
these settings can be upgraded when recovering the cluster state, but
also when a user tries to make a dynamic update for these settings.
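A minimal sketch of what such an upgrader might look like; the interface shown here is an assumption modeled on the description above, not necessarily the exact one in the codebase:

```
// Hypothetical upgrader interface: rewrites a deprecated setting's key (and
// optionally its value) when the cluster state is recovered or updated.
interface SettingUpgraderSketch {
    String getKey(String key);

    default String getValue(String value) {
        return value;
    }
}

class RemoteClusterSettingsUpgrader implements SettingUpgraderSketch {
    @Override
    public String getKey(final String key) {
        // e.g. search.remote.foo.seeds -> cluster.remote.foo.seeds
        return key.replaceFirst("^search\\.remote\\.", "cluster.remote.");
    }
}
```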
These logs are incredibly verbose, and they make chasing normal failures
burdensome. This commit removes the debug logging, which can be
re-enabled if needed.
Previously, when an arithmetic function was applied to a
table column in the `SELECT` clause, the name of the result
column contained internal characters used while
processing the SQL statement, e.g.:
SELECT CHAR(emp_no % 10000) FROM "test_emp"
returned:
CHAR((emp_no{f}#14) % 10000))
as the column name instead of:
CHAR((emp_no) % 10000))
Also, fix an issue that causes a ClassCastException to be thrown
when using functions where both arguments are literals.
Closes #31869
Closes #33461
Split function section into multiple chapters
Add String functions
Add (small) section on Conversion/Cast functions
Add missing aggregation functions
Enable documentation testing (was disabled by accident). While at it,
fix failing tests
Improve spec tests to allow multi-line queries (useful for docs)
Add ability to ignore a spec test (name should end with -Ignore)
With features like CCR building on the CCS infrastructure, the settings
prefix search.remote makes less sense as the namespace for these remote
cluster settings than does a more general namespace like
cluster.remote. This commit replaces these settings with cluster.remote
with a fallback to the deprecated settings search.remote.
Extend SHOW TABLES, DESCRIBE and SHOW COLUMNS to support table
identifiers, not just SQL LIKE patterns.
This allows both Elasticsearch-style multi-index patterns and SQL LIKE.
To disambiguate between the two (as the " vs ' can be easy to miss),
the grammar now requires LIKE keyword as a prefix for all LIKE-like
patterns.
Also added some docs comparing the two types of patterns.
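For example (a sketch of the two styles; index names are illustrative):

```
-- Elasticsearch-style multi-index pattern (double-quoted identifier):
SHOW TABLES "emp*";

-- SQL LIKE pattern, now requiring the explicit LIKE keyword:
SHOW TABLES LIKE 'emp%';
```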
Fix #33294
We need to wait for the job to fully initialize and start before
we can attempt to stop it. If we don't, it's possible for the stop
API to be called before the persistent task is fully loaded and it'll
throw an exception.
Closes #32773
In the previous commit where SmokeTestWatcherWithSecurityIT tests were
muted, I added the incorrect issue numbers. This commit fixes this. The
issue for the tests is #33320.
* Tests for JdbcResultSet
* Added VARCHAR conversion for different types
* Made error messages consistent: they now contain both the type that fails to be converted and the value itself
Authorization Realms allow an authenticating realm to delegate the task
of constructing a User object (with name, roles, etc) to one or more
other realms.
For example, a client could authenticate using PKI, but then delegate to an LDAP
realm. The LDAP realm performs a "lookup" by principal, and then does
regular role-mapping from the discovered user.
This commit includes:
- authorization_realm support in the pki, ldap, saml & kerberos realms
- docs for authorization_realms
- checks that there are no "authorization chains"
(whereby "realm-a" delegates to "realm-b", but "realm-b" delegates to "realm-c")
Authorization realms is a platinum feature.
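A minimal configuration sketch, assuming the 6.x realm settings layout (realm names and the LDAP URL are illustrative):

```
xpack.security.authc.realms:
  pki1:
    type: pki
    order: 0
    # delegate construction of the User object to the ldap1 realm
    authorization_realms: ldap1
  ldap1:
    type: ldap
    order: 1
    url: "ldaps://ldap.example.com:636"   # hypothetical LDAP server
```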
We need to limit the search request aggregations to whole multiples
of the configured interval for both histogram and date_histogram.
Otherwise, agg buckets won't overlap with the rolled up buckets
and the results will be incorrect.
For histogram, the validation is very simple: the request interval must be >= the
configured interval, and a whole multiple of it.
Dates are more tricky.
- If both request and config are fixed dates, we can convert to millis
and treat them just like the histo
- If both are calendar, we make sure the request is >= the config with
a static lookup map that ranks the calendar values relatively. All
calendar units are "singles", so they are evenly divisible already
- We disallow any other combination (one fixed, one calendar, etc)
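A minimal sketch of the fixed-interval check described above (method and message are illustrative):

```
// The request interval must be at least the configured interval and a whole
// multiple of it, otherwise request buckets won't align with rollup buckets.
static void validateInterval(long requestInterval, long configInterval) {
    if (requestInterval < configInterval || requestInterval % configInterval != 0) {
        throw new IllegalArgumentException("Cannot satisfy requested interval ["
            + requestInterval + "]: it must be a whole multiple of the configured interval ["
            + configInterval + "]");
    }
}
```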
This commit removes the unused User class from the protocol project.
This class was originally moved into protocol in preparation for moving
more request and response classes, but given the change in direction
for the HLRC this is no longer needed. Additionally, this change also
changes the package name for the User object in x-pack/plugin/core to
its original name.
This change adds support for the client credentials grant type to the
token api. The client credentials grant allows for a client to
authenticate with the authorization server and obtain a token to access
as itself. Per RFC 6749, a refresh token should not be included with
the access token and as such a refresh token is not issued when the
client credentials grant is used.
The addition of the client credentials grant will allow users
authenticated with mechanisms such as kerberos or PKI to obtain a token
that can be used for subsequent access.
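For illustration, a request using the new grant type might look like this (assuming the 6.x token endpoint; per the above, the response carries no refresh token):

```
POST /_xpack/security/oauth2/token
{
  "grant_type" : "client_credentials"
}
```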
* Adding new MonitoredSystem for APM server
* Teaching Monitoring template utils about APM server monitoring indices
* Documenting new monitoring index for APM server
* Adding monitoring index template for APM server
* Copy pasta typo
* Removing metrics.libbeat.config section from mapping
* Adding built-in user and role for APM server user
* Actually define the role :)
* Adding missing import
* Removing index template and system ID for apm server
* Shortening line lengths
* Updating expected number of built-in users in integration test
* Removing "system" from role and user names
* Rearranging users to make tests pass
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/saml-idp-tests` and
`x-pack/qa/security-setup-password-tests` projects to use the new
versions.
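The change is mechanical; roughly, each call moves from the deprecated signature to the `Request` object flavor:

```
// before (deprecated):
// Response response = client.performRequest("GET", "/_cluster/health");

// after: build a Request, set parameters/entity on it, then execute it
Request request = new Request("GET", "/_cluster/health");
request.addParameter("pretty", "true");
Response response = client.performRequest(request);
```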
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/smoke-test-monitoring-with-watcher`,
`x-pack/qa/smoke-test-watcher`, and
`x-pack/qa/smoke-test-watcher-with-security` projects to use the new
versions.
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/audit-tests`,
`x-pack/qa/ml-disabled`, and `x-pack/qa/multi-node` projects to use the
new versions.
This commit moves the ML QA tests to be a sub-project of ML. The purpose
of this refactoring is to enable ML developers to run
:x-pack:plugin:ml:check and run the vast majority of the ML tests with a
single command (this still does not include the ML REST tests, nor the
upgrade tests). This simplifies local development for faster iteration.
There are two problems with the scheduler engine today. Both relate to
listeners that throw.
The first problem is that any triggered listener that throws a plain old
exception will cause no additional listeners to be triggered for the
event, and will also cause the scheduler to never be invoked again. This
leads to lost events and is bad.
The second problem is that any triggered listener that throws an error
of the fatal kind will not lead to that error being caught by the
uncaught exception handler. This is because the triggered listener is
executed as a future task under a scheduled thread pool executor. A
throwable there gets caught by the JDK framework and set as the outcome
on the future task. Since we never inspect these tasks for their
outcomes, nor is there a good place to do this, we have to handle these
errors ourselves. To do this, we catch them and dispatch them to the
uncaught exception handler via a forked thread. This is similar to our
handling in Netty.
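A self-contained sketch of that dispatch pattern (names are illustrative, not the scheduler engine's actual code):

```
import java.util.function.Consumer;

final class ListenerNotifier {
    static <T> void notifyListener(Consumer<T> listener, T event) {
        try {
            listener.accept(event);
        } catch (Exception e) {
            // a plain exception must not stop other listeners or future triggers
            System.err.println("listener failed: " + e);
        } catch (Error e) {
            // rethrow on a forked thread so the uncaught exception handler sees
            // the fatal error instead of the future task swallowing it
            new Thread(() -> { throw e; }).start();
        }
    }
}
```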
As the multibucket feature was merged in, this test hit a side effect
which meant buckets trailing an anomaly could become anomalous.
This commit fixes the problem by filtering low score records when
we request them.
Elasticsearch versions earlier than 6.4.0 cannot properly run in a
FIPS 140 JVM. This commit ensures that we use a non-FIPS JVM for
nodes that we spin up in BWC tests even when we're testing FIPS.
* ML: fix updating opened jobs scheduled events (#31651)
* Adding UpdateParamsTests license header
* Adding integration test and addressing PR comments
* addressing test and job names
This change cleans up some methods in the CharArrays class from x-pack, which
includes the unification of char[] to utf8 and utf8 to char[] conversions that
intentionally do not use strings. There was previously an implementation in
x-pack and in the reloading of secure settings. The method from the reloading
of secure settings was adopted as it handled more scenarios related to the
backing byte and char buffers that were used to perform the conversions. The
cleaned up class is moved into libs/core to allow it to be used by requests
that will be migrated to the high level rest client.
Relates #32332
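A simplified sketch of the char[] to UTF-8 direction, assuming the approach of encoding via NIO buffers rather than strings (the real class handles more buffer corner cases):

```
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

final class CharArraysSketch {
    // convert without ever materializing a String, so the secret is not
    // interned or left behind for the garbage collector
    static byte[] toUtf8Bytes(char[] chars) {
        CharBuffer charBuffer = CharBuffer.wrap(chars);
        ByteBuffer byteBuffer = StandardCharsets.UTF_8.encode(charBuffer);
        byte[] bytes = Arrays.copyOfRange(byteBuffer.array(),
            byteBuffer.position(), byteBuffer.limit());
        Arrays.fill(byteBuffer.array(), (byte) 0); // zero the backing buffer
        return bytes;
    }
}
```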
This removes custom Response classes that extend `AcknowledgedResponse` and do nothing; these classes are not needed, and we can directly use the non-abstract superclass instead.
While this appears to be a large PR, no code has actually changed: only class names have been changed and entire classes removed.
We only upgrade the ID when the state is saved in one of four scenarios:
- when we reach a checkpoint (every 50 pages)
- when we run out of data
- when explicitly stopped
- on failure
The test was relying on the pre-upgrade to finish, save state, and then
the post-upgrade to start, hit the end of data and upgrade the ID, THEN
get the new doc and apply the new ID.
But I think this is vulnerable to timing issues. If the pre-upgrade
portion shut down before it saved the state, then when restarting we would run
through all the data from the beginning with the old ID, meaning both
docs would still have the old scheme.
This change makes the pre-upgrade wait for the job to go back to STARTED
so that we know it persisted the end point. Post-upgrade, it stops and
restarts the job to ensure the state was persisted and the ID upgraded.
That _should_ rule out the above timing issue.
Closes #32773
* Clear Job#finished_time when it is opened (#32605)
* not returning failure when Job#finished_time is not reset
* Changing error log string and source string
The QA tests with security haven't actually gone as far as testing security roles yet, so this is a start in the hopes of bringing the tests into the ILM plugin.
Skip the comparative tests using lowercasing/uppercasing against H2 (which considers the Locale).
ES-SQL is, so far, ignoring the Locale.
Still, the same queries are executed against ES-SQL alone and results asserted to be correct.
The Apache HttpComponents support for the SPNEGO scheme
uses the canonical host name by default.
Also, when resolving the host name on CentOS, there are by
default other aliases, so we add them to the
DelegationPermission.
Closes #32498
Bumping down the version to 6.4 since the backport is complete. Also
adds some missing version checks to the bwc tests to make sure they
only run on the correct versions.
Previously, we were using a simple CRC32 for the IDs of rollup documents.
This is a very poor choice, however, since 32-bit IDs lead to collisions
between documents very quickly.
This commit moves Rollups over to a 128bit ID. The ID is a concatenation
of all the keys in the document (similar to the rolling CRC before),
hashed with 128bit Murmur3, then base64 encoded. Finally, the job
ID and a delimiter (`$`) are prepended to the ID.
This guarantees that there are 128 bits per job. 128 bits should
essentially remove all chance of collisions, and the prepended
job ID means that _if_ there is a collision, it stays "within"
the job.
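A sketch of the described scheme, using Guava's 128-bit Murmur3 as a stand-in for the actual hashing utility (key joining and encoding details are illustrative):

```
import com.google.common.hash.Hashing;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

final class RollupIdSketch {
    static String documentId(String jobId, Map<String, Object> groupKeys) {
        // concatenate the group keys in a deterministic order
        StringBuilder joined = new StringBuilder();
        for (Map.Entry<String, Object> key : new TreeMap<>(groupKeys).entrySet()) {
            joined.append(key.getKey()).append('=').append(key.getValue()).append('#');
        }
        // 128-bit Murmur3 hash, base64 encoded
        byte[] hash = Hashing.murmur3_128()
            .hashString(joined, StandardCharsets.UTF_8).asBytes();
        String encoded = Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        // prepend the job id and the `$` delimiter to keep collisions "within" a job
        return jobId + "$" + encoded;
    }
}
```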
BWC notes:
We can only upgrade the ID scheme after we know there has been a good
checkpoint during indexing. We don't rely on a STARTED/STOPPED
status since we can't guarantee that it resulted from a real checkpoint,
or some other state. So we only upgrade the ID after we have reached
a checkpoint state during an active index run, and only after the
checkpoint has been confirmed.
Once a job has been upgraded and checkpointed, the version increments
and the new ID is used in the future. All new jobs use the
new ID from the start.
These tests ensure that the basic watch APIs are covered in the rolling
upgrade tests. After initially adding a watch, the tests try to get,
execute, deactivate and activate a watch. Watcher stats are tested as
well, and a separate Java-based test has been added for restarting, as that
requires waiting for a state change. Watcher history is also checked.
Closes #31216
The User class has been moved to the protocol project for upcoming work
to add more security APIs to the high level rest client. As part of
this change, the toString method no longer uses a custom output method
from MetadataUtils and instead just relies on Java's toString
implementation.
This commit does the following:
- renames index-lifecycle plugin to ilm
- modifies the endpoints to ilm instead of index_lifecycle
- drops _xpack from the endpoints
- drops a few duplicate endpoints
Added support for string manipulating functions with more than one parameter:
CONCAT, LEFT, RIGHT, REPEAT, POSITION, LOCATE, REPLACE, SUBSTRING, INSERT
This change updates KerberosAuthenticationIT to resolve the host used
to connect to the test cluster. This is needed because the host could
be an IP address but SPNEGO requires a hostname to work properly. This
is done by adding a hook in ESRestTestCase for building the HttpHost
from the host and port.
Additionally, the project now specifies the IPv4 loopback address as
the http host. This is done because we need to be able to resolve the
address used for the HTTP transport before the node starts up, but the
http.ports file is not written until the node is started.
Closes #32498
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:rolling-upgrade*` projects to use
the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/security-example-spi-extension`
project to use the new versions.
This commit removes the compile-time dependency on the copied keytab
being present in the generated sources, so pre-commit does not
stall on updating the Vagrant box.
Closes #32387
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:full-cluster-restart` project to use
the new versions.
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
This commit adds support for Kerberos authentication with a platinum
license. Kerberos authentication support relies on SPNEGO, which is
triggered by challenging clients with a 401 response with the
`WWW-Authenticate: Negotiate` header. A SPNEGO client will then provide
a Kerberos ticket in the `Authorization` header. The tickets are
validated using Java's built-in GSS support. The JVM uses a VM-wide
configuration for Kerberos, so there can be only one Kerberos realm.
This is enforced by a bootstrap check that also enforces the existence
of the keytab file.
In many cases a fallback authentication mechanism is needed when SPNEGO
authentication is not available. In order to support this, the
DefaultAuthenticationFailureHandler now takes a list of failure response
headers. For example, one realm can provide a
`WWW-Authenticate: Negotiate` header as its default and another could
provide `WWW-Authenticate: Basic` to indicate to the client that basic
authentication can be used in place of SPNEGO.
In order to test Kerberos, unit tests are run against an in-memory KDC
that is backed by an in-memory ldap server. A QA project has also been
added to test against an actual KDC, which is provided by the krb5kdc
fixture.
Closes #30243
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
The notion of "quality" is an overloaded term in the search ranking evaluation
context. Its usually used to decribe certain levels of "good" vs. "bad" of a
seach result with respect to the users information need. We currently report the
result of the ranking evaluation as `quality_level` which is a bit missleading.
This changes the response parameter name to `metric_score` which fits better.
* Remove BouncyCastle dependency from runtime
This commit introduces a new gradle project that contains
the classes that have a dependency on BouncyCastle. For
the default distribution, it builds a jar from those and
puts it in a subdirectory of lib
(/tools/security-cli) along with the BouncyCastle jars.
This directory is then passed in the
ES_ADDITIONAL_CLASSPATH_DIRECTORIES of the CLI tools
that use these classes.
BouncyCastle is removed as a runtime dependency (remains
as a compileOnly one) from x-pack core and x-pack security.
Resolving wildcards in aliases expression is challenging as we may end
up with no aliases to replace the original expression with, but if we
replace with an empty array that means _all which is quite the opposite.
Now that we support and serialize the original requested aliases,
whenever aliases are replaced we will be able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything in case it gets empty aliases, but the original aliases
were not empty. That means that empty aliases are interpreted as _all
only if they were originally requested that way.
Relates to #31516
The rollup indexer uses a range query to select the next page
of results based on the last time bucket of the previous round
and the `delay` configured on the rollup job. This query uses
the `epoch_millis` format implicitly but doesn't set the `format`.
This results in errors during the rollup job if the field
definition doesn't allow this format. It can also miss documents
if the format is not accepted but another format in the field
definition is able to parse the query (e.g. `epoch_second`).
This change ensures that we use `epoch_millis` as the only format
to parse the rollup range query.
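The generated range query now looks roughly like this, with the format set explicitly (field name and bounds are illustrative):

```
{
  "range": {
    "timestamp": {
      "gte": 1533081600000,
      "lt": 1533168000000,
      "format": "epoch_millis"
    }
  }
}
```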
* Detect and prevent configuration that triggers a Gradle bug
As we found in #31862, this can lead to a lot of wasted time as it's not
immediately obvious what's going on.
Given how many projects we have, it's getting increasingly easy to run
into gradle/gradle#847.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM, as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc., we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, but
as we control the testing infrastructure this should be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS ciphers, and the
SAMLAuthenticator class, as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
re-enabled with precomputed and signed SAML messages at a later stage.
IT will be covered in a subsequent PR
There have been changes in error messages for `SSLHandshakeException`.
This has caused a couple of failures in our tests.
This commit modifies the test verification to assert on the exception
type `SSLHandshakeException`.
There was another issue in Java 11 which caused an NPE. The bug has now
been fixed in Java 11 early access build 22.
Bug Ref: https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8206355
Enable the skipped tests due to this bug.
Closes #31940
There is currently no way to see what user executed a watch. This commit
adds the decrypted username to each execution in the watch history, in a
new field "user".
Closes #31772
The old RollupIT was a node IT, and flaky for a number of reasons.
This new version is an ESRestTestCase and should be a little more robust.
This was added to the multi-node QA tests as that seemed like the most
appropriate location. It didn't seem necessary to create a whole new
QA module.
Note: The only test that was ported was the "Big" test for validating
a larger dataset. The rest of the tests are represented in existing
yaml tests.
Closes #31258
Closes #30232
Related to #30290
We can leverage the composite agg's new `missing_bucket` feature on
terms groupings. This means the aggregation criteria used in the indexer
will now return null buckets for missing keys.
Because all buckets are now returned (even if a key is null),
we can guarantee correct doc counts with
"combined" jobs (where a job rolls up multiple schemas). This was
previously impossible since composite would ignore documents that
didn't have _all_ the keys, meaning non-overlapping schemas would
cause composite to return no buckets.
Note: date_histo does not use `missing_bucket`, since a timestamp is
always required.
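Roughly, a terms source in the indexer's composite aggregation now opts in to `missing_bucket` (field and source names are illustrative):

```
{
  "composite": {
    "sources": [
      { "node.terms": { "terms": { "field": "node", "missing_bucket": true } } },
      { "timestamp.date_histogram": { "date_histogram": { "field": "timestamp", "interval": "1h" } } }
    ]
  }
}
```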
The docs have been adjusted to recommend a single, combined job. They
also make reference to the previous issue to help users that are upgrading
(rather than just deleting the sections).
Historically we have loaded SSL objects (such as SSLContext,
SSLIOSessionStrategy) by passing in the SSL settings, constructing a
new SSL configuration from those settings and then looking for a
cached object that matches those settings.
The primary issue with this approach is that it requires a fully
configured Settings object to be available any time the SSL context
needs to be loaded. If the Settings include SecureSettings (such as
passwords for keys or keystores) then this is not true, and the cached
SSL object cannot be loaded at runtime.
This commit introduces an alternative approach of naming every cached
ssl configuration, so that it is possible to load the SSL context for
a named configuration (such as "xpack.http.ssl"). This means that the
calling code does not need to have ongoing access to the secure
settings that were used to load the configuration.
This change also allows monitoring exporters to use SSL passwords
from secure settings, however an exporter that uses a secure SSL setting
(e.g. truststore.secure_password) may not have its SSL settings updated
dynamically (this is prevented by a settings validator).
Exporters without secure settings can continue to be defined and updated
dynamically.
Added support for ASCII, BIT_LENGTH, CHAR, CHAR_LENGTH, LCASE, LENGTH, LTRIM, RTRIM, SPACE, UCASE functions.
Wherever Painless scripting is necessary (WHERE conditions, ORDER BY etc), those scripts are being used.
The test failure in #31916 revealed that updating
rules on a job was modifying the detectors list
in-place. That meant the old cluster state and the
updated cluster state had no difference and thus the
change was not propagated to non-master nodes.
This commit fixes that and also reviews all of ML
metadata in order to ensure immutability.
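The bug class is easy to reproduce in miniature; a hedged illustration (types and names are hypothetical):

```
import java.util.ArrayList;
import java.util.List;

final class UpdateRulesSketch {
    // Buggy: detectors.set(index, updated) would mutate the list shared with
    // the current cluster state, so old and new state compare equal and no
    // change is published to non-master nodes.
    static List<String> withUpdatedDetector(List<String> detectors, int index, String updated) {
        List<String> copy = new ArrayList<>(detectors); // copy-on-write instead
        copy.set(index, updated);
        return copy;
    }
}
```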
Closes #31916
A scroll holds a reference to the shard store. If the cluster is moving shards
around, that reference can prevent a shard from relocating back to the node it used
to be on, causing test failures.
Closes #31827
For historical reasons SQL restricts GROUP BY to only one field.
This commit removes the restriction and improves the test suite with
multi group by tests.
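For example, queries like the following are now accepted (using the test dataset's schema):

```
SELECT gender, languages, COUNT(*) FROM test_emp GROUP BY gender, languages;
```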
Close #31793
Register SQL as an xpackModule.
Specify a group for SQL QA to disambiguate projects (otherwise, due to an
old Gradle bug (https://github.com/gradle/gradle/issues/847), any
subprojects under SQL QA will not be able to refer to the SQL xpackModule).
Co-authored-by: Alpar Torok <torokalpar@gmail.com>
Significantly improve the example snippets in the documentation.
The examples are part of the test suite and checked nightly.
To help readability, the existing dataset was extended (test_emp renamed
to emp plus library).
Improve the output of JDBC tests to be consistent with the CLI.
Add a lenient flag to the JDBC asserts to allow type widening (a long is
equivalent to an integer as long as the value is the same).
Some proxies require all requests to have paths starting with / since
there are no relative paths at the HTTP connection level. Elasticsearch
assumes paths are absolute. In order to run rest tests against a cluster
behind such a proxy, set the system property
tests.rest.client_path_prefix to /.
Make the password hashing algorithm/cost configurable for the
stored passwords of users in the realms to which this applies
(native, reserved). Replaces the predefined choice of bcrypt with
cost factor 10.
This also introduces PBKDF2 with configurable cost
(number of iterations) as an algorithm option for password hashing
both for storing passwords and for the user cache.
Password hash validation algorithm selection takes into
consideration the stored hash prefix, and only a specific number
of algorithm and cost factor options for bcrypt and pbkdf2 are
whitelisted and can be selected in the relevant setting.
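For illustration, a configuration sketch assuming the setting name introduced by this change:

```
# store user passwords hashed with PBKDF2 at 10000 iterations
xpack.security.authc.password_hashing.algorithm: pbkdf2_10000
```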
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current one does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include number of non conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730, replace the task taking
into account the task providers.
Closes #31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
Merges the `query-builder-bwc` qa project into the
`full-cluster-restart` qa project, saving a cluster start on every
build and *many* cluster starts on `./gradlew bwcTests`.
This creates YAML test "features" that indicate if the cluster being
tested has xpack installed (`xpack`) or if it does *not* have xpack
installed (`no_xpack`). It uses those features to centralize skipping
a few tests that fail if xpack is installed.
The plan is to use this in a followup to skip docs tests that require
xpack when xpack is not installed. We *plan* to use the declaration
of required license level on the docs page to generate the required
`skip`.
Closes #30933.
This pull request adds a full cluster restart test for a Rollup job.
The test creates and starts a Rollup job on the cluster and checks
that the job already exists and is correctly started on the upgraded
cluster.
This test allows us to verify that the persistent task state is correctly
parsed from the cluster state after the upgrade, as the status field
has been renamed to state in #31031.
The test uncovers a ClassCastException that can be thrown in
the RollupIndexer when the timestamp has a very low value that fits
into an integer. When that's the case, the value is parsed back as an
Integer instead of a Long object and `(long) position.get(rollupFieldName)`
fails.
This adds an api to allow updating a filter:
POST _xpack/ml/filters/{filter_id}/_update
The request body may have:
- description: setting a new description
- add_items: a list of the items to add
- remove_items: a list of the items to remove
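For example (the filter id and items are hypothetical):

```
POST _xpack/ml/filters/safe_domains/_update
{
  "description": "Updated list of safe domains",
  "add_items": ["*.myorg.com"],
  "remove_items": ["wikipedia.org"]
}
```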
This commit also changes the PUT filter api to
error when the filter_id is already in use. As
there is now an api for updating filters, the
put api should only be used to create new ones.
Also, updating a filter results in a notification
message auditing the change for every job that is
using that filter.
This pull request removes the relationship between the state
of persistent task (as stored in the cluster state) and the status
of the task (as reported by the Task APIs and used in various
places) that have been confusing for some time (#29608).
In order to do that, a new PersistentTaskState interface is added.
This interface represents the persisted state of a persistent task.
The methods used to update the state of persistent tasks are
renamed: updatePersistentStatus() becomes updatePersistentTaskState()
and now takes a PersistentTaskState as a parameter. The
Task.Status type has been changed to PersistentTaskState in all
places where it makes sense (in persistent task customs in cluster
state and all other methods that deal with the state of an allocated
persistent task).
This adds a `description` to ML filters in order
to allow users to describe their filters in a human
readable form which is also editable (filter updates
to be added shortly).
We currently have a specific REST action to retrieve all aliases, which
internally uses the get index API. This doesn't seem to be required
anymore, though, as the existing RestGetAliasesAction could as well take
requests with no indices and aliases specified.
This commit removes the RestGetAllAliasesAction in favour of using
RestGetAliasesAction also for requests that don't specify indices nor
aliases. Similar to #31129.
Rules allow users to supply a detector with domain
knowledge that can improve the quality of the results.
The model detects statistically anomalous results but it
has no knowledge of the meaning of the values being modelled.
For example, a detector that performs a population analysis
over IP addresses could benefit from a list of IP addresses
that the user knows to be safe. Then anomalous results for
those IP addresses will not be created and will not affect
the quantiles either.
Another example would be a detector looking for anomalies
in the median value of CPU utilization. A user might want
to inform the detector that any results where the actual
value is less than 5 are not interesting.
This commit introduces a `custom_rules` field to the `Detector`.
A detector may have multiple rules which are combined with `or`.
A rule has 3 fields: `actions`, `scope` and `conditions`.
Actions is a list of what should happen when the rule applies.
The current options include `skip_result` and `skip_model_update`.
The default value for `actions` is the `skip_result` action.
Scope is optional and allows for applying filters on any of the
partition/over/by field. When not defined the rule applies to
all series. The `filter_id` needs to be specified to match the id
of the filter to be used. Optionally, the `filter_type` can be specified
as either `include` (default) or `exclude`. When set to `include`
the rule applies to entities that are in the filter. When set to
`exclude` the rule only applies to entities not in the filter.
There may be zero or more conditions. A condition requires `applies_to`,
`operator` and `value` to be specified. The `applies_to` value can be
either `actual`, `typical` or `diff_from_typical` and it specifies
the numerical value to which the condition applies. The `operator`
(`lt`, `lte`, `gt`, `gte`) and `value` complete the definition.
Conditions are combined with `and` and allow to specify numerical
conditions for when a rule applies.
A rule must either have a scope or one or more conditions. Finally,
a rule with scope and conditions applies when all of them apply.
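Putting it together, a detector rule might look like this (field and filter names are hypothetical):

```
{
  "custom_rules": [
    {
      "actions": ["skip_result"],
      "scope": {
        "client_ip": { "filter_id": "safe_ips", "filter_type": "include" }
      },
      "conditions": [
        { "applies_to": "actual", "operator": "lt", "value": 5 }
      ]
    }
  ]
}
```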
This is in preparation of pushing the new
rules design in the `ml-cpp` side. These
tests will be switched on again after merging
in the new rules implementation.
This commit upgrades us to Netty 4.1.25. This upgrade is more
challenging than past upgrades, all because of a new object cleaner
thread that they have added. This thread requires an additional security
permission (set context class loader, needed to avoid leaks in certain
scenarios). Additionally, there is not a clean way to shutdown this
thread which means that the thread can fail thread leak control during
tests. As such, we have to filter this thread from thread leak control.
The core REST tests with security currently use a hardcoded username and
password. This is not amenable to running these tests in scenarios where
the user controls the creation of the cluster and owns the credentials
for this cluster. This commit enables running the core REST tests with
security with a custom username and password.
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
having no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
Use all running nodes as unicast seeds in the rolling restart tests to
avoid a race between pinging and the tests. Without this if the tests
are too fast then when a new node comes up and pings its single
configured seed node that node *might* not have a ping from the other
running node.
I pushed a test that `assertBusy`s for a whole hour accidentally. I was
testing something and forgot to revert my local hack but caught it on
backport. This removes it.
This is much more realistic and can find more issues. This causes the
"mixed cluster" tests to be run twice so I had to fix the tests to work
in that case. In most cases I did as little as possible to get them
working but in a few cases I went a little beyond that to make them
easier for me to debug while getting them to work. My test changes:
1. Remove the "basic indexing" tests and replace them with a copy of the
tests used in the OSS. We have no way of sharing code between these two
projects so for now I copy.
2. Skip a few tests in the "one third" upgraded scenario:
* creating a scroll to be reused when the cluster is fully upgraded
* creating some ml data to be used when the cluster is fully upgraded
3. Drop many "assert yellow and that the cluster has two nodes"
assertions. These assertions duplicate those made by the wait condition
and they fail now that we have three nodes.
4. Switch many "assert green and that the cluster has two nodes" to 3
nodes. These assertions are unique from the wait condition and, while
I imagine they aren't required in all cases, now is not the time to
find that out. Thus, I made them work.
5. Rework the index audit trail test so it is more obvious that it is
the same test expecting different numbers based on the shape of the
cluster. The conditions for which number are expected are fairly
complex because the index audit trail is shut down until the template
for it is upgraded and the template is upgraded when a master node is
elected that has the new version of the software.
6. Add some more information to debug the index audit trail test because
it helped me figure out what was going on.
I also dropped the `waitCondition` from the `rolling-upgrade-basic`
tests because it wasn't needed.
Closes #25336
In case an error is returned when calling search_shards on a remote
cluster, which will lead to throwing an exception in the coordinating
node, we should make sure that the status code returned by the
coordinating node is the same as the one returned by the remote
cluster. Up until now a 500 - Internal Server Error was always
returned. This commit changes this behaviour so that, for instance, if an
index is not found, which causes a 404, a 404 is also returned by the
coordinating node to the client.
Closes #27461
This modifies the high level rest client to allow calling code to
customize per request options for the bulk API. You do the actual
customization by passing a `RequestOptions` object to the API call
which is set on the `Request` that is generated by the high level
client. It also makes the `RequestOptions` a thing in the low level
rest client. For now that just means you use it to customize the
headers and the `httpAsyncResponseConsumerFactory` and we'll add
node selectors and per request timeouts in a follow up.
I only implemented this on the bulk API because it is the first one
in the list alphabetically and I wanted to keep the change small
enough to review. I'll convert the remaining APIs in a followup.
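Roughly, the call shape is as follows (this is a sketch of the intended usage, with an illustrative header; not necessarily the final API surface):

```
RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
options.addHeader("X-Opaque-Id", "my-bulk-request"); // per-request customization
BulkResponse response = client.bulk(bulkRequest, options.build());
```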
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
Currently failures to compile a script usually lead to a ScriptException, which
inherits the 500 INTERNAL_SERVER_ERROR from ElasticsearchException if it does
not contain another root cause. Instead, this should be a 400 Bad Request error.
This PR changes this more generally for script compilation errors by changing
ScriptException to return 400 (bad request) as status code.
Closes #12315
Limits the scope of the runtime dependency on
BouncyCastle so that it can be eventually removed.
* Splits functionality related to reading and generating certificates
and keys in two utility classes so that reading certificates and
keys doesn't require BouncyCastle.
* Implements a class for parsing PEM Encoded key material (which also
adds support for reading PKCS8 encoded encrypted private keys).
* Removes the BouncyCastle dependency from all of our test suites (except
for the tests that explicitly test certificate generation) by using
pre-generated keys/certificates/keystores.
Removes the last remaining server dependencies from jdbc client. In order to do that it introduces the new project sql-shared-proto that contains only XContent-serializable classes. HTTP Client and JDBC now depend only on sql-shared-proto. I had to keep the original sql-proto project since it is used as a dependency by sql-cli and security integration tests.
Relates #29856
Persistent tasks was moved from X-Pack to core in #28455.
However, registration of the named writables and named
X-content was left in X-Pack.
This change moves the registration of the named writables
and named X-content into core. Additionally, the persistent
task actions are no longer registered in the X-Pack client
plugin, as they are already registered in ActionModule.
This commit adds the ability to configure how a docvalue field should be
formatted, so that it would be possible, e.g., to return a date field
formatted as the number of milliseconds since the epoch.
Closes #27740
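For example, a search request can now ask for a formatted doc value (field name illustrative):

```
{
  "docvalue_fields": [
    { "field": "timestamp", "format": "epoch_millis" }
  ]
}
```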
Adding headers rather than setting them all at once seems more
user-friendly and we already do it in a similar way for parameters
(see Request#addParameter).
Make all bool constructs use match/should (that is, a query context), as
that is controlled and changed to a filter context by ES automatically
based on the sort order (_doc, field vs _sort) and trackScores.
Fix #29685
By default ML native processes are only allowed to use
30% of RAM, so the previous 2GB setting prevented the
test passing on VMs with only 4GB RAM. This change
reduces the limit to 1200MB, which means it can now
pass on VMs with 4GB RAM.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
As the first record is random, there's a chance it will
be aligned on a bucket start. Thus we need to check the
bucket count is in [23, 24].
Closes #30715
These tests aim to check the set model memory limit is
respected. Additionally, it was asserting counts of
partition, by, over fields in an attempt to check that
the used memory is spent meaningfully. However, this
made the tests fragile, as changes in the ml-cpp could
lead to CI failures.
This commit removes those assertions. We are working on
adding tests in ml-cpp that will compensate.
diskspace and creates a subfolder for storing data outside of Lucene
indexes, but as part of the ES data paths.
Details:
- tmp storage is managed and does not allow allocation if disk space is
below a threshold (5GB at the moment)
- tmp storage is supposed to be managed by the native component but in
case this fails cleanup is provided:
- on job close
- on process crash
- after node crash, on restart
- available space is re-checked for every forecast call (the native
component has to check again before writing)
Note: The 1st path that has enough space is chosen on job open (job
close/reopen triggers a new search)
It is possible for state documents to be
left behind in the state index. This may be
because of bugs or uncontrollable scenarios.
In any case, those documents may take up quite
some disk space when they add up. This commit
adds a step in the expired data deletion that
is part of the daily maintenance service. The
new step searches for state documents that
do not belong to any of the current jobs and
deletes them.
Closes #30551
The part of the history template responsible for slack attachments had a
dynamic mapping configured, which could lead to problems when a string
value looking like a date was configured in the value field of an
attachment.
This commit fixes the template by always setting this field to text.
This also requires a change in the template numbering to be sure this
will be applied properly when starting watcher.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple tests which were missing task dependencies, which
failed once the gradle execution order changed.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way, because
sometimes they might be behind one shard and sometimes behind two
shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
This commit adds the ability to specify a plugin from maven for a
test cluster to use. Currently, only local projects may be used as
plugins, except when testing bwc, where the coordinates of the project
are used. However, that assumes all projects always keep the same
coordinates, or are even still plugins, which is no longer the case for
x-pack. The full cluster and rolling restart tests are changed to use
this new method when pulling x-pack versions before 6.3.0.
Modifies the SQL tests to use the new `Request` object flavored methods
introduced onto the `RestClient` in #29623. We'd like to remove the old
methods eventually so we should stop using them.
Tweak the return data, in particular with regard to ODBC columns, to
better align with the spec.
Fix the order for SYS TYPES and TABLES according to the JDBC/ODBC spec.
Fix #30386
Fix #30521
This commit adds a general state listener to the SecurityIndexManager,
and replaces the existing health and up-to-date listeners with that. It
also moves helper methods relating to health to SecurityIndexManager
from SecurityLifecycleService.
This commit moves the generated-resources directory to be within
the build directory for the openldap-tests and saml-idp-tests
projects. Both projects create a generated-resources directory that
should have been in the build directory but were instead at the same
level as the build directory.
This commit renames IndexLifecycleManager to SecurityIndexManager as it
is not actually a general purpose class, but specific to security. It
also removes indirection in code calling the lifecycle service, instead
calling the security index manager directly.
* master: (35 commits)
DOCS: Correct mapping tags in put-template api
DOCS: Fix broken link in the put index template api
Add put index template api to high level rest client (#30400)
Relax testAckedIndexing to allow document updating
[Docs] Add snippets for POS stop tags default value
Move respect accept header on no handler to 6.3.1
Respect accept header on no handler (#30383)
[Test] Add analysis-nori plugin to the vagrant tests
[Docs] Fix bad link
[Docs] Fix end of section in the korean plugin docs
Expose the Lucene Korean analyzer module in a plugin (#30397)
Docs: remove transport_client from CCS role example (#30263)
[Rollup] Validate timezone in range queries (#30338)
Use readFully() to read bytes from CipherInputStream (#28515)
Fix docs Recently merged #29229 had a doc bug that broke the doc build. This commit fixes.
Test: remove cluster permission from CCS user (#30262)
Add Get Settings API support to java high-level rest client (#29229)
Watcher: Remove unneeded index deletion in tests
Set the new lucene version for 6.4.0
[ML][TEST] Clean up jobs in ModelPlotIT
...
Today when processing a request for a URL path for which we can not find
a handler we send back a plain-text response. Yet, we have the accept
header in our hand and can respect the accepted media type of the
request. This commit addresses this.
This commit updates the multi cluster search test with security so that
the user that is simply performing a multi cluster search does not have
any cluster permissions. This is done as none are needed by this user
and excess privileges could mask a behavior change.
QA tests that use security need to use a trial license instead of a
basic license. Basic licenses do not enable security so these tests are
not running in the expected configuration. This can also lead to issues
that seem completely unrelated such as authentication failures right
after cluster formation.
The authentication failure right after cluster formation happens since
a request makes it past the authentication license checks and the
request starts the authentication process. During authentication the
cluster forms and the license is updated to a basic license, which
disables all realms. The request that is being authenticated then tries
to iterate over the realms, but the realms are empty and the request
cannot be authenticated. This results in an authentication failure even
though the credentials provided are correct.
Closes #30306
When dealing with filtering, a composite aggregation might return empty
buckets (which have been filtered out), which get sent as is to the client.
Unfortunately, the client interprets the response as "no more data" instead of
retrying.
This now has changed and the listener keeps retrying until either the
query has ended or data passes the filter.
Fix #30292
This commit fixes an issue with the data diagnostics where
empty buckets are not reported even though they should be. Once
a job is reopened, the diagnostics do not get initialized from
the current data counts (especially the latest record timestamp).
The result is that if the data that is sent has a time gap compared
to the previous data, that gap is not accounted for in the empty bucket
count.
This commit fixes that by initializing the diagnostics with the current
data counts.
Closes #30080
Each test now has its own watch id that is being used.
This ensures there are no old history entries, which can potentially
lead to broken test assertions.
The current implementation starts/stops watcher using an executor. This
can result in out-of-order operations.
This commit reduces those executor calls to an absolute minimum in order
to be able to do state changes within the cluster state listener method,
which runs in sequence.
When a state change occurs that forces the watcher service to pause
(like no watcher index, no master node, no local shards), the service is
now in a paused state.
Pausing is a super lightweight operation, which marks the
ExecutionService as paused and waits for the currently executing watches
to finish in the background via an executor. The same applies for
stopping, the potentially long running operation is outsourced in to an
executor, as waiting for executed watches is decoupled from the current
state.
The only other long running operation is starting, where watches need to
be loaded. This is also done via an executor, but has an additional
protection by checking the cluster state version it was started with. If
another cluster state version was trying to load the watches, then this
loading will not take effect.
This PR also cleans up some unused state, like the simple boolean in
the HistoryStore/TriggeredWatchStore marking it as started or stopped,
as this can now be caught in the execution service.
Another advantage of this approach is the fact that now only triggered
watches are not executed, while watches that are run via the
Execute Watch API will still be executed regardless of whether watcher is
stopped or not.
Lastly the TickerScheduleTriggerEngine thread now only starts on data nodes.
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
The elasticsearch-users utility had various messages that were
outdated or incorrect. This commit updates the output from this
command to reflect current terminology and configuration.
This commit increases the logging for authentication in the x-pack
multi-node qa test project. This is needed to assist in debugging HTTP
authorization failures in waiting for the second node in these tests.
See #30306
We had a number of awaitsFix links that weren't updated after the xpack
merge.
Where possible I changed the links to the new locations, but in some
circumstances the original ticket was closed (suggesting the awaitsfix
should be removed) or the status was otherwise unclear.
This commit moves the repository-s3 fixture test added in #29296 in a
new `repository-s3/qa/amazon-s3` project. This new project allows the
REST integration tests to be executed using the real S3 service when
all the required environment variables are provided. When no env var
is provided, then the tests are executed using the fixture added
in #29296.
The REST tests located in the `repository-s3` plugin project now only
verify that the plugin is correctly loaded.
The REST tests have been adapted to allow a bucket name and a base
path to be specified as env vars. This way it is possible to run the tests
with different base paths (could be anything, like a CI job name or a
branch name) without multiplying buckets.
Related to #29349
This removes some monitoring tests that have been silenced for a long
time. These tests don't really provide any value for the upgrade suite and
they just create noise due to their occasional timing-related failures.
Add the oss tar distribution to the packaging test plugin. Test the oss
tar distribution in the core packaging tests, and the non-oss tar
distribution in the x-pack packaging tests.
In order to have less qa/ projects, this commit removes the
watcher-smoke-test-mustache and the watcher-smoke-test-painless projects
and moves the tests of both into the watcher-smoke-test projects.
This results in fewer projects to parse when calling gradle and a shorter test
time due to being able to run everything within one elasticsearch
instance.
Tests need to wait for changes to the job's established memory usage to
propagate, and an over-enthusiastic optimisation meant jobs were updated
from stale state, causing recent changes to be lost.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.