Analysis limits contain settings that affect the resources
used by ML jobs. Those limits always apply. However,
explicitly setting them is not required as they have reasonable
defaults. For a long time those defaults lived on the C++ side:
a job could simply have no explicit limits, in which case the
defaults were applied on the C++ side. This has the disadvantage
that it is not obvious to users what these settings are set to.
Additionally, users might not even be aware that the settings exist.
On top of that, in 6.1 the default model_memory_limit was lowered
from 4GB to 1GB. For BWC, this meant that for jobs where model_memory_limit
is null the old default of 4GB applies, while jobs created from 6.1
onwards contain an explicit setting for model_memory_limit, which is
1GB unless the user sets it differently. This adds further confusion.
This commit makes analysis limits an always explicit setting on the job.
Regardless of whether the user sets custom limits or not, the job object
(and response) will contain the full analysis limits values.
The possibilities for interpretation of missing values are:
- the entire analysis_limits is null: this may only happen for jobs
created prior to 6.1. Thus we set the model_memory_limit to 4GB.
- analysis_limits is non-null but model_memory_limit is null: this also
may only happen for jobs created prior to 6.1. Again, we set the memory
limit to 4GB.
- model_memory_limit is non-null: this either means the user set an
explicit value or the job was created from 6.1 onwards and it has
the explicit default of 1GB. We simply keep the given value.
For categorization_examples_limit the default has always been 4, so
we fill that in when it's missing.
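Roughly, the resulting interpretation logic looks like the sketch below;
the class and constant names are illustrative, not the actual x-pack ML code:
```
// Illustrative sketch of the defaulting rules above; names are made up
// for this example and are not the actual x-pack ML classes.
public class AnalysisLimitsDefaultsSketch {

    // Values in MB: pre-6.1 jobs implicitly used 4GB, 6.1+ jobs get an explicit 1GB.
    static final long PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB = 4096;
    static final long DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT = 4;

    record Limits(Long modelMemoryLimitMb, Long categorizationExamplesLimit) {}

    /** Returns limits with every missing value filled in explicitly. */
    static Limits fillDefaults(Limits limits) {
        if (limits == null) {
            // analysis_limits entirely null: the job predates 6.1, so the old 4GB default applies.
            return new Limits(PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB, DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT);
        }
        long modelMemoryLimitMb = limits.modelMemoryLimitMb() != null
                ? limits.modelMemoryLimitMb()            // explicit user value, or the 6.1+ default of 1GB
                : PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB; // a null limit also means a pre-6.1 job
        long categorizationExamplesLimit = limits.categorizationExamplesLimit() != null
                ? limits.categorizationExamplesLimit()
                : DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT;
        return new Limits(modelMemoryLimitMb, categorizationExamplesLimit);
    }

    public static void main(String[] args) {
        System.out.println(fillDefaults(null));                    // pre-6.1 job: 4096 MB, 4 examples
        System.out.println(fillDefaults(new Limits(null, null)));  // pre-6.1 job with an empty limits object
        System.out.println(fillDefaults(new Limits(1024L, null))); // 6.1+ job: keep the explicit 1GB
    }
}
```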
Finally, note that we still need to handle potential null values
in the case of a mixed-version cluster.
Original commit: elastic/x-pack-elasticsearch@5b6994ef75
This adds a new Rollup module to XPack, which allows users to configure periodic "rollup jobs" to pre-aggregate data. That data is then available later for search through a special RollupSearch API, which mimics the DSL and functionality of regular search.
Rollups are used to drastically reduce the on-disk footprint of metric-based data (e.g. timestamped documents with numeric and keyword fields). It can also be used to speed up aggregations over large datasets, since the rolled-up data will be considerably smaller and contain fewer documents to search.
The PR adds seven new endpoints to interact with Rollups: create/get/delete job, start/stop job, a capabilities API similar to field-caps, and a Rollup-enabled search.
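For a rough illustration of the job lifecycle, a create-and-start sequence could look like the sketch below, using the low-level REST client; the endpoint paths and config field names follow the later public Rollup API and are assumptions relative to this PR:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class RollupJobSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Create a job that rolls hourly summaries of sensor metrics into a dedicated index.
            Request putJob = new Request("PUT", "/_rollup/job/sensor_hourly");
            putJob.setJsonEntity("""
                {
                  "index_pattern": "sensor-*",
                  "rollup_index": "sensor_rollup",
                  "cron": "0 0 * * * ?",
                  "page_size": 1000,
                  "groups": {
                    "date_histogram": { "field": "timestamp", "fixed_interval": "1h" }
                  },
                  "metrics": [ { "field": "temperature", "metrics": ["min", "max", "avg"] } ]
                }
                """);
            client.performRequest(putJob);

            // Start the job; rolled-up data then becomes searchable via the Rollup-enabled search.
            client.performRequest(new Request("POST", "/_rollup/job/sensor_hourly/_start"));
        }
    }
}
```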
Original commit: elastic/x-pack-elasticsearch@dcde91aacf
This adds a new setting, `xpack.monitoring.collection.enabled`, and
disables it by default (`false`).
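For reference, turning collection back on looks roughly like the sketch below; this assumes the setting is dynamically updatable via the cluster settings API (otherwise it belongs in elasticsearch.yml):
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class EnableMonitoringCollectionSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_cluster/settings");
            // Re-enable monitoring collection, which this change turns off by default.
            request.setJsonEntity("{ \"persistent\": { \"xpack.monitoring.collection.enabled\": true } }");
            System.out.println(client.performRequest(request).getStatusLine());
        }
    }
}
```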
Original commit: elastic/x-pack-elasticsearch@4b3a5a1161
... yet support updates. This commit introduces a few changes to how
watches are put.
The GET Watch API now never returns credentials like basic auth
passwords, but a placeholder instead. If Watcher is configured to
encrypt sensitive settings, then the original encrypted value is
returned; otherwise a "::es_redacted::" placeholder is returned.
There have been several Put Watch API changes.
The API now internally uses the Update API and versioning. This has
several implications. First, if no version is supplied, we assume an
initial creation. This works as before; however, if a credential is
marked as redacted we reject storing the watch, so users do not
accidentally store the wrong watch.
The watch xcontent parser now has an additional method to tell the
caller whether redacted passwords have been found. Based on this
information an error can be thrown.
If the user now stores a watch that contains a password marked
as redacted, this password will not be part of the toXContent
representation of the watch, and in combination with the update request
the existing password will be merged in. If the encrypted password is
supplied, that one will be stored.
The serialization for GetWatchResponse/PutWatchRequest has changed.
The version checks for this will be put into the 6.x branch.
The Watcher UI now needs to specify the version when it wants to store a
watch. This also prevents last-write-wins scenarios and is the reason
why the put/get watch response now contains the internal version.
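From a client's perspective the flow is roughly the sketch below; the endpoint paths and the name of the version parameter are assumptions based on the description above:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class WatchUpdateSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // 1. Fetch the watch: basic auth passwords come back as "::es_redacted::"
            //    (or as the encrypted value when sensitive-settings encryption is enabled).
            Response get = client.performRequest(new Request("GET", "/_xpack/watcher/watch/my_watch"));
            System.out.println(get.getStatusLine());

            // 2. Store the modified watch, passing the version taken from the get response.
            //    The redacted placeholder can stay in the body; the server merges the
            //    existing password back in as part of the update.
            Request put = new Request("PUT", "/_xpack/watcher/watch/my_watch");
            put.addParameter("version", "3"); // version obtained from the get response
            put.setJsonEntity("{"
                    + "\"trigger\": { \"schedule\": { \"interval\": \"10m\" } },"
                    + "\"input\": { \"http\": { \"request\": { \"host\": \"example.org\", \"port\": 443,"
                    + "  \"auth\": { \"basic\": { \"username\": \"elastic\", \"password\": \"::es_redacted::\" } } } } },"
                    + "\"actions\": {}"
                    + "}");
            client.performRequest(put);
        }
    }
}
```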
relates elastic/x-pack-elasticsearch#3089
Original commit: elastic/x-pack-elasticsearch@bb63be9f79
* Additional settings for SAML NameID policy
We should not be populating SPNameQualifier by default as it is
intended to be used to specify an alternate SP EntityID rather than
our own. Some IdPs (ADFS) fail when presented with this value.
This commit
- makes the SPNameQualifier a setting that defaults to blank
- adds a setting for "AllowCreate"
- documents the above
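For illustration, a realm using the settings above might be configured roughly as follows; the keys normally live in elasticsearch.yml, and the exact setting names are assumptions based on the description above:
```
import org.elasticsearch.common.settings.Settings;

public class SamlNameIdSettingsSketch {
    // Illustrative only: the "nameid.sp_qualifier" and "nameid.allow_create" keys
    // are assumptions based on this commit's description.
    static final Settings REALM_SETTINGS = Settings.builder()
            .put("xpack.security.authc.realms.saml1.type", "saml")
            .put("xpack.security.authc.realms.saml1.order", 2)
            // Defaults to blank now; only set it when an alternate SP entity ID is required.
            .put("xpack.security.authc.realms.saml1.nameid.sp_qualifier", "")
            // Controls the AllowCreate flag sent in the NameID policy of authn requests.
            .put("xpack.security.authc.realms.saml1.nameid.allow_create", true)
            .build();

    public static void main(String[] args) {
        System.out.println(REALM_SETTINGS);
    }
}
```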
Original commit: elastic/x-pack-elasticsearch@093557e88f
The docs include portions of the SQL tests and for that to work they
need to point to position of the tests. They use a relative directory
but relative to *what*? That turns out to be a fairly complex thing to
answer. Luckily, `index.x.asciidoc` defines `xes-repo-dir`, which points
to the root of the xpack docs. We can use that to find the sql tests
without having to answer the "relative to what?" question in two places.
Original commit: elastic/x-pack-elasticsearch@ebea586fdf
Includes:
- docs for new realm type "saml"
- docs for new settings for SAML realms
- a guide for setting up SAML across ES + Kibana
Original commit: elastic/x-pack-elasticsearch@85f8f6d409
This allows any datetime function to be present in `EXTRACT` which feels
more consistent. `EXTRACT(FOO FROM bar)` is now just sugar for
`FOO(bar)`. This is *much* simpler to explain in the documentation than
"these 10 fields are supported by extract and they are the same as this
subset of the datetime functions."
The implementation of this is a little simpler than the old way. Instead
of resolving the function in the parser we create an
`UnresolvedFunction` that looks *almost* just like what we'd create for
a single argument function and resolve the function in the `Analyzer`.
This feels like a net positive as it allows us to group `EXTRACT`
resolution failures with other function resolution failures.
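To illustrate the shape of the change, here is a toy sketch of the desugaring; the types below are simplified stand-ins, not the actual sql module classes:
```
import java.util.List;

// Toy stand-ins for the real expression tree types.
record Attribute(String name) {}
record UnresolvedFunction(String functionName, List<Attribute> arguments, boolean viaExtract) {}

public class ExtractDesugarSketch {
    // EXTRACT(FOO FROM bar) is parsed into the same shape as FOO(bar); the analyzer
    // later resolves "FOO" against the datetime functions and reports an
    // "unresolved function" failure if it does not exist.
    static UnresolvedFunction desugarExtract(String field, Attribute argument) {
        return new UnresolvedFunction(field.toUpperCase(java.util.Locale.ROOT), List.of(argument), true);
    }

    public static void main(String[] args) {
        // EXTRACT(YEAR FROM hire_date)  ==>  YEAR(hire_date)
        System.out.println(desugarExtract("year", new Attribute("hire_date")));
    }
}
```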
This also creates `UnresolvedFunctionTests` and
`UnresolvedAttributeTests`. I had to create `UnresolvedFunctionTests`
because `UnresolvedFunction` now has three boolean parameters which is
incompatible with the generic `NodeSubclassTests`'s requirement that all
ctor parameters be unique. I created `UnresolvedAttributeTests` because
I didn't want `UnresolvedFunctionTests` to call `NodeSubclassTests` and
figured that we'd want `UnresolvedAttributeTests` eventually and now felt
like as good a time as any.
Added a
Original commit: elastic/x-pack-elasticsearch@358aada308
We don't need the double quotes. Also, we follow up with an example that
shows how to write them in yml.
Original commit: elastic/x-pack-elasticsearch@835deca6f9
Introduce system commands as an alternative to meta HTTP endpoints
Pass in the cluster name
Use 'BASE TABLE' instead of 'INDEX' when describing a table to stick
with the SQL terminology
Original commit: elastic/x-pack-elasticsearch@600312b8f7
This change removes the XPackExtension mechanism in favor of
SecurityExtension, which can be loaded via SPI and doesn't need
another (duplicate) plugin infrastructure.
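For illustration, the SPI wiring looks roughly like the sketch below; the interface here is a placeholder, not the actual SecurityExtension contract:
```
import java.util.ServiceLoader;

// Placeholder for the real extension interface; the actual SecurityExtension
// exposes hooks such as custom realms and role providers.
interface SecurityExtension {
}

// Shipped by a plugin and registered in
// META-INF/services/<fully.qualified.SecurityExtension> so ServiceLoader can find it.
class MySecurityExtension implements SecurityExtension {
}

public class SpiLoadingSketch {
    public static void main(String[] args) {
        // The security plugin does roughly this to discover extensions on its classpath,
        // instead of requiring a separate XPackExtension plugin infrastructure.
        for (SecurityExtension extension : ServiceLoader.load(SecurityExtension.class)) {
            System.out.println("Loaded security extension: " + extension.getClass().getName());
        }
    }
}
```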
Original commit: elastic/x-pack-elasticsearch@f39e62a040
This commit moves the source files in x-pack-core to the org.elasticsearch.xpack.core package. This is to prevent issues where we have compile-time success reaching through packages that will cross module boundaries at runtime (due to being in different classloaders). By moving these to a separate package, we have compile-time safety. Follow-ups can consider build-time checking that only this package is defined in x-pack-core, or sealing x-pack-core until modules arrive for us.
Original commit: elastic/x-pack-elasticsearch@232e156e0e
Adds documentation for all of the date time functions using the new
cli-like format extracted from the csv spec. In the process of doing
this I noticed that the `WEEK` function isn't exposed as a function.
This exposes it for consistency.
Relates to elastic/x-pack-elasticsearch#2898
Original commit: elastic/x-pack-elasticsearch@0459b24cb9
I went to write some docs for datetime functions that look like:
```
SELECT YEAR(CAST('2018-01-19T10:23:27Z' AS TIMESTAMP)) as year;
year
2018
```
because I figured they'd be pretty easy to read, since they don't
require any knowledge of a data set. But it turns out that constant
folding doesn't work properly for date time functions because they don't
actually apply the extraction.
Original commit: elastic/x-pack-elasticsearch@aa9c66b2c7
This commit adds the ability to refresh tokens that have been obtained via the API using a refresh
token. Refresh tokens are one-time-use tokens that are valid for 24 hours. The tokens may be used
to get a new access token and refresh token if the refresh token has not been invalidated or
already refreshed.
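A rough sketch of exchanging a refresh token for a new token pair with the low-level REST client; the endpoint path shown is the 6.x one and may differ between versions:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RefreshTokenSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/_xpack/security/oauth2/token");
            // The refresh token value below is a placeholder.
            request.setJsonEntity("{ \"grant_type\": \"refresh_token\", \"refresh_token\": \"<refresh token>\" }");
            Response response = client.performRequest(request);
            // The response contains a fresh access_token and refresh_token pair.
            System.out.println(response.getStatusLine());
        }
    }
}
```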
relates elastic/x-pack-elasticsearch#2595
Original commit: elastic/x-pack-elasticsearch@23435eb815
* Security Realms: Predictable ordering for realms
To make the ordering of realms predictable, realms with the same order
are now sorted secondarily by realm name, resulting in stable, consistent ordering.
The documentation was updated to describe how the ordering of realms is determined.
Testing was done by adding a unit test for the change and running gradle clean check locally.
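For illustration, the comparison described above is essentially the following; the record below is a stand-in for the real realm configuration type:
```
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public class RealmOrderingSketch {
    // Illustrative stand-in for the real realm configuration.
    record RealmConfig(String name, int order) {}

    public static void main(String[] args) {
        List<RealmConfig> sorted = Stream.of(
                        new RealmConfig("ldap1", 1),
                        new RealmConfig("file", 0),
                        new RealmConfig("ldap0", 1))
                // Primary sort on the configured order, secondary sort on realm name.
                .sorted(Comparator.comparingInt(RealmConfig::order).thenComparing(RealmConfig::name))
                .toList();
        // Prints file (order 0), then ldap0 and ldap1 (both order 1, broken by name).
        sorted.forEach(r -> System.out.println(r.name() + " (order " + r.order() + ")"));
    }
}
```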
relates elastic/x-pack-elasticsearch#3403
Original commit: elastic/x-pack-elasticsearch@98c42a8c51
Adds a mode parameter to all SQL-related requests. The mode parameter is used for license checks as well as to define the response content. For now only two modes are supported: plain (the default) and jdbc. We will add other modes in the future as we add more clients.
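A hedged sketch of passing the mode with the low-level REST client; treating `mode` as a URL parameter and using the 6.x `_xpack/sql` endpoint are assumptions based on the description above:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class SqlModeSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/_xpack/sql");
            // "plain" is the default; a JDBC client would send mode=jdbc to get a
            // response shaped (and license-checked) for the JDBC driver.
            request.addParameter("mode", "jdbc");
            request.setJsonEntity("{ \"query\": \"SELECT 1\" }");
            System.out.println(client.performRequest(request).getStatusLine());
        }
    }
}
```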
Relates elastic/x-pack-elasticsearch#3419
Original commit: elastic/x-pack-elasticsearch@b49ca38d4b