This is related to elastic/x-pack-elasticsearch#3877. This commit adds a parameter, type, to the
start_trial API. This parameter allows the user to specify the type of license (trial,
gold, or platinum) that will be generated. No matter which type is chosen, you can only
generate one per major version.
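As a rough sketch of how a client might exercise the new parameter (the host, port, and exact `/_xpack/license/start_trial` path are assumptions here, not taken from this commit):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: call the start_trial endpoint with the new "type" parameter,
// which selects the self-generated license type (trial, gold, or platinum).
public class StartTrialExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_xpack/license/start_trial?type=gold"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```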
Original commit: elastic/x-pack-elasticsearch@b42234cbb5
This commit replaces the usage of Lucene IOUtils with Elasticsearch
IOUtils, the former of which is now forbidden.
Original commit: elastic/x-pack-elasticsearch@8e0554001f
This is related to elastic/x-pack-elasticsearch#3877. This commit adds a route /start_basic that
will self-generate a basic license. The only validation that is
performed is to check that you do not already have a basic license
installed. Additionally, if you lose features from switching to a basic
license, you must acknowledge the changes.
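A similarly hedged sketch of calling the new route; `acknowledge=true` is the confirmation described above for accepting the loss of non-basic features, while host, port, and path are illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: self-generate a basic license, acknowledging feature loss.
public class StartBasicExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_xpack/license/start_basic?acknowledge=true"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```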
Original commit: elastic/x-pack-elasticsearch@7b8eeb50b1
A small bug in the `IndexStatsCollector` can cause it to return
statistics for newly created indices that do not yet exist in the
collector's local `ClusterState` instance.
It happens because an instance of the current `ClusterState` is
captured and passed to all the collectors before they are executed (so
that they all share the same view of the state of the cluster). On
some clusters, if an index is created after the `ClusterState` is
captured but before the `IndicesStatsRequest` is executed, then it can
appear in the index stats but have no corresponding entry in the
local cluster state.
This commit changes the IndexStatsCollector so that it only returns
statistics for indices that already exist in the cluster state. This
way, a consistent view across indices/index/shard stats is possible.
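A minimal sketch of that filtering logic, using plain collections as stand-ins for the real `IndicesStatsResponse` and `ClusterState` types:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative only: keep per-index stats only for indices present in the
// cluster state that was captured for this collection round.
public class IndexStatsFilterSketch {
    static Map<String, Long> filterByClusterState(Map<String, Long> statsByIndex,
                                                  Set<String> indicesInClusterState) {
        return statsByIndex.entrySet().stream()
                .filter(e -> indicesInClusterState.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        // "new-index" was created after the cluster state was captured, so it is dropped
        Map<String, Long> stats = Map.of("logs-1", 1024L, "new-index", 512L);
        Set<String> clusterStateIndices = Set.of("logs-1");
        System.out.println(filterByClusterState(stats, clusterStateIndices)); // {logs-1=1024}
    }
}
```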
Original commit: elastic/x-pack-elasticsearch@da173ae0b0
SslConfiguration can depend on SecureSettings, so it must be
constructed during the correct lifecycle phase.
For PkiRealmBootstrapCheck, the construction of SslConfiguration
objects was moved into the constructor rather than the check method.
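A rough sketch of the lifecycle constraint, with hypothetical stand-in types rather than the actual security classes:

```java
// Illustrative only: anything derived from SecureSettings must be built while the
// settings are still readable (construction time), not later when check() runs.
public class PkiBootstrapCheckSketch {
    interface SslConfig {}                               // stand-in for SslConfiguration
    interface SecureConfigSource { SslConfig load(); }   // stand-in for secure settings access

    static final class PkiRealmBootstrapCheck {
        private final SslConfig sslConfig;               // built eagerly, in the constructor

        PkiRealmBootstrapCheck(SecureConfigSource source) {
            this.sslConfig = source.load();              // secure settings still open here
        }

        boolean check() {
            return sslConfig != null;                    // only inspects the pre-built config
        }
    }

    public static void main(String[] args) {
        PkiRealmBootstrapCheck check = new PkiRealmBootstrapCheck(() -> new SslConfig() {});
        System.out.println(check.check()); // true: config was built at construction time
    }
}
```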
Original commit: elastic/x-pack-elasticsearch@1a4d147216
This is related to elastic/x-pack-elasticsearch#4095. That test uses a basic license in a test
of the put license route. Occasionally, that license is extended (due to
recent work related to indefinite basic licenses) before the test
assertions can be performed. This commit changes the test to use a gold
license.
Original commit: elastic/x-pack-elasticsearch@bf2550f044
This is related to elastic/x-pack-elasticsearch#3877. It modifies self-generated basic licenses to
(practically) never expire. Specifically, self-generated basic licenses
will be set with an expiration date 1 year before Long.MAX_VALUE.
Additionally, basic licenses with a different expiration date will be
replaced with a new self-generated basic license at startup.
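For illustration, the expiration described above works out to the following (class and constant names are made up):

```java
import java.util.concurrent.TimeUnit;

// Sketch of the "practically never expires" expiry: one year (in milliseconds)
// before Long.MAX_VALUE.
public class BasicLicenseExpirySketch {
    public static void main(String[] args) {
        long oneYearMillis = TimeUnit.DAYS.toMillis(365);
        long selfGeneratedBasicExpiryMillis = Long.MAX_VALUE - oneYearMillis;
        System.out.println(selfGeneratedBasicExpiryMillis);
    }
}
```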
Original commit: elastic/x-pack-elasticsearch@de8b343089
This properly registers the `XPackFeatureSetUsage` for Logstash and
tests it by invoking the Usage API in a Monitoring QA test.
Without it being properly registered, the test consistently fails.
Original commit: elastic/x-pack-elasticsearch@2e8f2376fd
We had a Usage class before, but weren't registering it with XPack.
It would be nice to add more usage info in the future (like the running
jobs on each node), but it's unclear what the best way to do that is,
since we'd need to filter through the list of allocated tasks.
Original commit: elastic/x-pack-elasticsearch@5207d2758b
Up to now a job update that reduces the model memory limit
was not allowed. However, there could definitely be cases
where reducing the limit is necessary and reasonable.
This commit makes it possible to decrease the limit as long
as it does not go below the current memory usage. We obtain
the latter from the model size stats.
The conditions under which updating the model_memory_limit
is not allowed are now:
- when the job is open
- when the new value is less than the latest model_size_stats.model_bytes
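An illustrative sketch of that validation (not the actual ML update code):

```java
// Illustrative only: reject the update if the job is open, or if the new limit
// falls below the memory usage reported by the latest model size stats.
public class ModelMemoryLimitUpdateSketch {
    static void validateUpdate(boolean jobIsOpen, long newLimitBytes, long latestModelBytes) {
        if (jobIsOpen) {
            throw new IllegalArgumentException("model_memory_limit cannot be updated while the job is open");
        }
        if (newLimitBytes < latestModelBytes) {
            throw new IllegalArgumentException("model_memory_limit cannot be decreased below the current usage ["
                    + latestModelBytes + " bytes]");
        }
    }

    public static void main(String[] args) {
        // allowed: the new limit is still above the current usage
        validateUpdate(false, 512L * 1024 * 1024, 100L * 1024 * 1024);
        System.out.println("update accepted");
    }
}
```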
relates elastic/x-pack-elasticsearch#2461
Original commit: elastic/x-pack-elasticsearch@5b35923590
This removes the readFrom and writeTo methods from XContentType, instead using
the more generic `readEnum` and `writeEnum` methods. Luckily they are both
encoded exactly the same way, so there is no compatibility layer needed for
backwards compatibility.
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/28927
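Since the commit notes both encodings are identical, here is a plain-Java sketch of ordinal-based enum round-tripping to illustrate the idea (the real StreamInput/StreamOutput write a variable-length int rather than a fixed four bytes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative only: enums serialized by ordinal, the scheme shared by the old
// XContentType readFrom/writeTo and the generic readEnum/writeEnum.
public class EnumWireSketch {
    enum ContentType { JSON, SMILE, YAML, CBOR }

    static byte[] write(ContentType type) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(type.ordinal());                   // encode as the ordinal
        }
        return bytes.toByteArray();
    }

    static ContentType read(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return ContentType.values()[in.readInt()];      // decode by ordinal
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(read(write(ContentType.SMILE))); // SMILE
    }
}
```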
Original commit: elastic/x-pack-elasticsearch@f1c0928490
This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` calls in the enclosing (or a new) try-with-resources block.
This ensures the stream obtained from `BytesReference.streamInput()` is closed.
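The pattern looks roughly like the sketch below, where a `byte[]` stands in for `BytesReference` and `Properties.load` stands in for `createParser`; the point is only that the stream opened from the byte container is closed deterministically:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Illustrative only: open the stream and consume it inside one try-with-resources block.
public class CloseStreamSketch {
    public static void main(String[] args) throws IOException {
        byte[] source = "key=value\n".getBytes();
        Properties parsed = new Properties();
        try (InputStream stream = new ByteArrayInputStream(source)) {   // closed on exit
            parsed.load(stream);                                        // "parser" consumes the stream
        }
        System.out.println(parsed.getProperty("key"));                  // value
    }
}
```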
Relates to elastic/x-pack-elasticsearch#28504
Original commit: elastic/x-pack-elasticsearch@7546e3b4d4
Analysis limits contain settings that affect the resources
used by ML jobs. Those limits always apply. However,
explicitly setting them is not required as they have reasonable
defaults. For a long time those defaults lived on the C++ side:
a job could simply have no explicit limits, which meant the
C++ defaults would be used. This has the disadvantage
that it is not obvious to users what these settings are set to.
Additionally, users might not be aware of the settings' existence.
On top of that, since 6.1 the default model_memory_limit was lowered
from 4GB to 1GB. For BWC, this means that for jobs where model_memory_limit
is null, the default of 4GB applies. Jobs created from 6.1
onwards contain an explicit setting for model_memory_limit, which is
1GB unless the user sets it differently. This adds additional confusion.
This commit makes analysis limits an always explicit setting on the job.
Regardless of whether the user sets custom limits or not, the job object
(and response) will contain the full analysis limits values.
The possibilities for interpretation of missing values are:
- the entire analysis_limits is null: this may only happen for jobs
created prior to 6.1. Thus we set the model_memory_limit to 4GB.
- analysis_limits is non-null but model_memory_limit is null: this also
may only happen for jobs created prior to 6.1. Again, we set the memory
limit to 4GB.
- model_memory_limit is non-null: this either means the user set an
explicit value or the job was created from 6.1 onwards and it has
the explicit default of 1GB. We simply keep the given value.
For categorization_examples_limit the default has always been 4, so
we fill that in when it's missing.
Finally, note that we still need to handle potential null values
for the situation of a mixed cluster.
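An illustrative sketch of those rules (the helper and types are hypothetical, not the actual ML classes):

```java
// Illustrative only: fill in explicit values following the interpretation rules above.
public class AnalysisLimitsDefaultsSketch {
    static final long PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB = 4096; // the old 4GB default
    static final long DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT = 4;

    record AnalysisLimits(Long modelMemoryLimitMb, Long categorizationExamplesLimit) {}

    static AnalysisLimits withExplicitDefaults(AnalysisLimits configured) {
        if (configured == null) {
            // whole object missing: job predates 6.1, so apply the old 4GB default
            return new AnalysisLimits(PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB, DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT);
        }
        long memory = configured.modelMemoryLimitMb() != null
                ? configured.modelMemoryLimitMb()            // explicit value, or the 6.1+ default of 1GB
                : PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB;     // pre-6.1 job: apply 4GB
        long examples = configured.categorizationExamplesLimit() != null
                ? configured.categorizationExamplesLimit()
                : DEFAULT_CATEGORIZATION_EXAMPLES_LIMIT;
        return new AnalysisLimits(memory, examples);
    }

    public static void main(String[] args) {
        System.out.println(withExplicitDefaults(null));                            // 4096 MB, 4 examples
        System.out.println(withExplicitDefaults(new AnalysisLimits(1024L, null))); // keeps 1024 MB
    }
}
```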
Original commit: elastic/x-pack-elasticsearch@5b6994ef75
* Pass InputStream when creating XContent parser
Rather than passing the raw `BytesReference` in when creating the xcontent
parser, this passes the StreamInput (which is an InputStream), which allows
us to decouple XContent from BytesReference.
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/28754
* Use the streamInput variant, not sourceAsString
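A minimal sketch of the decoupling idea, with illustrative stand-in types: a parser factory that accepts any `InputStream` has no compile-time dependency on a particular byte container such as `BytesReference`:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative stand-ins only: byte[], file, or network sources can all feed the parser.
public class ParserDecouplingSketch {
    interface Parser { String next() throws IOException; }

    // accepts any InputStream, not a concrete buffer type
    static Parser createParser(InputStream in) {
        return () -> {
            int b = in.read();
            return b == -1 ? null : String.valueOf((char) b);
        };
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new ByteArrayInputStream("{}".getBytes())) {
            System.out.println(createParser(in).next()); // {
        }
    }
}
```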
Original commit: elastic/x-pack-elasticsearch@dd5d8b1654
This adds a new Rollup module to XPack, which allows users to configure periodic "rollup jobs" to pre-aggregate data. That data is then available later for search through a special RollupSearch API, which mimics the DSL and functionality of regular search.
Rollups are used to drastically reduce the on-disk footprint of metric-based data (e.g. timestamped documents with numeric and keyword fields). They can also be used to speed up aggregations over large datasets, since the rolled-up data will be considerably smaller, with fewer documents to search.
The PR adds seven new endpoints to interact with Rollups; create/get/delete job, start/stop job, a capabilities API similar to field-caps, and a Rollup-enabled search.
Original commit: elastic/x-pack-elasticsearch@dcde91aacf
... yet support updates. This commit introduces a few changes to how
watches are put.
The Get Watch API now never returns credentials like basic auth
passwords, but a placeholder instead. If watcher is configured to
encrypt sensitive settings, then the original encrypted value is
returned; otherwise a "::es_redacted::" placeholder is returned.
There have been several Put Watch API changes.
The API now internally uses the Update API and versioning. This has
several implications. First, if no version is supplied, we assume an
initial creation. This works as before; however, if a credential is
marked as redacted we reject storing the watch, so users do not
accidentally store the wrong watch.
The watch xcontent parser now has an additional method to tell the
caller if redacted passwords have been found. Based on this information
an error can be thrown.
If the user now wants to store a watch that contains a password marked
as redacted, this password will not be part of the toXContent
representation of the watch, and in combination with the update request
the existing password will be merged in. If the encrypted password is
supplied, that one will be stored.
The serialization for GetWatchResponse/PutWatchRequest has changed.
The version checks for this will be put into the 6.x branch.
The Watcher UI now needs to specify the version when it wants to store a
watch. This also prevents last-write-wins scenarios and is the reason
why the put/get watch response now contains the internal version.
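A sketch of the merge rule on the update path, using maps as stand-ins for the watch source (the placeholder string is the one returned by the Get Watch API; everything else is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: fields submitted with the "::es_redacted::" placeholder keep the
// previously stored (possibly encrypted) value instead of overwriting it.
public class RedactedSecretMergeSketch {
    static final String REDACTED = "::es_redacted::";

    static Map<String, String> merge(Map<String, String> existing, Map<String, String> submitted) {
        Map<String, String> merged = new HashMap<>(submitted);
        merged.replaceAll((field, value) ->
                REDACTED.equals(value) ? existing.getOrDefault(field, value) : value);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> existing = Map.of("auth.basic.password", "s3cr3t");
        Map<String, String> submitted = Map.of("auth.basic.password", REDACTED, "url", "http://example.org");
        System.out.println(merge(existing, submitted)); // password restored from the stored watch
    }
}
```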
relates elastic/x-pack-elasticsearch#3089
Original commit: elastic/x-pack-elasticsearch@bb63be9f79
* Additional settings for SAML NameID policy
We should not be populating SPNameQualifier by default as it is
intended to be used to specify an alternate SP EntityID rather than
our own. Some IdPs (ADFS) fail when presented with this value.
This commit
- makes the SPNameQualifier a setting that defaults to blank
- adds a setting for "AllowCreate"
- documents the above
Original commit: elastic/x-pack-elasticsearch@093557e88f
This blocks incoming requests from Kibana, Logstash, and Beats when X-Pack monitoring is effectively disabled by setting `xpack.monitoring.collection.interval: -1`.
Original commit: elastic/x-pack-elasticsearch@016a9472f1
We were missing a notification for when a job is updated. This is
useful so users know that there have been changes which could justify
a change in the job's behaviour.
In addition, having those notifications allows our integration
tests to know when the update was processed, which avoids having
to use `sleep()` with its instabilities.
Original commit: elastic/x-pack-elasticsearch@0b4eda2232
* Add fields to `.logstash`'s mapping in template
This "makes room" in the index for pipeline settings and node groups. Due to this change, users will be able to specify settings and node groups for a pipeline via the Centralized Config Management UI in Kibana. Logstash will only retrieve pipelines associated with the node group specified via the `xpack.management.group.id` setting in `logstash.yml`. For the retrieved pipelines, Logstash will apply any (optionally) specified pipeline settings before (re)loading the pipelines.
* Making field name more explicit + adding multi field for better search
Original commit: elastic/x-pack-elasticsearch@2df101f0b1
Since elastic/x-pack-elasticsearch#3254, security headers have been stored in datafeed cluster state
to allow the datafeed to run searches using the credentials of the user
who created/updated it. As a result the parser was changed to read the
"headers" field so that cluster state could be reloaded. However, this
meant that datafeed configs could be submitted with a "headers" field.
No security loophole arose from this, as subsequent code overwrites the
contents of any supplied headers. But it could be confusing that an
erroneously supplied field did not cause a parse failure as it usually
would.
This change makes the config parser for datafeeds reject a "headers"
field. Now only the metadata parser used for reloading cluster state
will read a "headers" field.
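A sketch of the two-parser split, using plain maps instead of the real ObjectParser-based config classes:

```java
import java.util.Map;
import java.util.Set;

// Illustrative only: the strict config parser rejects a user-supplied "headers" field,
// while the lenient parser used when reloading cluster state accepts it.
public class DatafeedHeadersParserSketch {
    static Map<String, Object> parseConfig(Map<String, Object> source) {
        if (source.containsKey("headers")) {
            throw new IllegalArgumentException("[headers] may not be supplied in a datafeed config");
        }
        return source;
    }

    static Map<String, Object> parseFromClusterState(Map<String, Object> source) {
        return source; // "headers" is allowed here so persisted state can be reloaded
    }

    public static void main(String[] args) {
        Map<String, Object> fromClusterState = Map.of("indices", Set.of("logs-*"), "headers", Map.of());
        parseFromClusterState(fromClusterState);       // fine: metadata parser accepts headers
        try {
            parseConfig(fromClusterState);             // strict parser rejects headers
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```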
Original commit: elastic/x-pack-elasticsearch@afa503275f
java.time features its own halted clock, called a fixed clock, so we can
use that one.
On top of that, the watcher xcontent parser does not need a clock at all,
just a timestamp of when parsing happened.
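For reference, the fixed clock from java.time:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// java.time's built-in "halted" clock: Clock.fixed always returns the same instant.
public class FixedClockExample {
    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.parse("2018-04-01T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(fixed.instant()); // always 2018-04-01T00:00:00Z
        System.out.println(fixed.instant()); // same instant, no matter when it is called
    }
}
```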
Original commit: elastic/x-pack-elasticsearch@2061aeffe1
The api jar was added for xpack extensions. However, extensions have
been removed in favor of using SPI, and the individual xpack jars like
core and security are published to enable this. This commit removes the
api jar, and switches the transport client to use the core jar (which
the api jar was just a rename of).
Original commit: elastic/x-pack-elasticsearch@58e069e66c