- Add the name and type of both the authentication realm and the lookup realm to the response of the _authenticate API
- When no lookup realm is used, the authentication realm is also set as the lookup realm (instead of leaving the lookup realm null or empty).
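A sketch of the enriched response (field names as introduced by this change; the user and realm shown are illustrative):

    GET /_xpack/security/_authenticate

    {
      "username": "jacknich",
      "roles": [ "admin" ],
      "enabled": true,
      "authentication_realm": { "name": "native1", "type": "native" },
      "lookup_realm": { "name": "native1", "type": "native" }
    }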
PR #35242 formalised support for the password_hash field in the body
of the Put User security API.
Since this field is now validated and tested, it can also be
documented.
The Put User API also supports a "refresh" query parameter that was
not documented. This commit adds it to the docs.
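A hedged sketch of both additions (the hash is a placeholder and must match the configured password hashing algorithm; `refresh` accepts the usual true/false/wait_for values):

    PUT /_xpack/security/user/jacknich?refresh=wait_for
    {
      "password_hash": "$2a$10$placeholder.hash.value",
      "roles": [ "admin" ]
    }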
This documents how to include search queries in the audit log.
There is a catch: even with `emit_request_body` enabled, which should
output queries included in request bodies, search queries were not output
because, by default, no REST layer audit event type was included.
This folk knowledge is herein imprinted.
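A minimal elasticsearch.yml sketch for 6.x, assuming `authentication_success` is the REST layer event type that carries the request body:

    xpack.security.audit.enabled: true
    xpack.security.audit.logfile.events.emit_request_body: true
    xpack.security.audit.logfile.events.include: authentication_success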
This moves all Realm settings to an Affix definition.
However, because different realm types define different settings
(potentially conflicting settings) this requires that the realm type
become part of the setting key.
Thus, we now need to define realm settings as:
xpack.security.authc.realms:
  file.file1:
    order: 0
  native.native1:
    order: 1
- This is a breaking change to realm config
- This is also a breaking change to custom security realms (SecurityExtension)
* Watcher: fix metric stats names
The current watcher stats metric names don't match the current
documentation. This commit fixes the behavior of the `queued_watches`
metric, deprecates the `pending_watches` metric and adds `current_watches`
to match the documented behavior. It also fixes the documentation, which
introduced an `executing_watches` metric that was never added.
Fixes #34865
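A hedged sketch of asking for the renamed metrics (metric filtering as supported by the watcher stats API):

    GET _xpack/watcher/stats?metric=current_watches,queued_watches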
Documents the new structured logfile format for auditing
that was introduced by #31931. Most changes herein
are for 6.x. In 7.0 the deprecated format is gone and a
follow-up PR is in order.
Right now, watches fail at runtime when invalid email addresses are
used.
All of these fields can be checked at parse time, as long as no mustache
template is used in any email address. In that case we can return
immediate feedback that invalid email addresses must not be specified
when trying to store a watch.
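A sketch of a watch that would now be rejected at parse time rather than at execution time (watch id, action name, and address are illustrative):

    PUT _xpack/watcher/watch/log_alert
    {
      "trigger": { "schedule": { "interval": "10m" } },
      "actions": {
        "notify_admin": {
          "email": {
            "to": "not-a-valid-address",
            "subject": "Something happened"
          }
        }
      }
    }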
This enables Elasticsearch to use the JVM-wide configured
PKCS#11 token as a keystore or a truststore for its TLS configuration.
The JVM is assumed to be configured accordingly with the appropriate
Security Provider implementation that supports PKCS#11 tokens.
For the PKCS#11 token to be used as a keystore or a truststore for an
SSLConfiguration, the `.keystore.type` or `.truststore.type` must be
explicitly set to `pkcs11` in the configuration.
The fact that the PKCS#11 token configuration is JVM wide implies that
there is only one available keystore and truststore that can be used by TLS
configurations in Elasticsearch.
The PIN for the PKCS#11 token can be set as a truststore parameter in
Elasticsearch or as a JVM parameter (-Djavax.net.ssl.trustStorePassword).
The basic goal of enabling PKCS#11 token support is to allow PKCS#11-NSS in
FIPS mode to be used as a FIPS 140-2 enabled Security Provider.
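A minimal elasticsearch.yml sketch, assuming the HTTP SSL context (any SSLConfiguration prefix works the same way):

    xpack.security.http.ssl.keystore.type: pkcs11
    xpack.security.http.ssl.truststore.type: pkcs11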
We have a Kerberos setting to remove the realm part from the user
principal name (remove_realm_name). If this is true, the realm name is
stripped off to form the username, but in the process the realm name is
lost. For scenarios like Kerberos cross-realm authentication, one could
use the realm name to determine role mappings for users coming from
different realms.
This commit adds the kerberos_realm and kerberos_user_principal_name
fields to the user metadata.
This change removes the wrapping of the created field in the put user
response. The created field was added as a top level field in #32332,
while also still being wrapped within the `user` object of the
response. Since the value is available in both formats in 6.x, we can
remove the wrapped version for 7.0.
With features like CCR building on the CCS infrastructure, the settings
prefix `search.remote` makes less sense as the namespace for these remote
cluster settings than does a more general namespace like
`cluster.remote`. This commit replaces these settings with `cluster.remote`
with a fallback to the deprecated settings `search.remote`.
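A before/after sketch for a remote cluster named cluster_one (seed address illustrative):

    # deprecated
    search.remote.cluster_one.seeds: [ "127.0.0.1:9300" ]
    # replacement
    cluster.remote.cluster_one.seeds: [ "127.0.0.1:9300" ]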
This commit adds a security client to the high level rest client, which
includes an implementation for the put user API. As part of these
changes, new request and response classes have been added that are
specific to the high level rest client. One change here is that the
response was previously wrapped inside a user object. The plan is to
remove this wrapping, and this PR adds an unwrapped response outside of
the user object so we can remove the user object later on.
See #29827
Authorization Realms allow an authenticating realm to delegate the task
of constructing a User object (with name, roles, etc) to one or more
other realms.
For example, a client could authenticate using PKI, but then delegate to an LDAP
realm. The LDAP realm performs a "lookup" by principal, and then does
regular role-mapping from the discovered user.
This commit includes:
- authorization_realm support in the pki, ldap, saml & kerberos realms
- docs for authorization_realms
- checks that there are no "authorization chains"
(whereby "realm-a" delegates to "realm-b", but "realm-b" delegates to "realm-c")
Authorization realms are a platinum feature.
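A hedged configuration sketch (realm names pki1 and ldap1 are illustrative; shown in the pre-7.0 flat settings format):

    xpack.security.authc.realms.pki1.type: pki
    xpack.security.authc.realms.pki1.authorization_realms: ldap1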
We need to limit the search request aggregations to whole multiples
of the configured interval for both histogram and date_histogram.
Otherwise, agg buckets won't overlap with the rolled up buckets
and the results will be incorrect.
For histogram, the validation is very simple: the request interval must
be >= the configured interval, and must be an even multiple of it.
Dates are more tricky.
- If both request and config are fixed dates, we can convert to millis
and treat them just like the histo
- If both are calendar, we make sure the request is >= the config with
a static lookup map that ranks the calendar values relatively. All
calendar units are "singles", so they are evenly divisible already
- We disallow any other combination (one fixed, one calendar, etc)
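For example, with a job configured on a fixed 60m interval, a rollup search like the following is accepted because 3h converts to millis and divides evenly, while a 90m request would be rejected (index and field names are illustrative):

    GET /sensor_rollup/_rollup_search
    {
      "size": 0,
      "aggs": {
        "by_time": {
          "date_histogram": { "field": "timestamp", "interval": "3h" }
        }
      }
    }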
This change adds support for the client credentials grant type to the
token api. The client credentials grant allows for a client to
authenticate with the authorization server and obtain a token to access
as itself. Per RFC 6749, a refresh token should not be included with
the access token and as such a refresh token is not issued when the
client credentials grant is used.
The addition of the client credentials grant will allow users
authenticated with mechanisms such as kerberos or PKI to obtain a token
that can be used for subsequent access.
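A sketch of the new grant type against the 6.x token endpoint:

    POST /_xpack/security/oauth2/token
    {
      "grant_type": "client_credentials"
    }

Per the above, the response carries an access_token and its expiry but no refresh_token.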
* Adding new MonitoredSystem for APM server
* Teaching Monitoring template utils about APM server monitoring indices
* Documenting new monitoring index for APM server
* Adding monitoring index template for APM server
* Copy pasta typo
* Removing metrics.libbeat.config section from mapping
* Adding built-in user and role for APM server user
* Actually define the role :)
* Adding missing import
* Removing index template and system ID for apm server
* Shortening line lengths
* Updating expected number of built-in users in integration test
* Removing "system" from role and user names
* Rearranging users to make tests pass
If a search request doesn't contain aggs (or an empty agg object),
we should just return an empty response. This is how the normal search
API works if you specify zero hits and empty aggs.
The existing behavior throws an exception because it tries to send
an empty msearch.
Closes #32256
Add documentation for #31238
- Add documentation for the req_authn_context_class_ref setting
- Add a section in SAML Guide regarding the use of SAML
Authentication Context.
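A hedged sketch of the new setting in a SAML realm (realm name and context class value are illustrative; pre-7.0 settings format):

    xpack.security.authc.realms.saml1.req_authn_context_class_ref: "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport"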
* Add relevant documentation for FIPS 140-2 compliance.
* Introduce `fips_mode` setting.
* Discuss necessary configuration for FIPS 140-2
* Discuss limitations introduced by FIPS 140-2
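A minimal sketch, assuming the setting lives under the security namespace:

    xpack.security.fips_mode.enabled: true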
* [DOCS] Add configurable password hashing docs
Adds documentation about the newly introduced configuration option
for setting the password hashing algorithm to be used for the users
cache and for storing credentials for the native and file realms.
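A one-line elasticsearch.yml sketch (pbkdf2 as an example value):

    xpack.security.authc.password_hashing.algorithm: pbkdf2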
This commit adds a troubleshooting section for Kerberos.
Most of the time, the problems seen are caused by invalid
configurations, like keytabs missing principals or credentials
that are not up to date. Time synchronization is an important part of
Kerberos infrastructure, and time skew can cause problems.
To aid further debugging, the documentation explains how to enable JAAS
Kerberos login module debugging and Kerberos/SPNEGO debugging
by setting JVM system properties.
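A sketch of the standard JDK debug flags involved (set, for example, in jvm.options):

    -Dsun.security.krb5.debug=true
    -Dsun.security.spnego.debug=true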
This commit adds documentation for configuring a Kerberos realm.
The documentation highlights important terminology and requirements
before creating a Kerberos realm. Most of the documentation is centered
around configuration in Elasticsearch rather than going deep into the
Kerberos implementation. Kerberos realm settings are listed in the
security settings documentation.
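A hedged realm sketch (realm name and keytab path are illustrative; pre-7.0 settings format):

    xpack.security.authc.realms.kerb1:
      type: kerberos
      order: 3
      keytab.path: es.keytab
      remove_realm_name: false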
- Missing links to new IndexCaps API
- Incorrect security permissions on IndexCaps API
- GetJobs API must supply a job (or `_all`); omitting it throws an error
- Link to search/agg limitations from RollupSearch API
- Tweak URLs in quick reference
- Formatting of overview page
Previously, we were using a simple CRC32 for the IDs of rollup documents.
This is a very poor choice however, since 32bit IDs leads to collisions
between documents very quickly.
This commit moves Rollups over to a 128bit ID. The ID is a concatenation
of all the keys in the document (similar to the rolling CRC before),
hashed with 128bit Murmur3, then base64 encoded. Finally, the job
ID and a delimiter (`$`) are prepended to the ID.
This guarantees that there are 128 bits per job. 128 bits should
essentially remove all chance of collisions, and the prepended
job ID means that _if_ there is a collision, it stays "within"
the job.
BWC notes:
We can only upgrade the ID scheme after we know there has been a good
checkpoint during indexing. We don't rely on a STARTED/STOPPED
status since we can't guarantee that resulted from a real checkpoint,
or other state. So we only upgrade the ID after we have reached
a checkpoint state during an active index run, and only after the
checkpoint has been confirmed.
Once a job has been upgraded and checkpointed, the version increments
and the new ID is used in the future. All new jobs use the
new ID from the start
This commit introduces "Application Privileges" to the X-Pack security
model.
Application Privileges are managed within Elasticsearch, and can be
tested with the _has_privileges API, but do not grant access to any
actions or resources within Elasticsearch. Their purpose is to allow
applications outside of Elasticsearch to represent and store their own
privileges model within Elasticsearch roles.
Access to manage application privileges is handled in a new way that
grants permission to specific application names only. This lays the
foundation for more OLS on cluster privileges, which is implemented by
allowing a cluster permission to inspect not just the action being
executed, but also the request to which the action is applied.
To support this, a "conditional cluster privilege" is introduced, which
is like the existing cluster privilege, except that it has a Predicate
over the request as well as over the action name.
Specifically, this adds
- GET/PUT/DELETE actions for defining application level privileges
- application privileges in role definitions
- application privileges in the has_privileges API
- changes to the cluster permission class to support checking of request
objects
- a new "global" element on role definition to provide cluster object
level security (only for manage application privileges)
- changes to `kibana_user`, `kibana_dashboard_only_user` and
`kibana_system` roles to use and manage application privileges
Closes #29820, closes #31559
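A hedged sketch of defining and granting an application privilege (application name, privilege name, and role name are illustrative; 6.x endpoints):

    PUT /_xpack/security/privilege
    {
      "myapp": {
        "read": { "actions": [ "data:read/*" ] }
      }
    }

    PUT /_xpack/security/role/myapp_reader
    {
      "applications": [
        {
          "application": "myapp",
          "privileges": [ "read" ],
          "resources": [ "*" ]
        }
      ]
    }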
Resolving wildcards in alias expressions is challenging, as we may end
up with no aliases to replace the original expression with, but if we
replace it with an empty array that means _all, which is quite the opposite.
Now that we support and serialize the original requested aliases,
whenever aliases are replaced we will be able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything in case it gets empty aliases although the original
aliases were not empty. That means that empty aliases are interpreted
as _all only if they were originally requested that way.
Relates to #31516
Remove references to the `platinum` image and add a self-generated trial
licence to the example for TLS on Docker.
Fixes elastic/elasticsearch-docker#176
This introduces a new GetRollupIndexCaps API which allows the user to retrieve rollup capabilities of a specific rollup index (or index pattern). This is distinct from the existing RollupCaps endpoint.
- Multiple jobs can be stored in multiple indices and point to a single target data index pattern (logstash-*). The existing API finds capabilities/config of all jobs matching that data index pattern.
- One rollup index can hold data from multiple jobs, targeting multiple data index patterns. This new API finds the capabilities based on the concrete rollup indices.
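A hedged usage sketch against a concrete rollup index (index name illustrative; 6.x endpoint layout assumed):

    GET /sensor_rollup/_xpack/rollup/data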
There is currently no way to see what user executed a watch. This commit
adds the decrypted username to each execution in the watch history, in a
new field "user".
Closes #31772
We can leverage the composite agg's new `missing_bucket` feature on
terms groupings. This means the aggregation criteria used in the indexer
will now return null buckets for missing keys.
Because all buckets are now returned (even if a key is null),
we can guarantee correct doc counts with
"combined" jobs (where a job rolls up multiple schemas). This was
previously impossible since composite would ignore documents that
didn't have _all_ the keys, meaning non-overlapping schemas would
cause composite to return no buckets.
Note: date_histo does not use `missing_bucket`, since a timestamp is
always required.
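A self-contained sketch of the composite terms source the indexer now builds (index, agg, and field names illustrative):

    GET /sensor-*/_search
    {
      "size": 0,
      "aggs": {
        "rollup_groups": {
          "composite": {
            "sources": [
              { "hostname": { "terms": { "field": "hostname", "missing_bucket": true } } }
            ]
          }
        }
      }
    }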
The docs have been adjusted to recommend a single, combined job. They
also reference the previous issue to help users that are upgrading
(rather than just deleting the sections).
This change adds stats about forecasts to the job stats API as well as
_xpack/usage. The following information is collected (a retrieval
sketch follows the lists below):
_xpack/ml/anomaly_detectors/{jobid|_all}/_stats:
- total number of forecasts
- memory statistics (mean/min/max)
- runtime statistics
- record statistics
- counts by status
_xpack/usage
- collected by job status as well as overall (_all):
- total number of forecasts
- number of jobs that have at least 1 forecast
- memory, runtime, record statistics
- counts by status
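A sketch of where the new numbers surface, assuming they land in a forecasts_stats section of each job's stats:

    GET _xpack/ml/anomaly_detectors/_all/_stats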
Fixes #31395
This is related to #31325. There is currently information about the
get-trial-status api on the start-trial api documentation page. It also
has the incorrect route for that API. This commit removes that
information, as the start-trial page properly links to a page providing
documentation about get-trial-status.
- All rollup pages should be marked as experimental instead of just
the top page
- While the job config docs state which aggregations are allowed, adding
a section which specifically details this in one place is more convenient
for the user
- Add a clarification that the DeleteJob API does not delete the rollup
data, just the rollup job.
Although elasticsearch-certutil generates PKCS#12
files which are usable as both keystore and truststore,
this is uncommon in practice. Set these expectations
for users following our security guides.
This commit adds the ability to configure how a docvalue field should be
formatted, so that it would be possible eg. to return a date field
formatted as the number of milliseconds since Epoch.
Closes #27740
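A sketch of the new per-field format (field name illustrative):

    GET /_search
    {
      "docvalue_fields": [
        { "field": "timestamp", "format": "epoch_millis" }
      ]
    }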
Forecasting now checks the available diskspace and creates a subfolder
for storing data outside of Lucene indexes, but as part of the ES data paths.
Details:
- tmp storage is managed and does not allow allocation if disk space is
below a threshold (5GB at the moment)
- tmp storage is supposed to be managed by the native component but in
case this fails cleanup is provided:
- on job close
- on process crash
- after node crash, on restart
- available space is re-checked for every forecast call (the native
component has to check again before writing)
Note: the first path that has enough space is chosen on job open (job
close/reopen triggers a new search)
This change adds a grok_pattern field to the GET categories API
output in ML. It's calculated using the regex and examples in the
categorization result, and applying a list of candidate Grok
patterns to the bits in between the tokens that are considered to
define the category.
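A sketch of where the new field appears (job id illustrative); each returned category now carries a grok_pattern alongside its regex and examples:

    GET _xpack/ml/anomaly_detectors/it_ops_app/results/categories
    {
      "page": { "size": 1 }
    }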
This can currently be considered a prototype, as the Grok patterns
it produces are not optimal. However, enough people have said it
would be useful for it to be worthwhile exposing it as experimental
functionality for interested parties to try out.
This commit removes the unnecessary transport_client cluster permission
from the role that is used as an example in our documentation. This
permission is not needed to use cross cluster search.
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
This is related to #30134. It modifies the start_trial action to require
an acknowledgement parameter in the rest request to actually start the
trial license. There are backwards compatibility issues, as prior ES
versions did not support this parameter. To handle this, it is assumed
that a request coming from a node prior to 6.3 is acknowledged, and
attempts to write a non-acknowledged request to a pre-6.3 node will
throw an exception.
Additionally this PR adds messages about the trial license the user is
generating.
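A sketch of an acknowledged request (6.x endpoint):

    POST /_xpack/license/start_trial?acknowledge=true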