If someone deletes the watch index (e.g. by deleting all indices), the watcher
in-memory store still contains all the watches and keeps trying to execute them,
which results in exceptions because the watches themselves can no longer be updated.
In order to minimize this problem (it cannot be eliminated completely), we should
act accordingly if the watch index goes missing (either deleted or closed):
clear out the in-memory representation of watches in the watch store and try
to finish all the current executions.
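As a rough sketch of the intended behavior (assuming a cluster state listener hook; WatchStore, ExecutionService, their method names, and the `.watches` index name are illustrative stand-ins, not the actual watcher internals):

```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.metadata.IndexMetaData;

// Hypothetical interfaces standing in for the real watcher components.
interface WatchStore {
    void clearWatches();
}

interface ExecutionService {
    void finishCurrentExecutions();
}

public class WatchIndexGoneListener implements ClusterStateListener {

    private static final String WATCH_INDEX = ".watches"; // assumed index name

    private final WatchStore watchStore;
    private final ExecutionService executionService;

    public WatchIndexGoneListener(WatchStore watchStore, ExecutionService executionService) {
        this.watchStore = watchStore;
        this.executionService = executionService;
    }

    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        IndexMetaData watchIndex = event.state().metaData().index(WATCH_INDEX);
        boolean missingOrClosed = watchIndex == null || watchIndex.getState() == IndexMetaData.State.CLOSE;
        if (missingOrClosed) {
            watchStore.clearWatches();                  // drop the in-memory watches
            executionService.finishCurrentExecutions(); // let running executions wind down
        }
    }
}
```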
Closes elastic/elasticsearch#2794
Original commit: elastic/x-pack-elasticsearch@12d98cd566
This change moves the logfile audit output from determining what to log based on the
logger level to the enum-based configuration that is used by the index output.
A few notable changes were made:
* We always log all the information we have, except for the request body
* The request body is no longer logged by default for REST events; the user needs to
explicitly opt in as there could be sensitive data in the body
* Added a `realm_authentication_failed` event that separates overall authentication
failure from that of an individual realm
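For illustration, a minimal plain-Java sketch of the enum-based filtering idea, where the emitted events come from a configured include set rather than the logger level (all names here are hypothetical, not the real audit classes):

```java
import java.util.Locale;
import java.util.Set;

// Hypothetical event types; the real event names live in the x-pack audit code.
enum AuditEventType {
    AUTHENTICATION_SUCCESS,
    AUTHENTICATION_FAILED,
    REALM_AUTHENTICATION_FAILED, // the newly separated per-realm failure event
    ACCESS_GRANTED,
    ACCESS_DENIED
}

class LoggingAuditTrail {
    private final Set<AuditEventType> includedEvents;
    private final boolean emitRequestBody; // off by default: the body may contain sensitive data

    LoggingAuditTrail(Set<AuditEventType> includedEvents, boolean emitRequestBody) {
        this.includedEvents = includedEvents;
        this.emitRequestBody = emitRequestBody;
    }

    void maybeLog(AuditEventType type, String details, String requestBody) {
        if (includedEvents.contains(type) == false) {
            return; // filtered out by configuration, not by logger level
        }
        String line = String.format(Locale.ROOT, "[%s] %s", type, details);
        if (emitRequestBody && requestBody != null) {
            line += ", body=" + requestBody;
        }
        System.out.println(line); // stand-in for the real log appender
    }
}
```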
Original commit: elastic/x-pack-elasticsearch@343a2bcdd9
This change adds support for disabling users. Users can be disabled by setting the enabled
property to false and the AuthenticationService will check to make sure that the user is enabled.
If the user is not enabled, this will be audited as an authentication failure.
Also as part of this work, the AnonymousUser was cleaned up to remove the static instance
that caused issues with tests.
Finally, the poller of users was removed to simplify the code in the NativeUsersStore. In our other
realms we rely on the clear cache APIs and the timeout of the user cache. We should have the
same semantics for the native realm.
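A minimal sketch of the enabled check described above, with simplified stand-ins for the real User, AuditTrail, and AuthenticationService types:

```java
// Illustrative only: these classes stand in for the actual x-pack security types.
final class User {
    final String principal;
    final boolean enabled;

    User(String principal, boolean enabled) {
        this.principal = principal;
        this.enabled = enabled;
    }
}

interface AuditTrail {
    void authenticationFailed(String principal);
}

final class AuthenticationService {
    private final AuditTrail auditTrail;

    AuthenticationService(AuditTrail auditTrail) {
        this.auditTrail = auditTrail;
    }

    User authenticate(User user) {
        if (user.enabled == false) {
            // a disabled user is treated (and audited) as an authentication failure
            auditTrail.authenticationFailed(user.principal);
            throw new SecurityException("user [" + user.principal + "] is disabled");
        }
        return user;
    }
}
```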
Closes elastic/elasticsearch#2172
Original commit: elastic/x-pack-elasticsearch@0820e40183
This rewrites the HTTP Exporter to use the REST client underneath. Functionality is improved around resource blocking (checking that templates and pipelines exist), and the majority of the code is fundamentally simplified by removing direct HTTP calls.
This is blocked by the SSLService pull request. After that is merged, I will update this PR to reflect those changes, and it could possibly allow us to remove the security privileges required for monitoring.
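As a hedged sketch, roughly what a resource check could look like with the low-level REST client of that era (the `performRequest(method, endpoint)` variant); the template name and surrounding structure are illustrative, not the actual exporter code:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

public class TemplateCheck {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            boolean exists;
            try {
                // GET the template; 200 means it is already installed
                Response response = client.performRequest("GET", "/_template/monitoring-data");
                exists = response.getStatusLine().getStatusCode() == 200;
            } catch (ResponseException e) {
                if (e.getResponse().getStatusLine().getStatusCode() == 404) {
                    exists = false; // missing: the exporter would publish it before sending data
                } else {
                    throw e;
                }
            }
            System.out.println("template installed: " + exists);
        }
    }
}
```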
Original commit: elastic/x-pack-elasticsearch@1ad25f17f8
Basic backwards compatibility support for watcher.
Closes elastic/elasticsearch#3230
Relates to elastic/elasticsearch#3231 - this should actually fix all the failures caused
by fractional time values, but it does so by being able to parse them.
Being able to parse them is important for 2.x compatibility, but 5.0
watches shouldn't produce fractional time values. This fixes the
particular way of producing fractional time values mentioned in elastic/elasticsearch#3231,
but I expect there are a half dozen more places to fix. The actual
watcher tests are fairly basic.
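A standalone sketch of the kind of lenient parsing involved (not the actual watcher parser): fractional values such as "1.5s" coming from 2.x watches are accepted and rounded to whole milliseconds:

```java
import java.util.Locale;
import java.util.concurrent.TimeUnit;

public final class LenientTimeValueParser {

    // Parses strings like "10s", "1.5s" or "2.25m" into milliseconds,
    // accepting the fractional values that 2.x watches could produce.
    public static long parseToMillis(String text) {
        String trimmed = text.trim().toLowerCase(Locale.ROOT);
        String number;
        long unitMillis;
        if (trimmed.endsWith("ms")) {
            number = trimmed.substring(0, trimmed.length() - 2);
            unitMillis = 1;
        } else if (trimmed.endsWith("s")) {
            number = trimmed.substring(0, trimmed.length() - 1);
            unitMillis = TimeUnit.SECONDS.toMillis(1);
        } else if (trimmed.endsWith("m")) {
            number = trimmed.substring(0, trimmed.length() - 1);
            unitMillis = TimeUnit.MINUTES.toMillis(1);
        } else if (trimmed.endsWith("h")) {
            number = trimmed.substring(0, trimmed.length() - 1);
            unitMillis = TimeUnit.HOURS.toMillis(1);
        } else {
            throw new IllegalArgumentException("unknown time unit in [" + text + "]");
        }
        // fractional values are allowed on read, but rounded to whole milliseconds
        return Math.round(Double.parseDouble(number) * unitMillis);
    }

    public static void main(String[] args) {
        System.out.println(parseToMillis("1.5s"));  // 1500
        System.out.println(parseToMillis("2.25m")); // 135000
    }
}
```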
Original commit: elastic/x-pack-elasticsearch@328717455c
This publishes X-Pack usage data to the cluster info from the elected master node. This allows phone home to retrieve this data from the index, rather than fetching it live from the connected cluster (and thereby missing any of the n - 1 clusters that are not connected).
Original commit: elastic/x-pack-elasticsearch@79bfaaaf0b
This removes the "agent" package from org.elasticsearch.xpack.monitoring.agent.*, so that now everything is simply org.elasticsearch.xpack.monitoring.*.
Follow-on work will be refactoring some of the other code, but this is a first step now that it's always the agent (in effect).
Original commit: elastic/x-pack-elasticsearch@14025cb17c
This change migrates xpack (security, watcher, and monitoring) to use the common ssl
configuration for the elastic stack. As part of this work, several aspects of how we deal
with SSL have been modified.
From a functionality perspective, an xpack-wide configuration for SSL was added and
all of the code that needs SSL now uses the SSLService. The following is a list of all
of the aspects of xpack that can have their own SSL configuration, separate
from the xpack-wide configuration:
* Transport
* Transport profiles
* HTTP Transport
* Realms
* Monitoring Exporters
* HTTP Client
In terms of the code, some cleanups were made with these changes. SSLConfiguration is
now a concrete class and SSLConfiguration.Custom and SSLConfiguration.Global have been
removed. The validate method on key and trust configurations has been removed and these
classes will now throw exceptions when they are constructed with bad values. The
OptionalSettings helper class has been removed as it was just a file with one-line functions
that made the code harder to understand. The SSL configuration and service classes have
been moved from the security source directories to the main xpack source set. The SSLService
now handles more of the configuration of the SSLEngine it returns to prevent callers from
having to handle those aspects. The settings that get registered for SSL have been moved to
XPackSettings.
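As a rough illustration of the kind of SSLEngine setup the SSLService now centralizes (plain JSSE with JVM-default trust material and an illustrative protocol list, not the actual x-pack defaults):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class SslEngineSetup {
    public static void main(String[] args) throws Exception {
        // In x-pack the key/trust material comes from the resolved SSL configuration;
        // here the JVM defaults are used purely for illustration.
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null);

        SSLEngine engine = context.createSSLEngine("example.org", 9300);
        engine.setUseClientMode(true);
        // the service configures protocols/ciphers so callers don't have to
        engine.setEnabledProtocols(new String[] { "TLSv1.2" });
        System.out.println("enabled protocols: " + String.join(",", engine.getEnabledProtocols()));
    }
}
```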
Also included in this PR is an update to the docs around SSL. This includes a large simplification to
the documentation in that the certificate authority configuration section has been removed and the
process that is documented for generating certificates only includes the CLI tool that we bundle.
Closes elastic/elasticsearch#3104, closes elastic/elasticsearch#2971, closes elastic/elasticsearch#3164
Original commit: elastic/x-pack-elasticsearch@5bd9e5ef38
* master:
only lint .js and .jsx files
Designating list and count APIs as system APIs
Inverting logic
Use system API module function exported by Kibana plugin
monitoring ui/cluster row: IsClusterSupported helper
monitoring ui: Initial Test Automation Hooks
monitoring ui/license: toaster content update
monitoring ui: onClick handler syntax polish
monitoring ui/license: updating wording on unsupported cluster toaster
monitoring ui:fix default min shard replication to `N/A`
monitoring ui: empty state cleanups
monitoring ui/license: fix redirect issue with license expiry page + back button
monitoring ui:cluster listing treatment for clusters w/ invalid and unsupported license
monitoring ui: remove “health” check for cluster listing
monitoring ui: show clusters that have had license deleted
Designate certain API calls as system APIs + treat them as special
Original commit: elastic/x-pack-elasticsearch@95865f89ac
* master:
Changes tests to conform with new cluster health API, calling setWaitForNoRelocatingShards(true) instead of setWaitForRelocatingShards(0)
Original commit: elastic/x-pack-elasticsearch@bde6ad8c8a
* master:
Use releasable locks in NativeRolesStore
security: limit the size of the role store cache
security: remove explicit handshake wait in netty4 transport
test: smoke-test-plugins-ssl no longer relies on logging to start
kibana monitoring/uuid config key reference update
Docs: Updated release date for 2.4 in RNs.
Update README.md
Build: Add apijar task to assemble so it gets built with other artifacts
monitoring ui/license: cluster listing status cell treatment for basic/unsupported cluster
monitoring ui:fix cluster overview when cluster has no indices/shards
monitoring ui/license: logic cleanup per feedback
monitoring ui/license: primary cluster asterisk styling
monitoring ui/license: allow clicking into primary cluster if all are basic
monitoring ui: add isPrimary property to cluster listing response
Security: throw exception if we cannot extract indices from an indices request
Security: add tests for delete and update by query
Original commit: elastic/x-pack-elasticsearch@3cb41739ee
Previously the roles store cache was unbounded as it was just using a ConcurrentHashMap,
which could lead to excessive memory usage in cases where there are a large number of roles,
as we tried to eagerly load the roles into the cache if they were not present. The roles store now
loads roles on demand and caches them for a finite period of time.
Additionally, the background polling of roles has been removed to reduce complexity. A best effort
attempt is made to clear the roles cache upon modification and if necessary the cache can be
cleared manually.
See elastic/elasticsearch#1837
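A generic sketch of the load-on-demand pattern described above, here using Guava's cache; the actual implementation, size limit, and TTL may differ:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class RolesCacheSketch {

    // bounded size plus a finite lifetime, instead of an unbounded ConcurrentHashMap
    private final Cache<String, String> rolesCache = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(20, TimeUnit.MINUTES)
            .build();

    // load a role on demand the first time it is requested
    public String getRole(String name) throws ExecutionException {
        return rolesCache.get(name, () -> loadRoleFromIndex(name));
    }

    // best-effort invalidation when a role is modified
    public void onRoleModified(String name) {
        rolesCache.invalidate(name);
    }

    // manual clear, analogous to a clear-cache API
    public void clearAll() {
        rolesCache.invalidateAll();
    }

    private String loadRoleFromIndex(String name) {
        return "role-definition-for-" + name; // stand-in for a read from the security index
    }
}
```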
Original commit: elastic/x-pack-elasticsearch@450dd779c8
Netty 4's SslHandler does not require the application to wait for the handshake to
be completed before data is written. This change removes the explicit wait on each
handshake future.
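A minimal Netty 4 sketch of the point being made: the SslHandler is added to the pipeline without any explicit wait on handshakeFuture(), since Netty buffers writes until the handshake completes (the insecure trust manager is for illustration only):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.InsecureTrustManagerFactory;

public class SslChannelInitializer extends ChannelInitializer<Channel> {

    private final SslContext sslContext;

    public SslChannelInitializer() throws Exception {
        // illustration only: trusting everything is not something production code should do
        this.sslContext = SslContextBuilder.forClient()
                .trustManager(InsecureTrustManagerFactory.INSTANCE)
                .build();
    }

    @Override
    protected void initChannel(Channel ch) {
        // No sslHandler.handshakeFuture().await() here: Netty 4's SslHandler buffers
        // writes until the handshake completes, so an explicit wait is unnecessary.
        ch.pipeline().addFirst("ssl", sslContext.newHandler(ch.alloc()));
    }
}
```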
Original commit: elastic/x-pack-elasticsearch@c19bcebb83