The NotificationService (base class for SlackService, HipchatService ...) has both dynamic
cluster settings and SecureSettings and builds the clients (Account) that are used to communicate
with external services. This commit fixes an important bug in updating/reloading any
of these settings (both secure and dynamic cluster settings). In short, the bug is due to the fact that
both the secure settings and the dynamic node-scoped ones can be updated
independently, so when the clients are constructed some of the settings might not be visible.
This commit removes the use of AbstractComponent in xpack where it was
still being extended. It has been replaced with explicit logger
declarations.
See #34488
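A minimal sketch of the replacement pattern (the class name is illustrative, not one of the actual files touched): instead of inheriting a logger from AbstractComponent, each class declares its own.
```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// explicit logger declaration replacing the one inherited from AbstractComponent
class SlackServiceExample {
    private static final Logger logger = LogManager.getLogger(SlackServiceExample.class);
}
```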
The trigger engine always created a new schedule data structure when
the watcher indexing listener called add. However, the indexing
listener also called add when the watch status was updated. This means
that upon a watch status update the watch got retriggered, potentially
waiting a defined interval from the watch status update onwards instead
of waiting from the last run.
This commit only updates the schedule in the trigger engine if it
has actually changed; otherwise the existing schedule is not
touched. This has two results:
1. If a watch is updated by an execution, the existing interval will not
be touched (meaning the scheduled time will not move forward).
2. If a watch is updated by a user, but the schedule is not changed, it
will not be reset from the update (for example starting to count from 5
minutes again, if the interval was set to 5 minutes).
Furthermore some minor cleanups were applied, making variables final in
the ctor, preventing double creation of variables.
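A rough sketch of the idea, using simplified hypothetical types rather than the actual TickerScheduleTriggerEngine classes: the schedule entry is only replaced when the schedule itself differs.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TickerEngineSketch {
    static final class ActiveSchedule {
        final String cron;
        final long startTimeMillis;
        ActiveSchedule(String cron, long startTimeMillis) {
            this.cron = cron;
            this.startTimeMillis = startTimeMillis;
        }
    }

    private final Map<String, ActiveSchedule> schedules = new ConcurrentHashMap<>();

    // called by the indexing listener for both new watches and status updates
    void add(String watchId, String cron, long nowMillis) {
        ActiveSchedule current = schedules.get(watchId);
        if (current == null || current.cron.equals(cron) == false) {
            schedules.put(watchId, new ActiveSchedule(cron, nowMillis));
        }
        // unchanged schedule: keep the existing entry so the next trigger time is not reset
    }
}
```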
This commit switches from using java util's default timezone method to
using joda. The former can cause problems when the string representation
of the timezone is unknown to joda.
Closes #35518
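A small illustration of the distinction, assuming joda-time on the classpath: joda's own default-zone lookup is used rather than converting the java.util default.
```
import org.joda.time.DateTimeZone;

class DefaultTimeZoneSketch {
    static DateTimeZone defaultZone() {
        // converting java.util.TimeZone.getDefault() via DateTimeZone.forTimeZone
        // can throw for zone ids that joda does not know
        return DateTimeZone.getDefault();
    }
}
```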
The elasticsearch-croneval CLI tool uses local dates to display when
something gets triggered the next time. This is very confusing.
This commit ensures that both UTC and local timezone times are written
out.
The output looks like this and contains localized dates for each trigger
date as well as for `now`.
Now is [Tue, 28 Aug 2018 17:23:51 +0000] in UTC, local time is [ᏔᎵᏁ, 28 ᎦᎶ 2018 12:23:51 -0500]
Here are the next 10 times this cron expression will trigger:
1. Mon, 2 Jan 2040 11:00:00 +0000
ᏉᏅᎯ, 2 ᎤᏃ 2040 06:00:00 -0500
2. ...
This also removes an old outstanding TODO to use the jopt parsing to
cast the count to an integer instead of doing it ourselves.
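A sketch of how such output can be produced with joda-time (the pattern and method names are illustrative, not the actual CronEvalTool code):
```
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

class CronEvalOutputSketch {
    private static final DateTimeFormatter FORMATTER =
        DateTimeFormat.forPattern("EEE, d MMM yyyy HH:mm:ss Z");

    // prints the trigger time once in UTC and once in the JVM's local zone
    static void print(long triggerTimeMillis) {
        System.out.println(FORMATTER.print(new DateTime(triggerTimeMillis, DateTimeZone.UTC)));
        System.out.println(FORMATTER.print(new DateTime(triggerTimeMillis, DateTimeZone.getDefault())));
    }
}
```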
* Watcher: fix metric stats names
The current watcher stats metric names don't match the current
documentation. This commit fixes the behavior of the `queued_watches`
metric, deprecates the `pending_watches` metric and adds `current_watches`
to match the documented behavior. It also fixes the documentation, which
introduced an `executing_watches` metric that was never added.
Fixes #34865
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now unused `settings` instances and method arguments.
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
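A hypothetical before/after illustration of the mechanical change described above (ExampleService is not a real class, and the "after" variant assumes AbstractComponent's no-arg ctor introduced by this change):
```
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.Settings;

// before: the ctor forwarded Settings to AbstractComponent
class ExampleServiceBefore extends AbstractComponent {
    ExampleServiceBefore(Settings settings) {
        super(settings);
    }
}

// after: no super(settings) call, and the unused settings argument is dropped
class ExampleServiceAfter extends AbstractComponent {
    ExampleServiceAfter() {
    }
}
```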
Drops the `Settings` member from `AbstractComponent`, moving it from the
base class on to the classes that use it. For the most part this is a
mechanical change that doesn't drop `Settings` accesses. The one
exception to this is naming threads where it switches from an invocation
that passes `Settings` and extracts the node name to one that explicitly
passes the node name.
This change doesn't drop the `Settings` argument from
`AbstractComponent`'s ctor because this change is big enough as is.
We'll do that in a follow up change.
With this commit we cleanup hand-coded duplicate checks in XContent
parsing. They were necessary previously but since we reconfigured the
underlying parser in #22073 and #22225, these checks are obsolete and
were also ineffective unless an undocumented system property has been
set. As we also remove this escape hatch, we can remove the additional
checks as well.
Closes #22253
Relates #34588
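For context, a sketch of the parser setting that makes the hand-coded checks redundant (Jackson's strict duplicate detection, as referenced by #22073/#22225):
```
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

class StrictJsonFactorySketch {
    static JsonFactory build() {
        JsonFactory factory = new JsonFactory();
        // duplicate field names now fail at parse time with a JsonParseException
        factory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, true);
        return factory;
    }
}
```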
Right now, watches fail at runtime when invalid email addresses are
used.
All those fields can be checked at parse time if no mustache is used in
any email address template. In that case we can return immediate
feedback that invalid email addresses must not be specified when
trying to store a watch.
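A minimal sketch of parse-time validation using javax.mail (the helper name and mustache detection are assumptions, not the actual watcher code):
```
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;

class EmailAddressValidationSketch {
    // only literal addresses can be validated up front; templated ones are resolved at execution time
    static void validateIfLiteral(String address) throws AddressException {
        if (address.contains("{{") == false) {
            new InternetAddress(address).validate();
        }
    }
}
```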
In 54cb890 a testing-only setting was introduced that delayed the start up of watcher. With the changes in how watcher is started/stopped over time, this is not needed anymore.
Since all calls to `ESLoggerFactory` outside of the logging package were
deprecated, it seemed like it'd simplify things to migrate all of the
deprecated calls and declare `ESLoggerFactory` to be package private.
This does that.
Drops the last logging constructor that takes `Settings` because it is
no longer needed.
Watcher goes through a lot of effort to pass `Settings` to `Logger`
constructors and dropping `Settings` from all of those calls allowed us
to remove quite a bit of log-based ceremony from watcher.
This enables Elasticsearch to use the JVM-wide configured
PKCS#11 token as a keystore or a truststore for its TLS configuration.
The JVM is assumed to be configured accordingly with the appropriate
Security Provider implementation that supports PKCS#11 tokens.
For the PKCS#11 token to be used as a keystore or a truststore for an
SSLConfiguration, the .keystore.type or .truststore.type must be
explicitly set to pkcs11 in the configuration.
The fact that the PKCS#11 token configuration is JVM wide implies that
there is only one available keystore and truststore that can be used by TLS
configurations in Elasticsearch.
The PIN for the PKCS#11 token can be set as a truststore parameter in
Elasticsearch or as a JVM parameter (-Djavax.net.ssl.trustStorePassword).
The basic goal of enabling PKCS#11 token support is to allow PKCS#11-NSS in
FIPS mode to be used as a FIPS 140-2 enabled Security Provider.
This commit removes the use of ExecutableScript from watcher in favor of
custom script contexts for both watcher condition scripts and transform
scripts.
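An illustrative sketch of the custom script context pattern (the interface and context names are made up for the example, not the actual watcher classes):
```
import java.util.Map;
import org.elasticsearch.script.ScriptContext;

abstract class WatcherConditionScriptSketch {
    private final Map<String, Object> params;

    WatcherConditionScriptSketch(Map<String, Object> params) {
        this.params = params;
    }

    public Map<String, Object> getParams() {
        return params;
    }

    // the script body compiles to this method instead of a generic ExecutableScript
    public abstract boolean execute();

    public interface Factory {
        WatcherConditionScriptSketch newInstance(Map<String, Object> params);
    }

    public static final ScriptContext<Factory> CONTEXT =
        new ScriptContext<>("example_watcher_condition", Factory.class);
}
```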
`Settings` is no longer required to get a `Logger` and we went to quite
a bit of effort to pass it to the `Logger` getters. This removes the
`Settings` from all of the logger fetches in security and x-pack:core.
This commit adds the ability to plug in compilation of custom contexts
in mock script engine. This is needed for testing plugins which add
custom contexts like watcher.
Watcher is using a lot of so called TextTemplate fields in a watch
definition, which can use mustache to insert the watch id for example.
For the user it is non-obvious which field is just a string field and
which field is a text template.
This also means that for every such field we currently do a script
compilation, even if the field does not contain any mustache syntax.
This leads to increased script cache churn, because those
compiled scripts (that only contain a string) will evict other scripts.
On top of that, an unneeded compilation has
happened, instead of the string being returned immediately.
The usages of mustache templating are in all of the actions (most of the time far
more than one compilation) as well as most of the inputs.
Especially when running a lot of watches in parallel, this will reduce
execution times and help reuse of real scripts.
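A simplified sketch of the optimization (the detection heuristic and method shape are assumptions): compilation is skipped when a template contains no mustache syntax.
```
import java.util.function.Function;

class TextTemplateRenderSketch {
    // compileAndRun stands in for the real script service call
    static String render(String template, Function<String, String> compileAndRun) {
        if (template.contains("{{") == false) {
            return template; // plain string: no compilation, no cache entry
        }
        return compileAndRun.apply(template);
    }
}
```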
This change cleans up "unused variable" warnings. There are several cases where we
most likely want to suppress the warnings (especially in the client documentation test
where the snippets contain many unused variables). In a lot of cases the unused
variables can just be deleted though.
This commit reverts most of #33157 as it introduces another race
condition and breaks a common case of watcher, when the first watch is
added to the system and the index does not exist yet.
This means that the index will be created, which triggers a reload, but
during this time the put watch operation that triggered this is not yet
indexed, so both processes finish at roughly the same time and
should not overwrite each other but act complementarily.
This commit reverts the logic of cleaning out the ticker engine watches
on start up, as this is already done when the execution is paused -
and execution also gets paused in the cluster state listener, where we can be
sure that the watches index has not yet been created.
This also adds a new test that starts a one-node cluster and emulates
the case of a non-existing watches index and a watch being added, which
should result in proper execution.
Closes #33320
Changes the default of the `node.name` setting to the hostname of the
machine on which Elasticsearch is running. Previously it was the first 8
characters of the node id. This had the advantage of producing a unique
name even when the node name isn't configured but the disadvantage of
being unrecognizable and not being available until fairly late in the
startup process. Of particular interest is that it isn't available until
after logging is configured. This forces us to use a volatile read
whenever we add the node name to the log.
Using the hostname is available immediately on startup and is generally
recognizable but has the disadvantage of not being unique when run on
machines that don't set their hostname or when multiple elasticsearch
processes are run on the same host. I believe that, taken together, it
is better to default to the hostname.
1. Running multiple copies of Elasticsearch on the same node is a fairly
advanced feature. We do it all the time as part of the elasticsearch build
for testing, but we make sure to set the node name then.
2. That the node.name defaults to some flavor of "localhost" on an
unconfigured box feels like it isn't going to come up too much in
production. I expect most production deployments to at least set the
hostname.
As a bonus, production deployments need no longer set the node name in
most cases. At least in my experience most folks set it to the hostname
anyway.
Currently a watch execution results in one bulk request, when the
triggered watches that need to be executed are written into the
triggered watches index. However the update of the watch status, the
creation of the watch history entry as well as the deletion of the
triggered watch entry are all single document operations.
This can have quite a negative impact once you are executing a lot of
watches, as each execution results in 4 document writes, three of them
being single document actions.
This commit switches to a bulk processor instead of single document
actions for writing watch history entries and deleting triggered watch
entries. However the defaults are to run synchronously as before, because
the number of concurrent requests is set to 0. This also fixes a bug
where the deletion of the triggered watch entry was done asynchronously.
However if you have a high number of watches being executed, you can
configure watcher to delete the triggered watch entries as well as
write the watch history entries via bulk requests.
The triggered watch deletions should still happen in a timely manner,
whereas the history entries might actually be bound by size, as one
entry can easily be 20kb.
The following settings have been added:
- xpack.watcher.bulk.actions (default 1)
- xpack.watcher.bulk.concurrent_requests (default 0)
- xpack.watcher.bulk.flush_interval (default 1s)
- xpack.watcher.bulk.size (default 1mb)
The drawback of this is of course that on a node outage you might end
up with watch history entries not being written or watches needing to be
executed again because they have not been deleted from the triggered
watches index. The window for these two cases increases when the bulk processor is configured to wait until certain thresholds are reached.
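A sketch of a BulkProcessor configured with the defaults listed above (listener wiring omitted; this is not the actual watcher code):
```
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

class WatcherBulkProcessorSketch {
    static BulkProcessor build(Client client, BulkProcessor.Listener listener) {
        return BulkProcessor.builder(client, listener)
            .setBulkActions(1)                                  // xpack.watcher.bulk.actions
            .setConcurrentRequests(0)                           // xpack.watcher.bulk.concurrent_requests (0 = synchronous)
            .setFlushInterval(TimeValue.timeValueSeconds(1))    // xpack.watcher.bulk.flush_interval
            .setBulkSize(new ByteSizeValue(1, ByteSizeUnit.MB)) // xpack.watcher.bulk.size
            .build();
    }
}
```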
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name logging is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
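A sketch of the lazy lookup (the class name is made up; the real change lives in the logging pattern converter):
```
import org.apache.lucene.util.SetOnce;

class NodeNameLookupSketch {
    private static final SetOnce<String> NODE_NAME = new SetOnce<>();

    static void setNodeName(String name) {
        NODE_NAME.set(name); // may only ever be set once
    }

    // called on every print; until the name is known a placeholder keeps the pattern intact
    static String nodeNameOrUnknown() {
        String name = NODE_NAME.get();
        return name == null ? "unknown" : name;
    }
}
```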
Watcher validates `action.auto_create_index` upon startup. If a user
specifies a pattern that does not contain watcher indices, it raises an
error message to include a list of three indices. However, the indices
are separated by a comma and a space which is not considered in parsing.
With this commit we change the error message string so it does not
contain the additional space thus making it more straightforward to copy
it to the configuration file.
Closes #33369
Relates #33497
This change collapses all metrics aggregations classes into a single package `org.elasticsearch.aggregations.metrics`.
It also restricts the visibility of some classes (aggregators and factories) that should not be used outside of the package.
Relates #22868
Drops an unused logging constructor, simplifies a rarely used one, and
removes `Settings` from a third. There is now only a single logging ctor
that takes `Settings` and we'll remove that one in a follow up change.
This commit ensures that when `TriggerService.start()` is called,
the trigger engine implementations remove the current watches
instead of adding to the existing ones in
`TickerScheduleTriggerEngine.start()`.
Two additional minor fixes, where the result remains the same but less code gets executed.
1. If the node is not a data node, we forgot to set the status to
STARTING when watcher is being started. This should not be a big issue,
because a non-data node does not spend a lot of time loading as there
are no watches which need loading.
2. If a new cluster state came in during a reload, we had two checks in
place to abort loading the current one. The first one before we load all
the watches of the local node and the second before watcher is starting
with those new watches. It turned out that the first check did not
return, which meant we always tried to load all the watches and then
failed on the second check. This has been fixed here.
When a node that carries a watcher shard dies or a shard is relocated to
another node, then watcher needs to trigger a reload not only on the node
where the shard relocation happened, but also on other nodes that hold
copies of this shard, as different watches may need to be loaded.
This commit takes the change of remote nodes into account by not only
storing the local shard allocation ids in the WatcherLifeCycleService,
but storing a list of ShardRoutings based on the local active shards.
This also fixes some tests, which had a wrong assumption. Using
`TestShardRouting.newShardRouting` in our tests for cluster state
creation led to the issue of always creating new allocation ids which
implicitly led to a reload.
This commit removes the unused User class from the protocol project.
This class was originally moved into protocol in preparation for moving
more request and response classes, but given the change in direction
for the HLRC this is no longer needed. Additionally, this change also
changes the package name for the User object in x-pack/plugin/core to
its original name.
The code introduced in 3fa36807f8 to fix
an issue with crons always returning -1 was not very readable. This
implementation uses streams to improve readability.
The new implementation is functionally equivalent to the old, Ant-based one.
It parses task standard error to get the missing classes and violations in the same way.
I considered re-using ForbiddenApisCliTask, but Gradle makes it hard to build inheritance with tasks that have task actions, since the order of the task actions can't be controlled.
This inheritance isn't fully desired either, as the third party audit task is much more opinionated and we don't want to expose some of the configuration.
We could probably extract a common base class without any task actions, but that's probably more trouble than it's worth.
Closes #31715
CronEvalTool prints an error only for cron expressions that result in
no upcoming time events.
If a cron expression results in fewer than the specified count
(default 10) of time events, all the upcoming times are now printed
without displaying an error message.
Closes #32735
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
When a list/array of cron expressions is provided, and one of those expressions
is already expired, the expired one will be considered as an option
instead of the next valid one.
This commit also reduces the visibility of the CronnableSchedule and
refactors a comparator to look like java 8.
This removes custom Response classes that extend `AcknowledgedResponse` and do nothing, these classes are not needed and we can directly use the non-abstract super-class instead.
While this appears to be a large PR, no code has actually changed, only class names have been changed and entire classes removed.
HipChatMessage#render is no longer used; instead, HipChatAccount#render
is used in the ExecutableHipChatAction. Only a
test that validated the HttpProxy still used this render method. This
commit cleans it up.
The auth.basic package was an example of a single implementation
interface that leaked into many different classes. In order to clean
this up, the HttpAuth interface, factories, and Registries all were
removed and the single implementation, BasicAuth, was substituted in all
cases. This removes some dependencies between Auth and the Templates,
which can now use static methods on BasicAuth. BasicAuth was also moved
into the http package and all of the other classes were removed.
The PagerDuty v1 API is EOL and will stop accepting new accounts
shortly. This commit swaps out the watcher use of the v1 API with the
new v2 API. It does not change anything about the existing watcher
API.
Closes #32243
The User class has been moved to the protocol project for upcoming work
to add more security APIs to the high level rest client. As part of
this change, the toString method no longer uses a custom output method
from MetadataUtils and instead just relies on Java's toString
implementation.
The error message mentioned in #30094 does not point to a cause in the
test itself, as there are still in-flight requests according to the
circuit breaker.
I ran this test class 100k times on bare metal and could not reproduce
it. I will reenable the test for now.
Closes #30094
Today we allow plugins to add index store implementations yet we are not
doing this in our new way of managing plugins as pull versus push. That
is, today we still allow plugins to push index store providers via an on
index module call where they can turn around and add an index
store. Aside from being inconsistent with how we manage plugins today
where we would look to pull such implementations from plugins at node
creation time, it also means that we do not know at a top-level (for
example, in the indices service) which index stores are available. This
commit addresses this by adding a dedicated plugin type for index store
plugins, removing the index module hook for adding index stores, and by
aggregating these into the top-level of the indices service.
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
Relates #29827
This implementation behaves like the current transport client, in that you basically cannot pass a Watch POJO representation as an argument to the put watch API, but only a bytes reference. You can use the `WatchSourceBuilder` from the `org.elasticsearch.plugin:x-pack-core` dependency to build watches.
This commit also changes the license type to trial, so that watcher is available in high level rest client tests.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc., we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider; since
we control the testing infrastructure this should be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS Ciphers and the
SAMLAuthenticator class, as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
reenabled with precomputed and signed SAML messages at a later stage.
It will be covered in a subsequent PR.
There is currently no way to see what user executed a watch. This commit
adds the decrypted username to each execution in the watch history, in a
new field "user".
Closes #31772
This commit allows for rebuilding watcher secure secrets via the
reload_secure_settings API call. The commit also renames a method in the
Notification Service to make it a bit more readable.
This commit adds the _xpack/usage api to the high level rest client.
Currently in the transport api, the usage data is exposed in a limited
fashion, at most giving one level of helper methods for the inner keys
of data, but then exposing those subobjects as maps of objects. Rather
than making parsers for every set of usage data from each feature, this
PR exposes the entire set of usage data as a map of maps.
Other watcher actions already account for secure settings in their
sensitive settings, whereas the email sending action did not. This adds
the ability to optionally set a secure_password for email accounts.
Previously, the ensureWatchExists method was overridable. This commit makes
it final so that it cannot be overridden, and cleans up some redundant
code in the process.
There was still a case with a null text that allowed for 0 attachments
to be created. This commit ensures that more than zero attachments are created
if the text is null. Otherwise, it uses the same logic to create 0 to 3
random attachments.
Closes #31948
Historically we have loaded SSL objects (such as SSLContext,
SSLIOSessionStrategy) by passing in the SSL settings, constructing a
new SSL configuration from those settings and then looking for a
cached object that matches those settings.
The primary issue with this approach is that it requires a fully
configured Settings object to be available any time the SSL context
needs to be loaded. If the Settings include SecureSettings (such as
passwords for keys or keystores) then this is not true, and the cached
SSL object cannot be loaded at runtime.
This commit introduces an alternative approach of naming every cached
ssl configuration, so that it is possible to load the SSL context for
a named configuration (such as "xpack.http.ssl"). This means that the
calling code does not need to have ongoing access to the secure
settings that were used to load the configuration.
This change also allows monitoring exporters to use SSL passwords
from secure settings, however an exporter that uses a secure SSL setting
(e.g. truststore.secure_password) may not have its SSL settings updated
dynamically (this is prevented by a settings validator).
Exporters without secure settings can continue to be defined and updated
dynamically.
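An illustrative-only sketch of the idea (this is not the actual SSLService API): cached contexts are registered and looked up by configuration name, so callers never need the secure settings themselves.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.net.ssl.SSLContext;

class NamedSslContextCacheSketch {
    private final Map<String, SSLContext> contextsByName = new ConcurrentHashMap<>();

    // called once at startup, while the secure settings are still readable
    void register(String configurationName, SSLContext context) {
        contextsByName.put(configurationName, context);
    }

    // called at runtime with a name such as "xpack.http.ssl"
    SSLContext forConfiguration(String configurationName) {
        SSLContext context = contextsByName.get(configurationName);
        if (context == null) {
            throw new IllegalArgumentException("unknown ssl configuration [" + configurationName + "]");
        }
        return context;
    }
}
```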
A new commit was merged that does not allow a null attachment &&
text. This matches the Slack API, which does not allow this, but
our unit tests did. This commit fixes the broken unit test.
Closes #31948
Slack accepts an empty text or attachments, but not both. This commit
ensures that both are not empty when creating a watch.
Closes #30071
Replacing old pull request: #31288
Removes support for storing scripts without the usual json around the
script, so you can no longer do:
```
POST _scripts/<templatename>
{
  "query": {
    "match": {
      "title": "{{query_string}}"
    }
  }
}
```
and must instead do:
```
POST _scripts/<templatename>
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": {
          "title": "{{query_string}}"
        }
      }
    }
  }
}
```
This improves error reporting when you attempt to store a script but don't
quite get the syntax right. Before, there was a good chance that we'd
think of it as a "raw" template and just store it. Now we won't do that.
Nice.
The ack watch action has a check for currently executed watches, to make
sure that currently running watches cannot be acknowledged. This check
only looked at the coordinating node for watches being executed, but should
have checked the whole cluster using a WatcherStatsRequest, which is
what this commit switches to.
Because SecureSetting extends Setting, you can easily accidentally
use `SecureSetting.simpleString()` to read a secure setting instead of
`SecureSetting.secureString()`. This commit fixes this behaviour in
some watcher notification services.
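A sketch of the distinction (the setting key is illustrative): secure values must be declared with SecureSetting so they are read from the keystore.
```
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;

class NotificationSettingSketch {
    // reads from the Elasticsearch keystore
    static final Setting<SecureString> SECURE_URL =
        SecureSetting.secureString("xpack.notification.example.secure_url", null);

    // a plain string setting declared via simpleString(...) would read from
    // elasticsearch.yml instead, which is the accidental misuse described above
}
```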
Previously the call to register a listener for settings updates was in
each individual service, rather than in the notification service
itself. This change ensures that each child of the notification service
gets registered with the settings update consumer.
The xcontent parameters were not passed to the xcontent serialization
of the chain input for each chain. This could lead to wrongly stored
watches, which did not contain passwords but only their redacted counterparts, when an input inside of a chain input contained a password.
The removed code snippet was never executed, as the version was never set
after parsing the watch and was thus always -1. With the changes done in
c9d77d20fd this logic would not have
worked correctly anyway.
If no version is specified when putting a watch, the index API should be
used instead of the update API, so that the whole watch gets overwritten
instead of being merged with the existing one.
Merging only happens when a version is specified, so that credentials can be omitted, which is important for the watcher UI.
TransportAction currently contains 2 doExecute methods, one which takes
the task, and one that does not. The latter is what some subclasses
implement, while the first one just calls the latter, dropping the given
task. This commit combines these methods, in favor of just always
assuming a task is present.
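A sketch of the consolidated shape (simplified generics, not the full class): subclasses now always receive the Task.
```
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.tasks.Task;

abstract class TransportActionSketch<Request, Response> {
    // the single remaining doExecute variant; the task-less overload is gone
    protected abstract void doExecute(Task task, Request request, ActionListener<Response> listener);

    public final void execute(Task task, Request request, ActionListener<Response> listener) {
        doExecute(task, request, listener);
    }
}
```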
Most transport actions don't need the node ThreadPool. This commit
removes the ThreadPool as a super constructor parameter for
TransportAction. The actions that do need the thread pool then have a
member added to keep it from their own constructor.
Most transport actions don't need to resolve index names. This commit
removes the index name resolver as a super constructor parameter for
TransportAction. The actions that do need the resolver then have a
member added to keep the resolver from their own constructor.
Trying to post a new watch without any body currently results in a
NullPointerException. This change fixes that by validating that
POST and PUT requests always have a body.
Closes #30057
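A minimal sketch of the validation (the placement and message are assumptions, not the exact watcher handler code):
```
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.rest.RestRequest;

class PutWatchBodyCheckSketch {
    // called before the body is parsed, so a missing body fails with a clear error
    static void ensureBodyPresent(RestRequest request) {
        if (request.hasContent() == false) {
            throw new ElasticsearchParseException("request body is required");
        }
    }
}
```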
This commit upgrades us to Netty 4.1.25. This upgrade is more
challenging than past upgrades, all because of a new object cleaner
thread that they have added. This thread requires an additional security
permission (set context class loader, needed to avoid leaks in certain
scenarios). Additionally, there is not a clean way to shutdown this
thread which means that the thread can fail thread leak control during
tests. As such, we have to filter this thread from thread leak control.
This commit adjusts the indentation in the CLI scripts to give a clear
visual indication that the line being indented is a continuation of the
previous line.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell into a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the Windows side of that effort and the Bash side was
in a previous commit.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell into a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the non-Windows side of that effort and the Windows
side will be in a follow-up.
Move the `finger_print`, `pattern` and `standard_html_strip` analyzers
to the analysis-common module (both AnalysisProvider and PreBuiltAnalyzerProvider).
Changed PreBuiltAnalyzerProviderFactory to extend PreConfiguredAnalysisComponent and
made sure that predefined analyzers are always instantiated with the current
ES version, and that an instance requested for a different version is delegated to PreBuiltCache.
This is similar to the behaviour that exists today in AnalysisRegistry.PreBuiltAnalysis and
PreBuiltAnalyzerProviderFactory. (#31095)
Relates to #23658
ObjectParser should throw XContentParseExceptions, not IAE. A dedicated parsing
exception can include the place where the error occurred.
Closes #30605
If you invoke elasticsearch-plugin (or any other CLI script on Windows)
with a path that has a percent-encoded space (or any other
percent-encoded character) because the CLI scripts now shell into a
common shell script (elasticsearch-cli) the percent-encoded space ends
up being interpreted as a parameter. For example passing install --batch
file:/c:/encoded%20space/analysis-icu-7.0.0.zip to elasticsearch-plugin
leads to the %20 being interpreted as %2 followed by a zero. Here, the
%2 is interpreted as the second parameter (--batch) and the
InstallPluginCommand class ends up seeing
file:/c/encoded--batch0space/analysis-icu-7.0.0.zip as the path which
will not exist. This commit addresses this by escaping the %* that is
used to pass the parameters to the common CLI script so that the common
script sees the correct parameters without the %2 being substituted.
The .watcher-history-* template is currently using a plugin-custom index setting xpack.watcher.template.version,
which prevents this template from being installed in a mixed OSS / X-Pack cluster, ultimately
leading to the situation where an X-Pack node is constantly spamming an OSS master with (failed)
template updates. Other X-Pack templates (e.g. security-index-template or security_audit_log)
achieve the same versioning functionality by using a custom _meta field in the mapping instead.
This commit switches the .watcher-history-* template to use the _meta field instead.
Enables a rolling restart from the OSS distribution to the x-pack based distribution by preventing
x-pack code from installing custom metadata into the cluster state until all nodes are capable of
deserializing this metadata.
This commit reduces the Windows CLI scripts to one-liners by moving all
of the redundant logic to an elasticsearch-cli script. This commit is
only the Windows side, a previous commit covered the Linux side.
This commit reduces the Linux CLI scripts to one-liners by moving all of
the redundant logic to an elasticsearch-cli script. This commit is only
the Linux side, a follow-up will do this for Windows too.
Watcher tests now always fail hard when watches that a test
tried to trigger using the trigger() method
could not be triggered because they were not found on any of the
nodes in the cluster.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple tests which were missing task dependencies, which
failed once the gradle execution order changed.
* Refactors ClientHelper to combine header logic
This change removes all the `*ClientHelper` classes which were
repeating logic between plugins and instead adds
`ClientHelper.executeWithHeaders()` and
`ClientHelper.executeWithHeadersAsync()` methods to centralise the
logic for executing requests with stored security headers.
* Removes Watcher headers constant
When the encryption of sensitive data is enabled, test that a
scheduled watch is executed as expected and produces the correct value
from a secret in the basic auth header.
Since adding back the per-watch statistics, we do not need to access
every trigger engine implementation to get the current total job count.
This commit removes the unused methods to do so.
The HTTPClient used in watcher is based on the apache http client. The
current client is using a lot of defaults - which are not always
optimal. Two of those defaults are the maximum number of total
connections and the maximum number of connections to a single route.
If one of those limits is reached, the HTTPClient waits for a connection
to be finished thus acting in a blocking fashion. In order to prevent
this when many requests are being executed, we increase the limit of
total connections as well as the connections per route (a route is
basically an endpoint, which also contains proxy information; it does not
contain a URL, just hosts).
On top of that an additional option has been set to evict
long running connections, which can potentially be reused after some
time. As this requires an additional background thread, this required
some changes to ensure that the httpclient is closed properly. Also the
timeout for this can be configured.
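A sketch of the relevant Apache HttpClient builder calls (the concrete limits and idle timeout are illustrative, not watcher's actual values):
```
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

class WatcherHttpClientSketch {
    static CloseableHttpClient build() {
        return HttpClientBuilder.create()
            .setMaxConnTotal(100)                          // raise the total connection limit
            .setMaxConnPerRoute(20)                        // raise the per-route limit
            .evictIdleConnections(60, TimeUnit.SECONDS)    // starts the background eviction thread
            .build();                                      // the client must be closed to stop that thread
    }
}
```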
Starting watcher should wait for the watcher to be started before
marking the status as started, which is now done via a callback.
Also, reloading watcher could set the execution service to paused. This could
lead to watches not being executed when run in tests. This fix does not
change the paused flag in the execution service, just clears out the
current queue and executions.
Closes #30381
When the watcher service pauses execution due to a cluster state update,
the trigger service and its engines also need to pause properly instead
of keeping going. This is also important when the .watches index is
deleted, so that watches don't stay in a triggered mode.
The current implementation starts/stops watcher using an executor. This
can result in out of order operations.
This commit reduces those executor calls to an absolute minimum in order
to be able to do state changes within the cluster state listener method,
which runs in sequence.
When a state change occurs that forces the watcher service to pause
(like no watcher index, no master node, no local shards), the service is
now in a paused state.
Pausing is a super lightweight operation, which marks the
ExecutionService as paused and waits for the currently executing watches
to finish in the background via an executor. The same applies for
stopping, the potentially long running operation is outsourced in to an
executor, as waiting for executed watches is decoupled from the current
state.
The only other long running operation is starting, where watches need to
be loaded. This is also done via an executor, but has an additional
protection by checking the cluster state version it was started with. If
another cluster state version was trying to load the watches, then this
loading will not take effect.
This PR also cleans up some unused states, like a simple boolean in
the HistoryStore/TriggeredWatchStore marking it as started or stopped,
as this can now be caught in the execution service.
Another advantage of this approach is the fact that now only triggered
watches are not getting executed, while watches that are run via the
Execute Watch API will still be executed regardless of whether watcher is
stopped or not.
Lastly the TickerScheduleTriggerEngine thread now only starts on data nodes.
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
Closes #12792
The variadic constructor was only used in a few places and the
RepositoriesMetaData class is backed by a List anyway, so just using a
List will make it simpler to instantiate it.
Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3) we may fail to properly replicate operations when a mapping update on master fails. If a bulk
operation needs a mapping update halfway through, it will send a request to the master before continuing
to index the operations. If that request times out or isn't acked (i.e., even one node in the cluster
didn't process it within 30s), we end up throwing the exception and aborting the entire bulk. This is
a problem because all operations that were processed so far are not replicated any more to the
replicas. Although these operations were never "acked" to the user (we threw an error), it causes the
local checkpoint on the replicas to lag (on 6.x) and the primary and replica to diverge.
This PR does a couple of things:
1) Most importantly, treat *any* mapping update failure as a document level failure, meaning only
the relevant indexing operation will fail.
2) Removes the mapping update callbacks from `IndexShard.applyIndexOperationOnPrimary` and
similar methods for simpler execution. We don't use exceptions any more when a mapping
update was successful.
I think we need to do more work here (the fact that a single slow node can prevent those mappings
updates from being acked and thus fail operations is bad), but I want to keep this as small as I can
(it is already too big).
We had a number of awaitsFix links that weren't updated after the xpack
merge.
Where possible I changed the links to the new locations, but in some
circumstances the original ticket was closed (suggesting the awaitsfix
should be removed) or the status was otherwise unclear.
Email message IDs are supposed to be unique. In order to guarantee this,
we need to take the action id of a watch action into account as well,
not just the watch id from the watch execution context. This prevents
two actions from the same watch execution from ending up with the same
message id.
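A sketch of the composition (the format is assumed, not the exact watcher implementation): the action id is folded into the message id so two actions of the same execution can never collide.
```
class EmailMessageIdSketch {
    static String messageId(String watchId, String actionId, long executionTimeMillis) {
        // previously only the watch id and execution time were used,
        // so two actions of the same execution produced identical ids
        return watchId + "_" + actionId + "_" + executionTimeMillis;
    }
}
```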
This commit adds the distribution type to the startup scripts so that we
can discern from log output and the main response the type of the
distribution (deb/rpm/tar/zip).
This commit adds the distribution flavor (default versus oss) to the
build process which is passed through the startup scripts to
Elasticsearch. This change will be used to customize the message on
attempting to install/remove x-pack based on the distribution flavor.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.