* Fix Translog.Delete serialization for sequence numbers
Translog.Delete used `.writeVLong` instead of `.writeLong` for the sequence
number and primary term (and their respective "read" variants). This could lead
to issues where a 5.x node sent a translog operation with a negative sequence
number (-2 for unassigned seq no) that tripped an assertion on serializing a
negative number, causing ES to exit (a sketch of the fix follows this list).
Adds a unit test for serialization and a mixed-cluster REST test, since that was
how this was originally caught.
* Use more realistic values for random seqNum and primary term
* Add comment with TODO for removal in 7.0
* Change comment into an assert
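Below is a minimal sketch of the fix referenced in the first item, not the actual diff: the sequence number and primary term can be negative (e.g. -2 for an unassigned seq no), so the variable-length `writeVLong`/`readVLong` calls, which reject negative values, are swapped for `writeLong`/`readLong`.

```java
// Sketch only: illustrates the writeVLong -> writeLong swap on the write side.
@Override
public void writeTo(StreamOutput out) throws IOException {
    // ... other Translog.Delete fields ...
    out.writeLong(seqNo);        // was: out.writeVLong(seqNo), which asserts on negative values
    out.writeLong(primaryTerm);  // was: out.writeVLong(primaryTerm)
}
```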
Instead of `search.remote.seeds.${clustername}` we now specify the seeds as:
`search.remote.${clustername}.seeds`, which is a real list setting, compared to the unvalidated
group setting used before.
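A minimal sketch of the new key shape, assuming the `Settings` builder API of this codebase (`putArray` for list values); the cluster alias and seed addresses are just examples:

```java
// Hypothetical cluster alias "cluster_one" with two example seed addresses.
Settings settings = Settings.builder()
        .putArray("search.remote.cluster_one.seeds", "127.0.0.1:9300", "127.0.0.2:9300")
        .build();
```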
`ToXContentObject` extends `ToXContent` without adding new methods to it, while allowing to mark classes that output complete xcontent objects to distinguish them from classes that require starting and ending an anonymous object externally.
Ideally ToXContent would be renamed to ToXContentFragment, but that would be a huge change in our codebase, hence we simply document the fact that toXContent outputs fragments with no guarantees that the output is valid per se without an external ancestor.
Relates to #16347
This commit adds full support for proxy nodes on the search layer.
This allows connecting to only a small set of nodes on a remote cluster
to execute the search. These nodes will proxy the request to the correct node in the
cluster, while the coordinating node doesn't need to be connected to the target node.
This change is the first step towards providing the ability to store
sensitive settings in elasticsearch. It adds the
`elasticsearch-keystore` tool, which allows managing a java keystore.
The keystore is loaded upon node startup in Elasticsearch, and used by
the Setting infrastructure when a setting is configured as secure.
There are a lot of caveats to this PR. The most important is it only
provides the tool and setting infrastructure for secure strings. It does
not yet provide for keystore passwords, keypairs, certificates, or even
convert any existing string settings to secure string settings. Those
will all come in follow up PRs. But this PR was already too big, so this
at least gets a basic version of the infrastructure in.
There are two main things to look at. The first is the `SecureSetting` class,
which extends `Setting` but removes the assumption that the raw value of the
setting is a string. SecureSetting provides, for now, a single
helper, `stringSetting()` to create a SecureSetting which will return a
SecureString (which is like String, but is closeable, so that the
underlying character array can be cleared). The second is the
`KeyStoreWrapper` class, which wraps the java `KeyStore` to provide a
simpler api (we do not need the entire keystore api) and also extend
the serialized format to add metadata needed for loading the keystore
with no assumptions about keystore type (so that we can change this in
the future) as well as whether the keystore has a password (so that we
can know whether prompting is necessary when we add support for keystore
passwords).
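A hedged sketch of how a secure string setting might be defined and read, based purely on the description above; the factory-method name (`stringSetting`) comes from the text, but its exact signature and the setting key are assumptions:

```java
// Sketch: define a secure setting and read it from the keystore-backed Settings.
Setting<SecureString> exampleSecret = SecureSetting.stringSetting("example.secure.value"); // key is illustrative
try (SecureString value = exampleSecret.get(settings)) {
    consume(value); // hypothetical consumer; closing the SecureString clears the underlying char[]
}
```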
* Remove a checked exception, replacing it with `ParsingException`.
* Remove all Parser classes for the yaml sections, replacing them with static methods.
* Remove `ClientYamlTestFragmentParser`. Isn't used any more.
* Remove `ClientYamlTestSuiteParseContext`, replacing it with some static utility methods.
I did not rewrite the parsers using `ObjectParser` because I don't think it is worth it right now.
Today if an older version of a plugin exists, we fail to notify the user
with a helpful error message. This happens because during plugin
verification, we attempt to read the plugin descriptors for all existing
plugins. When an older version of a plugin is sitting on disk, we will
attempt to read this old plugin descriptor and fail due to a version
mismatch. This leads to an unhelpful error message. Instead, we should
check for existence of the plugin as part of the verification phase, but
before attempting to read plugin descriptors for existing plugins. This
enables us to provide a helpful error message to the user.
Relates #22305
* Internal: Refactor SettingCommand into EnvironmentAwareCommand
This change renames and changes the behavior of SettingCommand to have
its primary method take in a fully initialized Environment for
elasticsearch instead of just a map of settings. All of the subclasses
of SettingCommand already did this at some point, so this just removes
duplication.
We are currently checking that no deprecation warnings are emitted in our query tests. That can be moved to ESTestCase (disabled in ESIntegTestCase) as it allows us to easily catch where our tests use deprecated features and assert on the expected warnings.
Sequence BWC logic consists of two elements:
1) Wire level BWC using stream versions.
2) A change to the global checkpoint maintenance semantics.
For the sequence number infra to work with mixed version clusters, we have to consider situations where the primary is on an old node and replicas are on new ones (i.e., the replicas will receive operations without seq#) and also the reverse (i.e., the primary sends operations to a replica but the replica can't process the seq# and respond with a local checkpoint). A new primary with an old replica is rare because we do not allow a replica to recover from a new primary. However, it can occur if the old primary failed and a new replica was promoted, or during primary relocation where the source primary is treated as a replica until the master starts the target.
1) Old Primary & New Replica - this case is easy as is taken care of by the wire level BWC. All incoming requests will have their seq# set to `UNASSIGNED_SEQ_NO`, which doesn't confuse the local checkpoint logic (keeping it at `NO_OPS_PERFORMED`)
2) New Primary & Old replica - this one is trickier as the global checkpoint service currently takes all in sync replicas into consideration for the global checkpoint calculation. In order to deal with old replicas, we change the semantics to say all *new node* in sync replicas. That means the replicas on old nodes don't count for the global checkpointing. In this state the seq# infra is not fully operational (you can't search on it, because copies may miss it) but it is maintained on shards that can support it. The old replicas will have to go through a file based recovery at some point and will get the seq# information at that point. There is still an edge case where a new primary fails and an old replica takes over. I'lll discuss this one with @ywelsch as I prefer to avoid it completely.
This PR also re-enables the BWC tests which were disabled. As such it had to fix any BWC issue that had crept in. Most notably an issue with the removal of the `timestamp` field in #21670.
The commit also includes a fix for the default value of the seq number field in replicated write requests (it was 0 but should be -2), which surfaced some other minor bugs that are fixed as well.
Last - I added some debugging tools like more sane node names and forcing replication requests to implement a `toString`.
Problem: We introduced the ability to shorten the rank eval request by using a
template in #20231. When playing with the API it turned out that there might be
use cases where - e.g. due to various heuristics - folks might want to translate
the original user query into more than just one type of Elasticsearch query.
Solution: Give each template an id that can later be referenced in the
actual requests.
Closes #21257
Today in the codebase we refer to seccomp everywhere instead of system
call filter even if we are not specifically referring to Linux. This
commit is a purely mechanical change to refer to system call filter
where appropriate instead of the general seccomp, and only leaves
seccomp in place when actually referring to the Linux implementation.
Relates #22243
Inline scripts defined in Ingest Pipelines are now compiled at creation time to preemptively catch errors on initialization of the pipeline.
Fixes #21842.
Moves the last of the "easy" parser construction into
`RestRequest`, this time with a new method
`RestRequest#contentParser`. The rest of the production
code that builds `XContentParser` isn't "easy" because it is
exposed in the Transport Client API (a Builder) object.
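A hedged usage sketch of the new helper; the parsed target type and its `fromXContent` method are hypothetical:

```java
// Sketch: obtain a parser for the request body directly from the RestRequest.
try (XContentParser parser = restRequest.contentParser()) {
    MyRequestBody body = MyRequestBody.fromXContent(parser); // hypothetical parse target
    // ... handle the request ...
}
```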
This commit enables CLI commands to be closeable and installs a runtime
shutdown hook to ensure that if the JVM shuts down (as opposed to
aborting) the close method is called.
It is not enough to wrap uses of commands in main methods in
try-with-resources blocks as these will not run if, say, the virtual
machine is terminated in response to SIGINT, or a system shutdown event.
Relates #22126
Add checks to RankEvalSpec to safeguard against missing parameters.
Fail early in case no metric is supplied, no rated requests are supplied, or the search source builder is missing but no template is supplied either.
Add stricter checks around rank eval request parsing: fail if a rated request contains both a verbatim request and request
template parameters.
Relates to #21260
Our query DSL supports empty queries (`{}`), which have a different meaning depending on the query that holds it, either ignored, match_all or match_none. We deprecated the support for empty queries in 5.0, where we log a deprecation warning wherever they are used.
The way we supported it once we moved query parsing to the coordinating node was having an Optional<QueryBuilder> return type in all of our parse methods (called fromXContent). See #17624. The central place for this was QueryParseContext#parseInnerQueryBuilder. We can now remove all the optional return types and simply throw an exception whenever an empty query is found.
Move rank-eval template compilation down to TransportRankEvalAction
Closes #21777 and #21465
Search templates for the rank_eval endpoint so far only worked when sent through
the REST endpoint.
However we also allow templates to be set through a Java API call to
"setTemplate" on that same spec. This doesn't go through template execution so
fails further down the line.
To make this work, moved template execution further down, probably to
TransportRankEvalAction.
* Add template IT test for Java API
* Move template compilation to TransportRankEvalAction
Action filters currently have the ability to filter both the request and
response. But the response side was not actually used. This change
removes support for filtering responses with action filters.
We don't use the test infra nor do we run the tests. They might all be
entirely out of date. We also have a different BWC test infra in-place.
This change removes all of the legacy infra.
For the record, I also had to remove the geo-hash cell and geo-distance range
queries to make the code compile. These queries already throw an exception in
all cases with 5.x indices, so that does not hurt any more.
I also had to rename all 2.x bwc indices from `index-${version}` to
`unsupported-${version}` to make `OldIndexBackwardCompatibilityIT`
happy.
This commit adds `qa/multi-cluster-search` which currently does a
simple search across 2 clusters. This commit also adds support for IPv6
addresses and fixes an issue where all shards of the local cluster are searched
when only a remote index was given.
* Scripting: Remove groovy scripting language
Groovy was deprecated in 5.0. This change removes it, along with the
legacy default language infrastructure in scripting.
We kept `netty_3` as a fallback in the 5.x series, but now that master
is 6.0 we don't need it; in other words, all issues coming up with
netty 4 will be blockers for 6.0.
* master: (22 commits)
Add proper toString() method to UpdateTask (#21582)
Fix `InternalEngine#isThrottled` to not always return `false`. (#21592)
add `ignore_missing` option to SplitProcessor (#20982)
fix trace_match behavior for when there is only one grok pattern (#21413)
Remove dead code from GetResponse.java
Fixes date range query using epoch with timezone (#21542)
Do not cache term queries. (#21566)
Updated dynamic mapper section
Docs: Clarify date_histogram bucket sizes for DST time zones
Handle release of 5.0.1
Fix skip reason for stats API parameters test
Reduce skip version for stats API parameter tests
Strict level parsing for indices stats
Remove cluster update task when task times out (#21578)
[DOCS] Mention "all-fields" mode doesn't search across nested documents
InternalTestCluster: when restarting a node we should validate the cluster is formed via the node we just restarted
Fixed bad asciidoc in boolean mapping docs
Fixed bad asciidoc ID in node stats
Be strict when parsing values searching for booleans (#21555)
Fix time zone rounding edge case for DST overlaps
...
There is not yet a BWC layer in sequence numbers. This commit sets the
BWC version to 6.0.0 for the BWC and rolling upgrade tests until this
BWC layer is built.
Adds a version constant for it, bwc indices, and a vagrant upgrade-from
version. Also bumps the "upgrade from" version for the backwards-5.0
test and adds `skip`s for tests that don't fail against 5.0 so we skip
them during the backwards testing.
Finally, this skips the "Shrink index via API" test because it fails
consistently for me. Inconsistently for CI, but consistently for me.
I'll work on making it consistent tomorrow.
In #21348 the command executed to run the packaging tests has been changed to "sudo -E bats ...", forcing all environment variables from the vagrant user to be passed to the `sudo` command. This breaks a test on opensuse-13 (the one where it checks that elasticsearch cannot be started when `java` is not found) because the user's full PATH is passed to the sudo command.
This commit restores the previous behavior while allowing only necessary testing environment variables to be passed using a /etc/sudoers.d file.
This change adds a test discovery implementation (which internally uses the existing
mock zen ping by default). Having the mock that the test framework selects be a discovery
implementation greatly simplifies discovery setup (no more weird callback to a Node
method).
Today when a node starts, we create dynamic socket permissions based on
the configured HTTP ports and transport ports. If no ports are
configured, we use the default port ranges. When a tribe node starts, a
tribe node creates an internal node client for connecting to each remote
cluster. If neither an explicit HTTP port nor transport ports were
specified, the default port ranges are large enough for the tribe node
and its internal node clients. If an explicit HTTP port or transport
port was specified for the tribe node, then socket permissions for those
ports will be created, but not for the internal node clients. Whether
the internal node clients have explicit ports specified, or attempt to
bind within the default range, socket permissions for these will not
have been created and the internal node clients will hit a permissions
issue when attempting to bind. This commit addresses this issue by also
accounting for tribe nodes when creating the dynamic socket
permissions. Additionally, we add our first real integration test for
tribe nodes.
This commit enables real BWC testing against a 5.1 snapshot. All
REST tests plus rolling upgrade test now run against a mixed version
cross major version cluster.
This commit changes the current :elasticsearch:qa:vagrant build file and transforms it into a Gradle plugin in order to reuse it in other projects.
Most of the code from the build.gradle file has been moved into the VagrantTestPlugin class. To avoid duplicated VMs when running vagrant tests, the Gradle plugin sets the following environment variables before running vagrant commands:
VAGRANT_CWD: absolute path to the folder that contains the Vagrantfile
VAGRANT_PROJECT_DIR: absolute path to the Gradle project that uses the VagrantTestPlugin
The VAGRANT_PROJECT_DIR is used to share project folders and files with the vagrant VM. These folders and files are exported when running the task `gradle vagrantSetUp` which:
- collects all project archives dependencies and copies them into `${project.buildDir}/bats/archives`
- copies all project bats testing files from 'src/test/resources/packaging/tests' into `${project.buildDir}/bats/tests`
- copies all project bats utils files from 'src/test/resources/packaging/utils' into `${project.buildDir}/bats/utils`
It is also possible to inherit and grab the archives/tests/utils files from project dependencies using the plugin configuration:
apply plugin: 'elasticsearch.vagrant'
esvagrant {
inheritTestUtils true|false
inheritTestArchives true|false
inheritTests true|false
}
dependencies {
// Inherit Bats test utils from :qa:vagrant project
bats project(path: ':qa:vagrant', configuration: 'bats')
}
The folders `${project.buildDir}/bats/archives`, `${project.buildDir}/bats/tests` and `${project.buildDir}/bats/utils` are then exported to the vagrant VMs and mapped to the BATS_ARCHIVES, BATS_TESTS and BATS_UTILS environment variables.
The following Gradle tasks have also been renamed:
* gradle vagrantSetUp
This task copies all the necessary files to the project build directory (was `prepareTestRoot`)
* gradle vagrantSmokeTest
This task starts the VMs and echoes a "Hello world" within each VM (was: `smokeTest`)
On some systems these utilities are in /usr/lib/systemd/systemd-sysctl
and /usr/sbin/sysctl, and on others the /usr is dropped. This commit
accounts for that fact.
Our docs claim that we set vm.max_map_count automatically. This is not
quite the case. The story is that on SysV init we set vm.max_map_count
each time the service starts, which is good. On systemd, we create a
sysctl.d conf file that sets vm.max_map_count, but this is only
meaningful if the system is rebooted after package install. This commit
modifies the post-install script to run systemd-sysctl so that
the vm.max_map_count change takes effect after package install without a
reboot.
Relates #21507
This commit ensures that VirtualBox is available in version 5.1+ on the system before running packaging tests. It also checks that the Vagrant version is greater than 1.8.6.
The environment variable ES_JVM_OPTIONS allows end-users to specify a
custom location for the jvm.options file. Unfortunately, this
environment variable is not exported from the SysV init scripts. This
commit addresses this issue, and includes a test that ES_JVM_OPTIONS and
ES_JAVA_OPTS work for the SysV init packages.
Relates #21445
At one point in the past when moving out the rest tests from core to
their own subproject, we had multiple test classes which evenly split up
the tests to run. However, we simplified this and went back to a single
test runner to have better reproducibility in tests. This change
removes the remnants of that multiplexing support.
Today if you start Elasticsearch with the status logger configured to
the warn level, or use a transport client with the default status logger
level, you will see warn messages about deprecation loggers being
created with different message factories and that formatting might be
broken. This happens because the deprecation logger is constructed using
the message factory from its parent, an artifact leftover from the first
Log4j 2 implementation that used a custom message factory. When that
custom message factory was removed, this constructor invocation should
have been changed to not explicitly use the message factory from the
parent. This commit fixes this invocation. However, we also added
status checking to all tests to ensure that there are no warn status log
messages that might indicate a configuration problem with Log4j 2. These
assertions blow up badly without the fix for the deprecation logger
construction, and also caught a misconfiguration in one of the logging
tests.
Relates #21339
The usage information for `elasticsearch-plugin` is quite verbose and makes the
actual error message that is shown when trying to remove a non-existing plugin
hard to spot. This changes the error code to not trigger printing the usage
information.
Closes #21250
Plugins: Remove pluggability of ZenPing
ZenPing is the part of zen discovery which knows how to ping nodes.
There is only one alternative implementation, which is just for testing.
This change removes the ability to add custom zen pings, and instead
hooks in the MockZenPing for tests through an overridden method in
MockNode. This also folds in the ZenPingService (which was really just a
single method) into ZenDiscovery, and removes the idea of having
multiple ZenPing instances. Finally, this was the last usage of the
ExtensionPoint classes, so that is also removed here.
When installing a plugin when the plugins directory does not exist, the
install plugin command outputs a line saying that it is creating this
directory. The packaging tests for the archive distributions accounted
for this, including an assertion that this line was output. The packages
have since been updated to include an empty plugins folder, so this line
will no longer be output. This commit removes this stale assertion from
the packaging tests.
Relates #21275
Today when installing Elasticsearch from an archive distribution (tar.gz
or zip), an empty plugins folder is not included. This means that if you
install Elasticsearch and immediately run elasticsearch-plugin list, you
will receive an error message about the plugins directory missing. While
the plugins directory would be created when starting Elasticsearch for
the first time, it would be better to just include an empty plugins
directory in the archive distributions. This commit makes this the
case. Note that the package distributions already include an empty
plugins folder.
Relates #21204
This adds support for templating in rank eval requests.
Relates to #20231
Problem: In its current state the rank-eval request API forces the user to repeat complete queries for each test request. In most use cases the structure of the query to test will be stable with only parameters changing across requests, so this looks like lots of boilerplate json for something that could be expressed in a more concise way.
Solution: Uses templating/ScriptService to enable users to submit only one test request template and let them only specify template parameters on a per test request basis.
Vagrant tests use a static list of dependencies to upgrade from
and we weren't including 5.0.0 deps in that list. Also when the
list was incorrect we weren't sorting the "current" list so it
was difficult to read.
Also adds 2.4.1 to the list but *doesn't* add 5.0.0 because we
still can't resolve it yet. We still only print an error when
the list is wrong but don't abort the build. We'll abort the build
once we've fixed resolution for 5.0.0 and we can re-add it.
We are upgrading from out of date versions in our tests right now and we
can't fix that because the current versions to upgrade from aren't in
maven central. We'll resolve the resolution issue soon, but for now
let's get the build green.
Since `java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64`, the OpenJDK packaged for CentOS and OEL overrides the default value (`false`) of the JVM option `AssumeMP` and forces it to `true` (see [this patch](https://git.centos.org/blob/rpms!!java-1.8.0-openjdk.git/ab03fcc7a277355a837dd4c8500f8f90201ea353/SOURCES!always_assumemp.patch)).
Because it is forced to true by default for these packages, the following warning message is printed to the standard output when the Vagrant box has only 1 CPU:
> OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
This message will then fail the test introduced in #20422 where we check if no entries have been added to the journal after the service has been started.
This commit restores the default value for the `AssumeMP` option for CentOS and OracleServer.
This commit mutes a check on the output of journalctl after the Elasticsearch's systemd service has been started. It expected no entries in the journal but since OpenJDK build 1.8.0_111-b15 the following warning message is printed:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
`LocalDiscovery` is a discovery implementation that uses static in-memory maps to keep track of current live nodes. This is used extensively in our tests in order to speed up cluster formation (i.e., shortcut the 3 second ping period used by `ZenDiscovery` by default). This is unfortunate, as it means that most of the tests run using different discovery semantics than what is used in production. Instead of replacing the entire discovery logic, we can use a similar approach to only shortcut the pinging components.
This change proposes the removal of all non-TCP transport implementations. The
mock transport can be used by default to run tests instead of the local transport; it has
roughly the same performance as TCP, or at least is not noticeably slower.
This is a master only change, deprecation notice in 5.x will be committed as a
separate change.
Today when parsing a request, Elasticsearch silently ignores incorrect
(including parameters with typos) or unused parameters. This is bad as
it leads to requests having unintended behavior (e.g., if a user hits
the _analyze API and misspells "tokenizer" then Elasticsearch will
just use the standard analyzer, completely against intentions).
This commit removes lenient URL parameter parsing. The strategy is
simple: when a request is handled and a parameter is touched, we mark it
as such. Before the request is actually executed, we check to ensure
that all parameters have been consumed. If there are remaining
parameters yet to be consumed, we fail the request with a list of the
unconsumed parameters. An exception has to be made for parameters that
format the response (as opposed to controlling the request); for this
case, handlers are able to provide a list of parameters that should be
excluded from tripping the unconsumed parameters check because those
parameters will be used in formatting the response.
Additionally, some inconsistencies between the parameters in the code
and in the docs are corrected.
Relates #20722
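A minimal, self-contained sketch of the consumed-parameter bookkeeping described above; it is illustrative only, as the real implementation lives in the REST request handling code:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class ParamTracker {
    private final Map<String, String> params;
    private final Set<String> consumed = new HashSet<>();

    ParamTracker(Map<String, String> params) {
        this.params = params;
    }

    String param(String key) {
        consumed.add(key);                 // touching a parameter marks it as consumed
        return params.get(key);
    }

    void ensureAllConsumed(Set<String> responseParams) {
        Set<String> unconsumed = new HashSet<>(params.keySet());
        unconsumed.removeAll(consumed);
        unconsumed.removeAll(responseParams);   // response-formatting params are exempt
        if (unconsumed.isEmpty() == false) {
            throw new IllegalArgumentException("unrecognized parameters: " + unconsumed);
        }
    }
}
```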
This change adds a hard limit to `index.number_of_shards` that prevents
indices from being created that have more than 1024 shards. This is still
a huge limit and can only be changed via setting a system property.
Today when executing the install plugin command without a plugin id, we
end up throwing an NPE because the plugin id is null yet we just keep
going (ultimately we try to look up the null plugin id in a set, the
direct cause of the NPE). This commit modifies the install command so
that a missing plugin id is detected and help is provided to the user.
Relates #20660
When testing tribe nodes in an integration test, we should pass the classpath
plugins of the node down to the tribe client nodes. Without this the tribe client
nodes could be prevented from communicating with the tribes.
Today when CLI tools are executed, logging statements can intentionally
or unintentionally be executed when logging is not configured. This
leads to status logger messages complaining that logging is not configured. This
commit reworks logging configuration for CLI tools so that logging is
always configured.
Relates #20575
This PR introduces backward compatibility index tests to test the rolling upgrade process amongst Elasticsearch instances within the same major version. The test executes in three phases. In the first phase, we form a cluster of 2 ES instances on an old version. In the second phase, we keep one of the nodes from the old cluster, kill the other node, but preserve its data directory and start an instance of the current version of ES using the same data directory as the killed instance. In the third phase, we kill the other old version ES instance from the first phase and launch a new instance, using the same data directory as the killed instance. Therefore, during phase 3, we have fully migrated and have all current versions of ES running. In each phase, we run REST tests that index documents and search them, ensuring at each stage that the documents from the previous phase are still there.
Note that because we haven't released a GA yet of 5.0, the tests currently don't start an old version cluster in the first phase. Once GA is released, this will be changed to make the backward compatibility version 5.0, while the current version in the cluster will be 5.x.
Test cluster nodes are no longer always stopped automatically between tasks, as we want some of the nodes from
the previous task to continue running in the next task. This
commit enables a cluster configuration setting to not stop
nodes automatically after a task runs, but instead the creator
of the test task must stop the running nodes explicitly in a
cleanup phase.
When starting up the nodes of a test cluster, we wait for the cluster health to indicate that the
necessary nodes have formed a cluster. This check was an
exact value (equality) check. However, if we are trying to
connect the nodes in the cluster to nodes from a previously
formed cluster (of the same name), then we will have more
nodes returned by the cluster health check than the current
task's configured number of nodes. Hence, this check needs
to be a >= check. This commit fixes it.
Today when starting Elasticsearch without a Log4j 2 configuration file,
we end up throwing an array index out of bounds exception. This is
because we are passing no configuration files to Log4j. Instead, we
should throw a useful error message to the user. This commit modifies
the Log4j configuration setup to throw a user exception if no Log4j
configuration files are present in the config directory.
Relates #20493
BATS upgrade tests fail on the master branch because they try to install 2.x versions to upgrade from instead of 5.x versions. And since #18554 we should only test upgrades from 5.0.0-alpha4 versions.
This commit changes the vagrant tests so that they try to list all the previous releases from version N-1. If nothing is found, they will fetch the current version and run the upgrade tests with it. This works nicely with the current master 6.0.0-alpha1-SNAPSHOT. Once 5.0.0 is released it should run the tests with it.
When uninstalling or upgrading elasticsearch using the RPM package some empty directories remain on the filesystem:
/usr/share/elasticsearch/bin
/usr/share/elasticsearch/lib
/usr/share/elasticsearch/modules
/usr/share/elasticsearch/modules/foo
Having empty directories in modules can prevent elasticsearch from starting after an upgrade: the plugins service expects to find a plugin-descriptor.properties file in every sub directory of modules.
This PR cleans things up a bit so that these empty directories are removed on upgrade/removal like they were in 2.x.
The Log4j shutdown hack test tests that a hack we have in place to
workaround a bug in Log4j during shutdown is effective. Log4j can use
JMX to control logging levels, but we disable this through the use of a
system property log4j2.disable.jmx (mainly because there is no need for
this feature, but it also means granting additional security
permissions). The bug in Log4j is that during shutdown, it neglects to
check whether or not its usage of JMX is disabled and so it attempts to
unregister management beans, leading to a permissions violation. The
test works by attempting to shutdown Log4j and thus triggering the bad
code path. With the Log4j hack in place, we have introduced jar hell so
that it is our code running instead of code from the Log4j jar. Our code
correctly checks that the usage of JMX is disabled and thus does not
trip on a permissions violation. The test was a little complicated in
that it attempted to just grant the minimal permissions needed for Log4j
to do its thing, but this can sometimes lead to other unwanted
permissions violations because the permissions put in place are more
restrictive than necessary. This commit simplifies this situation by
rewriting the test to only deny Log4j the sole permission needed to
trigger the bug.
Relates #20476
When upgrading elasticsearch using the RPM package, the scripts directory is removed if it is empty, but it won't be recreated by the upgraded package. After that the service won't start because the scripts dir is missing.
Today when setting the logging level via the command-line or an API
call, the expectation is that the logging level should trickle down the
hierarchy to descendant loggers. However, this is not necessarily the
case. For example, if loggers x and x.y are already configured then
setting the logging level on x will not descend to x.y. This is because
the logging config for x.y has already been forked from the logging
config for x. Therefore, we must explicitly descend the hierarchy when
setting the logging level and that is what this commit does.
Relates #20463
This commit introduces a new plugin for file-based unicast hosts
discovery. This allows specifying the unicast hosts participating
in discovery through a `unicast_hosts.txt` file located in the
`config/discovery-file` directory. The plugin will use the hosts
specified in this file as the set of hosts to ping during discovery.
The format of the `unicast_hosts.txt` file is to have one host/port
entry per line. The hosts file is read and parsed every time
discovery makes ping requests, thus a new version of the file that
is published to the config directory will automatically be picked
up.
Closes #20323
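An illustrative sketch of reading one host/port entry per line from `unicast_hosts.txt`, as described above; `configDir` is assumed to point at the node's config directory, and this is not the plugin's actual parser:

```java
// Sketch: each non-empty trimmed line is a host[:port] to ping during discovery.
List<String> hosts = Files.readAllLines(configDir.resolve("discovery-file").resolve("unicast_hosts.txt"))
        .stream()
        .map(String::trim)
        .filter(line -> line.isEmpty() == false)
        .collect(Collectors.toList());
```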
Today we add a prefix when logging within Elasticsearch. This prefix
contains the node name, and index and shard-level components if
appropriate.
Due to some implementation details of Log4j 2, this does not work for
integration tests; instead what we see is the node name for the last
node to start up. The implementation detail here is that in Log4j 2 there is
only one logger for a (name, message factory) pair, and the key derived
from the message factory is the class name of the message factory. So,
when the last node starts up and starts setting prefixes on its message
factories, it will impact the loggers for the other nodes.
Additionally, the prefixes are lost when logging an exception. This is
due to another implementation detail in Log4j 2. Namely, since we log
exceptions using a parameterized message, Log4j 2 decides that that
means that we do not want to use the message factory that we have
provided (the prefix message factory) and so logs the exception without
the prefix.
This commit fixes both of these issues.
Relates #20429
This commit adds a -q/--quiet option to Elasticsearch so that it does not log anything in the console and closes stdout & stderr streams. This is useful for SystemD to avoid duplicate logs in both journalctl and /var/log/elasticsearch/elasticsearch.log while still allowing the JVM to print error messages in stdout/stderr if needed.
closes #17220
The plugin command now displays the version of the plugin, which is
compared to a string without the version. This removes the version from
the string.
The evil logger tests rely on external configuration. This configuration
is shared between these tests which means that changing the
configuration for one test can cause an unrelated test to fail. In
particular, removing the appenders on the root logger so that inherited
loggers in one test do not have a console and file appender by default
breaks tests that were expecting the root logger to have these
appenders. This commit separates these configs so that these tests are
not subject to this problem.
Log4j has a bug where on shutdown it ignores that JMX might be disabled;
since it does not respect this on shutdown, it proceeds to attempt to
access JMX leading to a security exception that should have otherwise
not occurred had it respected that JMX is disabled. This commit
intentionally introduces jar hell with the Server class to work around
this bug until a fix is released.
Relates #20389
Previously we would disable console logging in certain circumstances
(for example, if Elasticsearch is not in the foreground, or if
Elasticsearch is in the foreground but an exception was thrown during
bootstrap). This commit makes this handling work with Log4j 2. This will
prevent users from seeing double bootstrap check failure messages.
Relates #20387
Previous versions of Elasticsearch permitted unquoted JSON field names even though this is against the JSON spec. This leniency was disabled by default in the 5.x series of Elasticsearch but a backwards compatibility layer was added via a system property with the intention of removing this layer in 6.0.0. This commit removes this backwards compatibility layer.
Relates #20388
The 5.x series of Elasticsearch emits a warning if any of the old
logging configuration formats are present. This commit removes that
warning.
Relates #20386
By default, when an exception causes the JVM to terminate, the stack
trace is printed. In the case of failing bootstrap checks, this stack
trace is useless to the user, and might even distract them from seeing
that the bootstrap checks failed for reasons under their control. With
this commit, we cause the stack trace for a failing bootstrap check to
be truncated.
We also modify some methods to not declare that they throw the top level
checked exception type Exception, but instead explicitly declare the
exceptions that they throw. These exceptions are caught and wrapped in a
BootstrapException so that we can percolate only two exception types out
of Bootstrap#init as checked exception, BootstrapException and
NodeValidationException.
Relates #19989
The logging configuration tests write to log files which are deleted at
the end of the test. If these files are not closed, some operating
systems will complain when these deletes are performed. This commit
ensures that the logging system is properly shutdown so that these files
can be properly deleted.
The evil logging tests write to log files which are deleted at the end
of the test. If these files are not closed, some operating systems will
complain when these deletes are performed. This commit ensures that the
logging system is properly shutdown so that these files can be properly
deleted.
This commit expands on the message printed when config files are
preserved when removing a plugin to give the user an indication of the
reason the config files are preserved.
When removing a plugin with a config directory, we preserve the config
directory. This is because the workflow for upgrading a plugin involves
removing and then installing the plugin again and losing the plugin
config in this case would be terrible. This commit causes a message
regarding this to be printed in case the user wants to manually delete
these files.
* master:
Avoid NPE in LoggingListener
Randomly use Netty 3 plugin in some tests
Skip smoke test client on JDK 9
Revert "Don't allow XContentBuilder#writeValue(TimeValue)"
[docs] Remove coming in 2.0.0
Don't allow XContentBuilder#writeValue(TimeValue)
[doc] Remove leftover from CONSOLE conversion
Parameter improvements to Cluster Health API wait for shards (#20223)
Add 2.4.0 to packaging tests list
Docs: clarify scale is applied at origin+offest (#20242)
When Netty 4 was introduced, it was not the default network
implementation. Some tests were constructed to randomly use Netty 4
instead of the default network implementation. When Netty 4 was made the
default implementation, these tests were not updated. Thus, these tests
are randomly choosing between the default network implementation (Netty
4) and Netty 4. This commit updates these tests to reverse the role of
Netty 3 and Netty 4 so that the randomization is choosing between Netty
3 and the default (again, now Netty 4).
Relates #20265
This commit adds an assumption to SmokeTestClientIT tests on JDK 9. The
underlying issue is that Netty attempts to access sun.nio.ch but this
package is not exported from java.base on JDK 9. This throws an uncaught
InaccessibleObjectException causing the test to fail. This assumption
can be removed when Netty 4.1.6 is released as it will include a fix for
this scenario.
Relates #20260
This commit enables CLI tools to have console logging. For the CLI
tools, we skip configuring the logging infrastructure via the config
file, and instead set the level only via a system property.
This commit fixes failing evil logging configuration tests. The test for
resolving multiple configuration files was failing after
9a58fc2348 removed some of the
configuration needed for this test. The solution is to revert the removal
of that configuration, but remove additivity from the test logger to
prevent the evil logger tests from failing.
This commit defaults the max local storage nodes to one. The motivation
for this change is that a default value greater than one is dangerous
as users sometimes end up unknowingly starting a second node and start
thinking that they have encountered data loss.
Relates #19964
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This enforces a limit to the number of scripts that
can be compiled within a minute. A new dynamic setting is added -
`script.max_compilations_per_minute`, which defaults to 15.
If more dynamic scripts are sent, a user will get the following
exception:
```json
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "i",
"node" : "a5V1eXcZRYiIk8lecjZ4Jw",
"reason" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
}
],
"caused_by" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
},
"status" : 500
}
```
This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.
Relates to #19396
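An illustrative sketch of the compile-once behavior mentioned above (not the actual `ScriptService` code): the cache is checked and populated under a lock so that concurrent searches cannot trigger duplicate compilations of the same script.

```java
// Sketch: at most one compilation per cache key, regardless of concurrent callers.
private final Map<String, Object> compiledCache = new HashMap<>();

Object getOrCompile(String key, Supplier<Object> compiler) {
    synchronized (compiledCache) {
        Object compiled = compiledCache.get(key);
        if (compiled == null) {
            compiled = compiler.get();        // only one thread compiles a given script
            compiledCache.put(key, compiled);
        }
        return compiled;
    }
}
```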
This commit fixes a test bug in
EvilJNANativesTests#testSetMaximumNumberOfThreads. Namely, the test was
not checking whether or not the value from /proc/self/limits was equal
to "unlimited" before attempting to parse as a long. This commit fixes
that error.
Today when we load the Netty plugins, we indirectly cause several Netty
classes to initialize. This is because we attempt to load some classes
by name, and loading these classes is done in a way that triggers a long
chain of class initializers within Netty. We should not do this; it
can lead to log messages before the logger is loaded, and it leads to
initialization in cases when the classes would never be needed (for
example, Netty 3 class initialization is never needed if Netty 4 is
used, and vice versa). This commit avoids this early initialization of
these classes by removing the need for the early loading.
Relates #19819
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html. https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
Closes #19079
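A hedged sketch of the header value described above, an absolute path of the form `/index/type/id`; the `response` object and whether the real code escapes the path components are assumptions:

```java
// Sketch: build the Location value from the document coordinates and attach it as a header.
String location = "/" + index + "/" + type + "/" + id;
response.addHeader("Location", location);
```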
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
We throw IOException, which is the exception that is going to be thrown in 99% of the cases. A more generic exception can happen, and if it is a runtime one we just let it bubble up as is, otherwise we wrap it into a runtime one so that we don't require catching Exception everywhere, which seems odd.
Also adjusted javadocs for all performRequest methods
The new method accepts the usual parameters (method, endpoint, params, entity and headers) plus a response listener and an async response consumer. Shortcut methods are also added that make params, entity and the async response consumer optional.
There are a few relevant api changes as a consequence of the move to async client that affect sync methods:
- Response doesn't implement Closeable anymore, responses don't need to be closed
- performRequest throws Exception rather than just IOException, as that is the exception that we get from the FutureCallback#failed method in the async http client
- ssl configuration is a bit simpler, one only needs to call setSSLStrategy from a custom HttpClientConfigCallback, which doesn't end up overriding any other default around connection pooling (it used to happen with the sync client and made ssl configuration more complex)
Relates to #19055
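A hedged usage sketch of the async variant described above, using one of the shortcut overloads; the exact overload set follows the description and may differ slightly in the actual client:

```java
// Sketch: fire the request and handle success/failure on the listener callbacks.
restClient.performRequestAsync("GET", "/_cluster/health", new ResponseListener() {
    @Override
    public void onSuccess(Response response) {
        // handle the response
    }

    @Override
    public void onFailure(Exception exception) {
        // handle the failure
    }
});
```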
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes #19412
This change removes the multiple ways that plugins can be added to the
integ test cluster. It also removes the use of the default
configuration, and instead adds a zip configuration to all plugins. This
will enable using project substitutions with plugins, which must be done
with the default configuration.
This removes the explicit wait for yellow cluster health after index
creation in the REST tests, as we no longer need it due
to index creation now waiting for active shard copies
before returning (by default, it waits for the primary of
each shard, which is the same as ensuring yellow health).
Relates #19450
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.
Relates #19439
Currently custom headers that should be passed through rest requests are
registered by depending on the RestController in guice and calling a
registration method. This change moves that registration to a getter for
plugins, and makes the RestController take the set of headers on
construction.
Today `node.mode` and `node.local` serve almost the same purpose, they
are a shortcut for `discovery.type` and `transport.type`. If `node.local: true`
or `node.mode: local` is set elasticsearch will start in _local_ mode which means
only nodes within the same JVM are discovered and a non-network based transport
is used. The _local_ mode is only really used in tests or if nodes are embedded.
For both embedding and tests, explicit configuration via `discovery.type` and `transport.type`
should be preferred.
This change removes all the usage of these settings and by default doesn't
configure a default transport implementation since netty is now a module. Yet, to make
the user experience flawless, plugins or modules can set a `http.type.default` and
`transport.type.default`. Plugins set this via `PluginService#additionalSettings()`
which enforces _set-once_ which prevents node startup if set multiple times. This means
that our distributions will just startup with netty transport since it's packaged as a
module unless `transport.type` or `http.transport.type` is explicitly set.
This change also found a bunch of bugs since several NamedWriteables were not registered if a
transport client is used. Now that we don't rely on the `node.mode` leniency which is inherited
instead of using explicit settings, `TransportClient` uses `AssertingLocalTransport` which detects these problems since it serializes all messages.
Closes #16234
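An illustrative sketch of a module or plugin providing the defaults named above via `additionalSettings()`; the plugin class and the transport name "netty4" are assumptions:

```java
// Sketch: a network plugin declaring itself as the default http and transport implementation.
public class MyNetworkPlugin extends Plugin {
    @Override
    public Settings additionalSettings() {
        return Settings.builder()
                .put("http.type.default", "netty4")
                .put("transport.type.default", "netty4")
                .build();
    }
}
```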
This moves all netty related code into modules/transport-netty. The module is built as a zip file as well as a JAR to serve as a dependency for the transport client. For the time being this is required, otherwise we have no network based impl. for transport client users. This might be subject to change as we move forward with the http client.
Some tests still start http implicitly or miss configuring the transport clients correctly.
This commit fixes all remaining tests and adds a dependency on `transport-netty` from
`qa/smoke-test-http` and `modules/reindex` since they need an http server running on the nodes.
This also moves all required permissions for netty into its module and out of core.
The callback replaces the ability to fully replace the http client instance. By doing that, one used to lose any default that the RestClient had set for the underlying http client. Given that you'd usually override one or two things only, like a couple of timeout values, the ssl factory or the default credentials provider, it is not user friendly if by doing that users end up replacing the whole http client instance and lose any defaults set by us.
This change adds a createComponents() method to Plugin implementations
which they can use to return already constructed components/services.
Eventually this should be just services ("components" don't really do
anything), but for now it allows any object so that preconstructed
instances by plugins can still be bound to guice. Over time we should
add basic services as arguments to this method, but for now I have left
it empty so as to not presume what is a necessary service.
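A hedged sketch of the new hook; per the description the method currently takes no arguments and may return any objects, so both the signature and the returned service here are kept minimal and illustrative:

```java
// Sketch: a plugin returning an already constructed component instead of binding a class in guice.
public class MyPlugin extends Plugin {
    @Override
    public Collection<Object> createComponents() {
        return Collections.singletonList(new MyService()); // MyService is a hypothetical component
    }
}
```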
This commit moves most of the http related integ tests out into their own
`qa/smoke-test-http` project where most of the tests can run against the external cluster.
This commit migrates the Vagrant box for Fedora for the packaging tests
from Fedora 22 to Fedora 24 as Fedora 22 reached end-of-life upon the
release of Fedora 24.
Relates #19308
The top-level class Throwable represents all errors and exceptions in
Java. This hierarchy is divided into Error and Exception, the former
being serious problems that applications should not try to catch and the
latter representing exceptional conditions that an application might
want to catch and handle. This commit renames
org.elasticsearch.cli.UserError to org.elasticsearch.UserException to
make its name consistent with where it falls in this hierarchy.
Relates #19254
Node IDs are currently randomly generated during node startup. That means they change every time the node is restarted. While this doesn't matter for ES proper, it makes it hard for external services to track nodes. Another, more minor, side effect is that indexing the output of, say, the node stats API results in creating new fields due to node ID being used as keys.
The first approach I considered was to use the node's published address as the base for the id. We already [treat nodes with the same address as the same](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java#L387) so this is a simple change (see [here](https://github.com/elastic/elasticsearch/compare/master...bleskes:node_persistent_id_based_on_address)). While this is simple and it works for probably most cases, it is not perfect. For example, if after a node restart, the node is not able to bind to the same port (because it's not yet freed by the OS), it will cause the node to still change identity. Also in environments where the host IP can change due to a host restart, identity will not be the same.
Due to those limitations, I opted to go with a different approach where the node id will be persisted in the node's data folder. This has the upside of connecting the id to the node's data. It also means that the host can be adapted in any way (replace network cards, attach storage to a new VM).
It does however also have downsides - we now run the risk of two nodes having the same id, if someone copies or clones a data folder from one node to another. To mitigate this I changed the semantics of the protection against multiple nodes with the same address to be stricter - it will now reject the incoming join if a node exists with the same id but a different address. Note that if the existing node doesn't respond to pings (i.e., it's not alive) it will be removed and the new node will be accepted when it tries another join.
Last, and most importantly, this change requires that *all* nodes persist data to disk. This is a change from current behavior where only data & master nodes store local files. This is the main reason for marking this PR as breaking.
Other less important notes:
- DummyTransportAddress is removed as we need a unique network address per node. Use `LocalTransportAddress.buildUnique()` instead.
- I renamed `node.add_lid_to_custom_path` to `node.add_lock_id_to_custom_path` to avoid confusion with the node ID which is now part of the `NodeEnvironment` logic.
- I removed the `version` parameter from `MetaDataStateFormat#write`, it wasn't really used and was just in the way :)
- TribeNodes are special in the sense that they do start multiple sub-nodes (previously known as client nodes). Those sub-nodes do not store local files but derive their ID from the parent node id, so they are generated consistently.
Today throughout the codebase, catch throwable is used with reckless
abandon. This is dangerous because the throwable could be a fatal
virtual machine error resulting from an internal error in the JVM, or an
out of memory error or a stack overflow error that leaves the virtual
machine in an unstable and unpredictable state. This commit removes
catch throwable from the codebase and removes the temptation to use it
by modifying listener APIs to receive instances of Exception instead of
the top-level Throwable.
Relates #19231
As discussed at https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595, we know that the current `discovery-azure` plugin only works with Azure Classic VMs / Services (which is somehow Legacy now).
The proposal here is to rename `discovery-azure` to `discovery-azure-classic` in case some users are using it.
And deprecate it for 5.0.
Closes #19144.
As some plugins are becoming big now, it is hard for the user to know if the plugin
is being downloaded or if nothing is happening.
This commit adds a progress bar during download, which can be disabled by using the `-q`
parameter.
In addition this updates to jimfs 1.1, which allows us to test the batch mode, as adding
security policies is now supported due to having jimfs:// protocol support in URL stream
handlers.
This commit adds randomization for the packaging upgrade test. In
particular, we extract a list of the released version of Elasticsearch
from Maven Central and randomize the selection of the version to upgrade
from. The randomization is repeatable, and supports the tests.seed
property. Specific versions can be tested by setting the property
tests.packaging.upgrade.from.versions.
Relates #19033
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull
based model where script plugins have to implement a dedicated interface
`ScriptPlugin` and defines simple getter returning instances rather than
classes.
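Roughly, the pull-based shape looks like the sketch below; the interfaces here are simplified stand-ins, and the exact getter names on the real `ScriptPlugin` may differ:

```java
import java.util.Collections;
import java.util.List;

// Simplified stand-ins for the real Elasticsearch types, just to show the shape of the pull model.
interface ScriptEngine {}
interface NativeScriptFactory {}

interface ScriptPlugin {
    // The plugin hands back ready-made instances; no Guice bindings or class registration involved.
    default ScriptEngine getScriptEngine() {
        return null;
    }

    default List<NativeScriptFactory> getNativeScripts() {
        return Collections.emptyList();
    }
}

class MyScriptingPlugin implements ScriptPlugin {
    @Override
    public ScriptEngine getScriptEngine() {
        // Return an instance rather than a class for the container to instantiate.
        return new ScriptEngine() {};
    }
}
```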
The database files have been doubled in size compared to the previous files being used.
For this reason the database files are now gzip compressed, which required using
`GZIPInputStream` when loading database files.
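Loading such a file might look roughly like this (the file name is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPInputStream;

class GeoIpDatabaseLoader {

    /** Opens a gzip-compressed database file and returns the decompressed stream. */
    static InputStream open(Path databaseFile) throws IOException {
        return new GZIPInputStream(Files.newInputStream(databaseFile));
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = open(Paths.get("GeoLite2-City.mmdb.gz"))) {
            System.out.println("first byte of decompressed database: " + in.read());
        }
    }
}
```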
Previously Elasticsearch used $DATA_DIR/$CLUSTER_NAME/nodes as the path
where data is stored; this commit changes that to be $DATA_DIR/nodes.
On startup, if the old folder structure is detected, it will be used.
This behavior will be removed in Elasticsearch 6.0.
Resolves #17810
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests; these tests were already covered by the rest tests.
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved reindex ingest tests to the qa module.
Closes #18490
We have 3 evil tests for jarhell. They have been failing in java 9
because of how evil they are. The first checks the leniency we add for
jarhell in the jdk itself. This is unnecessary, since if the leniency
wasn't there, we would already be failing all jarhell checks. The second
is checking the compile version is compatible with the jdk. This is
simpler since we don't need to fake the java version: we know 1.7 should
be compatible with both java 8 and 9, so we can use that as a constant.
Finally the last test checks if the java version system property is
broken. This is simply something we should not check; we have to trust
that java specifies it correctly, and again, if it was broken, all
jarhell checks would be broken.
Lucene SuppressForbidden is marked lucene.internal and should not be
used outside of Lucene. This commit removes the uses of this class
within Elasticsearch. Instead,
org.elasticsearch.common.SuppressForbidden should be used, which was
already the case in most places.
This commit fixes an issue with the plugins directory being a symbolic
link. Namely, the install plugins command attempts to always create the
plugins directory just in case it does not exist. The JDK method used
here guarantees that the directory is created, and an exception is not
thrown if the directory could not be created because it already
exists. The problem is that this JDK method does not respect symlinks, so
its internal existence check fails; it proceeds to attempt to create
the directory, but the directory creation fails because the symlink
exists. This is documented as not being an issue. We work around this by
checking if there is a symlink where we expect the plugins directory to
be, and only attempt to create it if not. We add a unit test verifying that plugin
installation to a symlinked plugins directory works as expected.
This commit removes the ability to specify a custom plugins
path. Instead, the plugins path will always be a subdirectory called
"plugins" off of the home directory.
- now you can specify a list of grok patterns to match your field with, and the first one to successfully match wins (see the sketch below).
- only non-null captures will be inserted into your matched document.
Fixes #17903.
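A minimal sketch of the first-match-wins behaviour, using plain `java.util.regex` named groups as a stand-in for real grok patterns:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class FirstMatchWins {

    /** Tries each pattern in order and returns the captures of the first one that matches. */
    static Map<String, String> firstMatch(List<Pattern> patterns, List<String> groupNames, String field) {
        for (Pattern pattern : patterns) {
            Matcher matcher = pattern.matcher(field);
            if (matcher.matches()) {
                Map<String, String> captures = new LinkedHashMap<>();
                for (String name : groupNames) {
                    String value = safeGroup(matcher, name);
                    if (value != null) { // only non-null captures end up in the document
                        captures.put(name, value);
                    }
                }
                return captures; // first successful match wins, later patterns are never tried
            }
        }
        return Collections.emptyMap();
    }

    private static String safeGroup(Matcher matcher, String name) {
        try {
            return matcher.group(name);
        } catch (IllegalArgumentException e) {
            return null; // this pattern does not define the group at all
        }
    }
}
```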
This removes the ScriptMode class entirely, which was an enum with two
options (ON and OFF) which essentially boiled down to true and false.
Now the boolean values are used instead.
Today when parsing settings during bootstrap, we add a system property
for every Elasticsearch setting. Additionally, settings can be set via
system properties. This commit simplifies this situation.
- settings are no longer propagated to system properties
- system properties can not be used to set settings
- the "es." prefix on settings is no longer required (nor permitted)
- test logging has a dedicated system property (tests.logger.level)
Relates #18198
The plugin script parses command-line options looking for Java system
properties and extracts these arguments to pass to the java command when
starting the JVM. Since elasticsearch-plugin allows arbitrary user
arguments to the JVM via ES_JAVA_OPTS, this parsing is unnecessary. This
commit removes this unnecessary parsing.
Relates #18207
This commit modifies the packaging tests to account for the fact that
rpm behaves differently with respect to preserving directories marked as
"CONFIG | NOREPLACE" on older versions versus newer versions. Older
versions will leave the directory as-is while newer versions will append
the suffix ".rpmsave" to the directory name.
Relates #18216
This change makes the vagrant tasks extend LoggedExec, so that the
entire vagrant output can be dumped on failure (and completely logged
when using --info). It should help for debugging issues like #18122.
Exit with proper exit code (1) and an error message if the elasticsearch
executable binary does not exist or has insufficient permissions to
execute.
Squashed commit of the following:
commit 9768d316303418ba4f9c96d3f87c376048a1b1bc
Author: Puru <tuladharpuru@gmail.com>
Date: Fri Apr 22 23:26:47 2016 +0545
Fixed ES_HOME typo
commit 79a2b0394297f8b02b6f71b71ba35ff79f1a684e
Author: Puru <tuladharpuru@gmail.com>
Date: Sun Apr 10 11:00:24 2016 +0545
Improve elasticsearch startup script test
Added improvement as per conversation in https://github.com/elastic/elasticsearch/pull/17082#issuecomment-206459613
commit 7be38e1fefd4baa6ccdbdc14745c00f6dc052e0c
Author: Puru <tuladharpuru@gmail.com>
Date: Wed Mar 23 13:23:52 2016 +0545
Add elasticsearch startup script test
The test ensures that elasticsearch startup script exists and is executable.
commit d10eed5c08260fa9c158a4487bbb3103a8d867ed
Author: Puru <tuladharpuru@gmail.com>
Date: Wed Mar 23 12:30:25 2016 +0545
Fixed IF syntax and failure message
commit 6dc66f616545572485b4d43bee05a4cbbf1bed72
Author: Puru <tuladharpuru@gmail.com>
Date: Sat Mar 12 11:08:11 2016 +0545
Fix exit code
Exit with proper exit code (1) and an error message if the elasticsearch executable binary does not exist or has insufficient permissions to execute.
This changes our packaging to be explicit about the permissions of files
and directories in the tar.gz, rpm, and deb packages. This is to protect
against a user having an incorrectly set umask when installing.
Additionally, plugins that are installed now have their permissions set
by the plugin installation so that plugins that may have been packaged
with incorrect permissions are secured.
Resolves #17634
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
Closes #17835
In Elasticsearch 5.0.0, by default unquoted field names in JSON will be
rejected. This can cause issues, however, for documents that were
already indexed with unquoted field names. To alleviate this, a system
property has been added that can be enabled so migration can occur.
This system property will be removed in Elasticsearch 6.0.0
Resolves #17674
This commit adds a new configuration file jvm.options to centralize and
simplify management of JVM options. This separates the configuration of
the JVM from the packaging scripts (bin/elasticsearch*, bin/service.bat,
and init.d/elasticsearch) simplifying end-user operational management of
custom JVM options.
This change makes specifying which boxes to run vagrant tests on a
little easier. Previously there were two tasks, checkPackages and
checkPackagesAllDistros. With this change, there is a single
packagingTest task. The boxes to run on are specified using the
gradle property vagrant.boxes, which can be easily specified on the
command line, or in a gradle properties file. There are also two
alias names, 'sample' for a yum and apt box, and 'all' for all boxes.
This commit ensures that the data, logs, and config directories have the
proper ownership after the packages are installed. Additionally, this
commit ensures that the configs in /etc/elasticsearch are preserved
after removal of the RPM package.
During installation, Debian asks whether the configuration file should be updated.
This is asked via a prompt and thus hangs automated installs.
This adds an option to always update to the newer config file, so automated
installation keeps working.
Change version; this required a minor fix in the RPM building.
In case of an alpha/beta version, the release will contain alpha/beta
as the RPM version cannot contain dashes/tildes.
This commit adds a bootstrap check on Linux and OS X for the max size of
virtual memory (address space) to the user running the Elasticsearch
process.
Closes #16935
This commit sets up the default filesystem used during install plugins
tests. A hack is needed to handle the temporary directory because the
system property "java.io.tmpdir" will have been initialized to a value
that is sensible for the default filesystem, but not necessarily to a
value that makes sense for the mock filesystem in use during the
tests. This property is restored after each test.
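The save-and-restore dance around `java.io.tmpdir` amounts to something like this hypothetical helper:

```java
import java.nio.file.Files;
import java.nio.file.Path;

class TmpDirOverride {

    private String originalTmpDir;

    // Before each test: point java.io.tmpdir at a directory that exists on the mock filesystem.
    void setUp(Path mockFsTempDir) throws Exception {
        originalTmpDir = System.getProperty("java.io.tmpdir");
        Files.createDirectories(mockFsTempDir);
        System.setProperty("java.io.tmpdir", mockFsTempDir.toString());
    }

    // After each test: restore the property so later tests see the value chosen for the default filesystem.
    void tearDown() {
        System.setProperty("java.io.tmpdir", originalTmpDir);
    }
}
```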
This commit refactors the unit tests for installing plugins to test
against mock filesystems (as well as the native filesystem) for better
test coverage. This commit also adds tests that cover the POSIX
attributes handling when installing plugins (e.g., ensuring that the
plugins directory has the right permissions, the bin directory has
execute permissions, and the config directory has the same owner and
group as its parent).
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to startup if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.
Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
This change adds the infrastructure to run the rest tests on a multi-node
cluster that uses 2 different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released to ensure the infrastructure works.
Relates to #14406. Closes #17072
When unzipping a plugin zip, the zip entries are resolved relative to
the directory being unzipped into. However, there are currently no
checks that the entry name is not absolute, or does not point
outside of the plugin dir via a relative path. This change adds a check for those two cases.
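The check is roughly of this shape (names are illustrative):

```java
import java.nio.file.Path;

class ZipEntryPathCheck {

    /** Resolves a zip entry name against the plugin directory and rejects absolute or escaping entries. */
    static Path resolveEntry(Path pluginDir, String entryName) {
        // pluginDir is assumed to be an absolute, normalized path.
        Path resolved = pluginDir.resolve(entryName).normalize();
        if (resolved.startsWith(pluginDir) == false) {
            // Catches both absolute entry names and relative names such as "../../evil.sh".
            throw new IllegalStateException("Zip entry [" + entryName + "] resolves outside of the plugin directory");
        }
        return resolved;
    }
}
```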
Indexing failures have caused the reindex http request to fail for a while
now. Both search and indexing failures cause it to abort. But search
failures didn't cause a non-200 response code from the http api. This
fixes that.
Also slips in a fix to some infrequently failing rest tests.
Closes #16037
Currently, the cluster service is tightly coupled to the transport service by both managing node connections and requiring the bound address in order to create the local disco node. This commit introduces a new NodeConnectionsService which is in charge of node connection management and makes it possible to remove all network related calls from the cluster service. The local DiscoNode is now created by DiscoveryNodeService and is set on both the cluster service and the transport service during node startup.
Closes #16788, closes #16872
This commit simplifies and consolidates the two different
implementations of terminals used in tests. There is now a single
MockTerminal which captures output, and allows accessing as one large
string (with unix style \n as newlines), as well as configuring
input.
The big win here is catching tests that are incorrectly named and will
be skipped by gradle, providing a false sense of security.
The whole thing takes about 10 seconds on my Macbook Air, not counting
compiling the test classes, which seems worth it. Because this runs as
a gradle task with proper UP-TO-DATE handling it can be skipped if the
tests haven't been changed which should save some time.
I chose to keep this in test:framework rather than a new subproject of
buildSrc because it relies on ESIntegTestCase and doesn't introduce any additional
dependencies.
DiscoveryService was a bridge into the discovery universe. This is unneeded and we can just access discovery directly or do things in a different way.
One of those different ways is not having a dedicated discovery implementation for each of our discovery plugins but rather reusing ZenDiscovery.
UnicastHostProviders are now classified by discovery type, removing unneeded checks on plugins.
Closes #16821
Today we might start a node and some of the paths might not have the
required permissions. This commit goes through all data directories as
well as index, shard and state directories and ensures we have write access.
To make this work across all operating systems, we try to write a real file
and remove it again in each of those directories.
All we do is check the cancelled flag and stop the request at a few key
points.
Adds the cancellation cause to the status so any request that is cancelled
but doesn't die can be seen in the task list.
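In spirit, the cancellation handling is as simple as the sketch below; the class and field names are made up for illustration:

```java
class CancellableBatchLoop {

    private volatile String cancellationReason; // null until the task is cancelled

    void cancel(String reason) {
        this.cancellationReason = reason;
    }

    boolean isCancelled() {
        return cancellationReason != null;
    }

    void run(Iterable<Runnable> batches) {
        for (Runnable batch : batches) {
            // Key point: check the flag between batches and stop quietly instead of throwing.
            if (isCancelled()) {
                return;
            }
            batch.run();
        }
    }

    String status() {
        // The cause stays visible, so a cancelled-but-still-finishing request shows up in the task list.
        return isCancelled() ? "cancelled: " + cancellationReason : "running";
    }
}
```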
One of our tests leaked a system property here since we failed after applying some
system properties in BootstrapCLIParser. This is not a huge deal in production since
we exit the JVM if we fail on that. Yet for correctness we should only apply them if
we manage to parse them all.
This also caused a test failure lately on CI but on an unrelated test:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/314/console
This processor is useful when all elements of a json array need to be processed in the same way.
This avoids having to define a processor for each element in an array.
Also, it is very likely that it is unknown how many elements are inside a json array.
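Conceptually the processor does something like this, applying one transformation to every element regardless of the array's length (a simplified sketch, not the actual processor code):

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

class ForEachSketch {

    /** Applies the same transformation to every element of an array field, whatever its length. */
    static List<Object> processAll(List<Object> arrayField, UnaryOperator<Object> elementProcessor) {
        return arrayField.stream().map(elementProcessor).collect(Collectors.toList());
    }
}
```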
Renamed `ingest-with-mustache` to `smoke-test-ingest-with-all-dependencies`
Also renamed `ingest-disabled` to `smoke-test-ingest-disabled` so that the name is more in line with other qa smoke test modules.
Identifying when a plugin id is maven coordinates is currently done by
checking if the plugin id contains 2 colons. However, a valid url could
have 2 colons, for example when a port is specified. This change adds
another check, ensuring the plugin id with maven coordinates does not
contain a slash, which only a url would have.
closes #16376
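The detection boils down to something like this (illustrative helper, not the exact plugin cli code):

```java
class PluginIdCheck {

    /**
     * Maven coordinates look like "group:artifact:version": exactly two colons and,
     * unlike a URL with a port (e.g. "https://host:8080/plugin.zip"), no slash.
     */
    static boolean isMavenCoordinates(String pluginId) {
        long colons = pluginId.chars().filter(c -> c == ':').count();
        return colons == 2 && pluginId.contains("/") == false;
    }
}
```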
This commit enables strict settings validation on node startup. All settings
passed to elasticsearch either through system properties, yaml files or any other
way to pass settings must be registered and valid. Settings that are unknown, i.e. due to
typos or due to deprecation or removal, will cause the node to NOT start up. Plugins
have to declare all their settings via `SettingsModule#registerSetting`, and settings for
plugins that are not installed must be removed.
This commit also removes the ability to specify the node's name via `-Des.name` or just `name` in the
configuration files. The node name must be prefixed with the node prefix like `node.name: Boom`. Left over
usage of `name` will also cause startup to fail.
Cli tools currently catch all exceptions, and only print the exception
message, except when a special system property is set. Even with this
flag set, certain exceptions, like IOException, are captured and their
stack trace is always lost.
This change adds a UserError class, which can be used by cli tools to
specify a message to the user, as well as an exit status. All other
exceptions are propagated out of main, so java will exit with non-zero
and print the stack trace.
The plugin cli currently is extremely lenient, allowing most errors to
simply be logged. This can lead to either corrupt installations (e.g.
partially installed plugins), or confused users.
This change rewrites the plugin cli to have almost no leniency.
Unfortunately it was not possible to remove all leniency, due in
particular to how config files are handled.
The following functionality was simplified:
* The format of the name argument to install a plugin is now an official
plugin name, maven coordinates, or a URL.
* Checksum files are required, and only checked, for official plugins
and maven plugins. Checksums are also only SHA1.
* Downloading no longer uses a separate thread, and no longer has a timeout.
* Installation, and removal, attempts to be atomic. This only truly works
when no config or bin files exist.
* config and bin directories are verified before copying is attempted.
* Permissions and user/group are no longer set on config and bin files.
We rely on the user's umask.
* config and bin directories must only contain files, no subdirectories.
* The code is reorganized so each command is a separate class. These
classes already existed, but were embedded in the plugin cli class, as
an extra layer between the cli code and the code running for each command.
If we don't do this, and some path.conf is set when starting the tribe node, that path.conf will be ignored and the inner tribe clients will try to read elsewhere, where they most likely don't have permissions to read from.
Closes #16253, closes #16258
This commit converts the script mode settings to the new settings
infrastructure. This is a major refactoring of the handling of script
mode settings. This refactoring is necessary because these settings are
determined at runtime based on the registered script engines and the
registered script contexts.
The rest test framework, because it used to be tightly integrated with
ESIntegTestCase, currently expects the addresses for the test cluster to
be passed using the transport protocol port. However, it only uses this
to then find the http address.
This change makes ESRestTestCase extend from ESTestCase instead of
ESIntegTestCase, and changes the sysprop used to tests.rest.cluster,
which now takes the http address.
closes #15459
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
This creates a reindex plugin with a very basic implementation that is
very much like delete-by-query. Next we'll integrate it with the task management
work, but for now this works.
Rather than failing the request, when a node with node.ingest set to false receives an index or bulk request with a pipeline id, it should try to redirect the request to another node with node.ingest set to true. If there are no nodes with node.ingest set to true based on the current cluster state, an exception will be returned and the request will fail. Note that in case there are no ingest nodes and a bulk has a pipeline id specified only for a subset of index requests, the whole bulk will fail.
Now that the ingest infra is part of es core we can remove some code that was required by the plugin and have a better integration with es core. We allow to specify the pipeline id in bulk and index as a request parameter, we have a REST filter that parses it and adds it to the relevant action request. That is not required anymore, as we can add this logic to RestIndexAction and RestBulkAction directly, no need for a filter. Also, we can allow to specify a pipeline id for each index requests in a bulk request. The small downside of this is that the ingest filter has to go over each item of a bulk request, all the time, to figure out whether they have a pipeline id.
CRUD and simulate apis work now fine, every node has the pipelines in memory, but node.ingest disables ingestion, meaning that any index or bulk request with a pipeline id is going to fail
We will keep this abstraction as it's convenient, otherwise IngestDocument would depend on ScriptService directly, and would explicitly rely on mustache which is not even part of core. Better to have the interface in core, and the impl as part of the ingest plugin, which relies on mustache, shipped with core by default.
The append processor allows appending one or more values to an existing list, adding a new list with the provided values if the field doesn't exist yet, or converting an existing scalar into a list and adding the provided values to the newly created list.
This required adapting the behaviour of IngestDocument#appendFieldValue; support for templating was also added to it.
Closes #14324
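A simplified sketch of the three append cases on a plain map, standing in for the real IngestDocument:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class AppendFieldValueSketch {

    @SuppressWarnings("unchecked")
    static void appendFieldValue(Map<String, Object> document, String field, List<Object> values) {
        Object existing = document.get(field);
        if (existing == null) {
            // Field does not exist yet: create a new list with the provided values.
            document.put(field, new ArrayList<>(values));
        } else if (existing instanceof List) {
            // Field is already a list: append the provided values.
            ((List<Object>) existing).addAll(values);
        } else {
            // Field is a scalar: convert it into a list, then append the provided values.
            List<Object> list = new ArrayList<>();
            list.add(existing);
            list.addAll(values);
            document.put(field, list);
        }
    }
}
```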
This change adds a Fixture class for use by gradle. A Fixture is an
external process that integration tests will use. It can be added as a
dependsOn for integTest, and will automatically be shutdown upon success
or failure, as well as relevant information dumped on failure. There is
also an example fixture in this change.
Added ingest wide template infrastructure to IngestDocument
Added a TemplateService interface that the ingest framework uses
Added a TemplateService implementation that the ingest plugin provides that delegates to the ES' script service
Cut SetProcessor over to use the template infrastructure for the `field` and `value` settings.
Removed the MetaDataProcessor
Removed dependency on mustache library
Added qa ingest mustache rest test so that the ingest and mustache integration can be tested.
This fixes the `lenient` parameter to be `missingClasses`. I will remove this boolean and we can handle them via the normal whitelist.
It also adds a check for sheisty classes (jar hell with the jdk).
This is inspired by the lucene "sheisty" classes check, but it has false positives. This check is more evil, it validates every class file against the extension classloader as a resource, to see if it exists there. If so: jar hell.
This jar hell is a problem for several reasons:
1. causes insanely-hard-to-debug problems (like bugs in forbidden-apis)
2. hides problems (like internal api access)
3. the code you think is executing, is not really executing
4. security permissions are not what you think they are
5. brings in unnecessary dependencies
6. it's jar hell
The more difficult problems are stuff like jython, where these classes are simply 'uberjared' directly in, so you can't just fix them by removing a bogus dependency. And there is a legit reason for them to do that: they want to support java 1.4.
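The essence of the check, sketched with plain classloader calls (assuming a Java 8 style extension classloader as the parent of the system classloader):

```java
class SheistyClassCheck {

    /**
     * Returns true if a class file we ship on our classpath is also visible to the
     * extension (parent) classloader, i.e. some jar smuggles in a copy of a JDK class.
     */
    static boolean isSheisty(String classResourceName) {
        // e.g. classResourceName = "javax/xml/bind/JAXBContext.class"
        ClassLoader extensionLoader = ClassLoader.getSystemClassLoader().getParent();
        return extensionLoader != null && extensionLoader.getResource(classResourceName) != null;
    }
}
```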
This change removes hardcoded ports from cluster formation. It passes
port 0 for http and transport, and then uses a special property to have
the node log the ports used for http and transport (just for tests).
This does not yet work for multi node tests. This brings us one step
closer to working with --parallel.
This commit removes and now forbids all uses of
Collections#shuffle(List) and Random#<init>() across the codebase. The
rationale for removing and forbidding these methods is to increase test
reproducibility. As these methods use non-reproducible seeds, production
code and tests that rely on these methods contribute to
non-reproducibility of tests.
Instead of Collections#shuffle(List) the method
Collections#shuffle(List, Random) can be used. All that is required then
is a reproducible source of randomness. Consequently, the utility class
Randomness has been added to assist in creating reproducible sources of
randomness.
Instead of Random#<init>(), Random#<init>(long) with a reproducible seed
or the aforementioned Randomness class can be used.
Closes #15287
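For example, a reproducible shuffle only needs an explicit seed:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class ReproducibleShuffle {
    public static void main(String[] args) {
        List<Integer> items = Arrays.asList(1, 2, 3, 4, 5);

        // Forbidden: Collections.shuffle(items) and new Random() both pick a
        // non-reproducible seed, so a failing test cannot be replayed.

        long seed = 42L; // any reproducible seed, e.g. derived from the test framework's seed
        Collections.shuffle(items, new Random(seed));
        System.out.println(items);
    }
}
```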
The NodeBuilder is currently used to construct a Node. However, this is
really just yet-another-builder that wraps around a Settings.Builder
with a couple of convenience methods. But there are very few uses of these
convenience methods. This change removes NodeBuilder, in favor of just
using the Node constructor.
The tribe node creates one local client node for each cluster it
connects to. Refactorings in #13383 broke this so that each local client
node now tries to load the full elasticsearch.yml that the real tribe
node uses.
This change fixes the problem by adding a TribeClientNode which is a
subclass of Node. The Environment the node uses is now passed in (in
place of Settings), and the TribeClientNode simply does not use
InternalSettingsPreparer.prepareEnvironment.
The tests around tribe nodes are not great. The existing tests pass, but
I also manually tested by creating 2 local clusters, and configuring and
starting a tribe node. With this I was able to see in the logs the tribe
node connecting to each cluster.
closes #13383
We currently use the full suite of packaged rest tests for each
distribution. We also used to run rest tests within core integ tests,
but this stopped working when we split out the test-framework, since the
test files are in there.
This change simplifies the code to run packaged rest tests just once,
for the integ-test-zip, and removes the unused rest tests from
test-framework. Distributions rest tests now check that all modules
were loaded.
The current mechanism for adding plugins to the integTest cluster is to
have a FileCollection. This works well for the integTests for a single
plugin, which automatically adds itself to be installed. However, for qa
tests where many plugins may be installed, and from other projects, it
is cumbersome to add configurations, dependencies and dependsOn
statements over and over. This simplifies installing a plugin from
another project by moving this common setup into the cluster
configuration code.
This change adds back the multi node smoke test, as well as making the
cluster formation for any test allow multiple nodes. The main changes in
cluster formation are abstracting out the node specific configuration to
a helper struct, as well as making a single wait task that waits for all
nodes after their start tasks have run. The output on failure was also
improved to log which node's info is being printed.
Currently we use the "gradle project attachment plugin" to support
building elasticsearch as part of another project. However, this plugin
has a number of issues, a large part of which is requiring consistent
use of the projectsPrefix.
This change removes projectsPrefix, and adds support for a special
extra-plugins directory in the root of elasticsearch. Any projects
checked out within this directory will be automatically added to
elasticsearch.
* Forbid System.setProperties & co in forbidden APIs.
* Ban property write access at runtime with security manager.
Plugins that need to modify system properties will need to request permission in their plugin-security.policy
closes #14726
Squashed commit of the following:
commit 5b591e98570e3fa481b2816a44063b98bff36ddf
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:54:08 2015 -0500
add assumption for self-signing in PluginManagerTests
commit ed11e5371b6f71591dc41c6f60d033502cfcf029
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:20:59 2015 -0500
show error output from integ test startup
commit d8b187a10e95d89a0e775333dcbe1aaa903fb376
Author: Robert Muir <rmuir@apache.org>
Date: Thu Nov 12 22:14:11 2015 -0500
fix gradle check under jigsaw
Just suck in the system policy, so it's compatible with any version of java.
It means it also respects configuration (e.g. for monitoring agents)
Closes#14704
Many other improvements:
* Use spaces in ES path
* Use space in path for plugin file installation
* Use a different cwd than ES home
* Use jps to ensure process being stopped is actually elasticsearch
* Stop ES if pid file already exists
* Delete pid file when successfully killed
Also, refactored the cluster formation code to be a little more organized.
closes #14464
This gets the tar and tar_plugins tests working in gradle. It does so by
adding a subproject, qa/vagrant, which adds the following tasks:
Verification
------------
checkPackages - Check the packages against a representative sample of the
linux distributions we have in our Vagrantfile
checkPackagesAllDistros - Check the packages against all the linux
distributions we have in our Vagrantfile
Package Verification
--------------------
checkCentos6 - Run packaging tests against centos-6
checkCentos7 - Run packaging tests against centos-7
checkDebian8 - Run packaging tests against debian-8
checkFedora22 - Run packaging tests against fedora-22
checkOel7 - Run packaging tests against oel-7
checkOpensuse13 - Run packaging tests against opensuse-13
checkSles12 - Run packaging tests against sles-12
checkUbuntu1204 - Run packaging tests against ubuntu-1204
checkUbuntu1404 - Run packaging tests against ubuntu-1404
checkUbuntu1504 - Run packaging tests against ubuntu-1504
Vagrant
-------
smokeTestCentos6 - Smoke test the centos-6 VM
smokeTestCentos7 - Smoke test the centos-7 VM
smokeTestDebian8 - Smoke test the debian-8 VM
smokeTestFedora22 - Smoke test the fedora-22 VM
smokeTestOel7 - Smoke test the oel-7 VM
smokeTestOpensuse13 - Smoke test the opensuse-13 VM
smokeTestSles12 - Smoke test the sles-12 VM
smokeTestUbuntu1204 - Smoke test the ubuntu-1204 VM
smokeTestUbuntu1404 - Smoke test the ubuntu-1404 VM
smokeTestUbuntu1504 - Smoke test the ubuntu-1504 VM
vagrantHaltCentos6 - Shutdown the vagrant VM running centos-6
vagrantHaltCentos7 - Shutdown the vagrant VM running centos-7
vagrantHaltDebian8 - Shutdown the vagrant VM running debian-8
vagrantHaltFedora22 - Shutdown the vagrant VM running fedora-22
vagrantHaltOel7 - Shutdown the vagrant VM running oel-7
vagrantHaltOpensuse13 - Shutdown the vagrant VM running opensuse-13
vagrantHaltSles12 - Shutdown the vagrant VM running sles-12
vagrantHaltUbuntu1204 - Shutdown the vagrant VM running ubuntu-1204
vagrantHaltUbuntu1404 - Shutdown the vagrant VM running ubuntu-1404
vagrantHaltUbuntu1504 - Shutdown the vagrant VM running ubuntu-1504
vagrantSmokeTest - Smoke test some representative distros from the Vagrantfile
vagrantSmokeTestAllDistros - Smoke test all distros from the Vagrantfile
vagrantUpCentos6 - Startup a vagrant VM running centos-6
vagrantUpCentos7 - Startup a vagrant VM running centos-7
vagrantUpDebian8 - Startup a vagrant VM running debian-8
vagrantUpFedora22 - Startup a vagrant VM running fedora-22
vagrantUpOel7 - Startup a vagrant VM running oel-7
vagrantUpOpensuse13 - Startup a vagrant VM running opensuse-13
vagrantUpSles12 - Startup a vagrant VM running sles-12
vagrantUpUbuntu1204 - Startup a vagrant VM running ubuntu-1204
vagrantUpUbuntu1404 - Startup a vagrant VM running ubuntu-1404
vagrantUpUbuntu1504 - Startup a vagrant VM running ubuntu-1504
It does not make the "check" task depend on "checkPackages" so running the
vagrant tests is still optional. They are slow and depend on vagrant and
virtualbox.
The Package Verification tasks are useful for testing individual distros.
The Vagrant tasks are listed in `gradle tasks` primarily for discoverability.
This change removes the leftover pom files. A couple files were left for
reference, namely in qa tests that have not yet been migrated (vagrant
and multinode). The deb and rpm assemblies also still exist for
reference when finishing their setup in gradle.
See #13930
There are three ways `@Test` was used. Way one:
```java
@Test
public void flubTheBlort() {
```
This way was always replaced with:
```java
public void testFlubTheBlort() {
```
Or, maybe with a better method name if I was feeling generous.
Way two:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
methodThatThrows();
}
```
This way of using `@Test` is actually pretty OK, but to get the tools to ban
`@Test` entirely it can't be used. Instead:
```java
public void testFoo() {
try {
methodThatThrows();
fail("Expected IllegalArgumentException");
} catch (IllegalArgumentException e ) {
assertThat(e.getMessage(), containsString("something"));
}
}
```
This is longer but tests more than the old ways and is much more precise.
Compare:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
some();
copy();
and();
pasted();
methodThatThrows();
code(); // <---- This was left here by mistake and is never called
}
```
to:
```java
@Test(expected=IllegalArgumentException.class)
public void testFoo() {
some();
copy();
and();
pasted();
try {
methodThatThrows();
fail("Expected IllegalArgumentException");
} catch (IllegalArgumentException e ) {
assertThat(e.getMessage(), containsString("something"));
}
}
```
The final use of `@Test` is:
```java
@Test(timeout=1000)
public void testFoo() {
methodThatWasSlow();
}
```
This is the most insidious use of `@Test` because it's tempting but tragically
flawed. Its flaws are:
1. Hard and fast timeouts can look like they are asserting that something is
faster and even do an ok job of it when you compare the timings on the same
machine but as soon as you take them to another machine they start to be
invalid. On a slow VM both the new and old methods fail. On a super-fast
machine the slower and faster ways succeed.
2. Tests often contain slow `assert` calls so the performance of tests isn't
sure to predict the performance of non-test code.
3. These timeouts are rude to debuggers because the test just drops out from
under it after the timeout.
Confusingly, timeouts are useful in tests because it'd be rude for a broken
test to cause CI to abort the whole build after it hits a global timeout. But
those timeouts should be very very long "backstop" timeouts and aren't useful
assertions about speed.
For all its flaws `@Test(timeout=1000)` doesn't have a good replacement __in__
__tests__. Nightly benchmarks like http://benchmarks.elasticsearch.org/ are
useful here because they run on the same machine but they aren't quick to check
and it takes lots of time to figure out the regressions. Sometimes it's useful
to compare dueling implementations but that requires keeping both
implementations around. All and all we don't have a satisfactory answer to the
question "what do you replace `@Test(timeout=1000)`" with. So we handle each
occurrence on a case by case basis.
For files with `@Test` this also:
1. Removes excess blank lines. They don't help anything.
2. Removes underscores from method names. Those would fail any code style
checks we ever care to run and don't add to readability. Since I did this manually
I didn't do it consistently.
3. Make sure all test method names start with `test`. Some used to end in `Test` or start
with `verify` or `check` and they were picked up using the annotation. Without the
annotation they always need to start with `test`.
4. Organizes imports using the rules we generate for Eclipse. For the most part
this just removes `*` imports which is a win all on its own. It was "required"
to quickly remove `@Test`.
5. Removes unneeded casts. This is just a setting I have enabled in Eclipse and
forgot to turn off before I did this work. It probably isn't hurting anything.
6. Removes trailing whitespace. Again, another Eclipse setting I forgot to turn
off that doesn't hurt anything. Hopefully.
7. Swaps some tests that override superclass tests to make them empty with
`assumeTrue`, so that the reasoning for the skips is logged in the test run and
it doesn't "look like" that thing is being tested when it isn't.
8. Adds an oxford comma to an error message.
The total test count doesn't change. I know. I counted.
```bash
git checkout master && mvn clean && mvn install | tee with_test
git checkout no_test_annotation && mvn clean && mvn install | tee not_test
grep 'Tests summary' with_test > with_test_summary
grep 'Tests summary' not_test > not_test_summary
diff with_test_summary not_test_summary
```
These differ somewhat because some tests are skipped based on the random seed.
The total shouldn't differ. But it does!
```
1c1
< [INFO] Tests summary: 564 suites (1 ignored), 3171 tests, 31 ignored (31 assumptions)
---
> [INFO] Tests summary: 564 suites (1 ignored), 3167 tests, 17 ignored (17 assumptions)
```
These are the core unit tests. So we dig further:
```bash
cat with_test | perl -pe 's/\n// if /^Suite/;s/.*\n// if /IGNOR/;s/.*\n// if /Assumption #/;s/.*\n// if /HEARTBEAT/;s/Completed .+?,//' | grep Suite > with_test_suites
cat not_test | perl -pe 's/\n// if /^Suite/;s/.*\n// if /IGNOR/;s/.*\n// if /Assumption #/;s/.*\n// if /HEARTBEAT/;s/Completed .+?,//' | grep Suite > not_test_suites
diff <(sort with_test_suites) <(sort not_test_suites)
```
The four suites with lower test counts all extend `AbstractQueryTestCase`
and all have a method that looks like this:
```java
@Override
public void testToQuery() throws IOException {
assumeTrue("test runs only when at least a type is registered", getCurrentTypes().length > 0);
super.testToQuery();
}
```
It looks like this method was being double counted on master and isn't anymore.
Closes #14028
This commit makes sure that the plugin script looks at user, group and permissions of the elasticsearch bin dir and copies them over to the plugin bin subdirectory, whatever they are, so that they get properly setup depending on how elasticsearch was installed. We also make sure that execute permissions are added for files (we already did this before).
Relates to #11016. Closes #14088
Depending on how elasticsearch is installed, we have two scenarios to take into account that relate to user, group and permissions assigned to the config directory:
1) deb/rpm package: /etc/elasticsearch is root:elasticsearch 750 and the plugin script is run from root user
2) tar/zip archive: es config dir is most likely elasticsearch:elasticsearch and the plugin script is most likely run from elasticsearch user
When the plugin script copies over the plugin config dir within the es config dir, it should take care of setting the proper user, group and permissions, which vary depending on how elasticsearch was installed in the first place. Should be root:elasticsearch 750 if installed from a package, or elasticsearch:elasticsearch if installed from an archive.
This commit makes sure that the plugin script looks at user, group and permissions of the config dir and copies them over to the plugin config subdirectory, whatever they are, so that they get properly setup depending on how elasticsearch was installed in the first place. We also make sure that execute permissions are left untouched for files.
Relates to #11016. Closes #14048
When generating the rpm and deb package we now set the proper group (elasticsearch) and permissions (750) on the conf dir (default /etc/elasticsearch). Same for the scripts subdirectory.
Expanded the assert_file bash function to also optionally check the group of files, so we can actually test that the group was set correctly.
Relates to #11016. Closes #14017
It is rarely used and was not consistently handled by different distributions anyway.
This commit also adds a test for specifying CONF_DIR when installing plugins and
starting elasticsearch.
Relates to #12712 and #12954. Closes #5329, closes #13715
Package installation already creates the plugin directory, so when a plugin
is installed it prints the additional line:
Plugins directory [/tmp/elasticsearch/plugins] does not exist. Creating...
The plugin cli tool configures logging with whatever is in logging.yml.
If a file appender is configured for any of the logs this will cause creation
of an empty log file. If a plugin was for example installed as root it will
create empty logs at es.home/logs.
This is problematic when, for example, plugins are installed as root and es is run
as a service. Logs will then be created in /usr/share/elasticsearch/logs
and cannot later be removed by, for example, dpkg -r or --purge.
To avoid this, configure the logger to use an appender that writes to the same
output that plugin cli tool does. This allows other components that are called
from Plugin cli tool to write to the same terminal that plugin cli tool writes to
by using the logging mechanism already in place.
The logging conf is not read at all by the plugin cli tool.
As a side effect, the logging level for components that are called
from the plugin command, such as the jar hell check, can now be configured
with -Des.logger.level, which makes it easier to debug the jar hell check.
Installs javatana in vivid, emulates its on-login actions when starting
elasticsearch and verifies that elasticsearch turns off javatana.
Relates to #13813