* Don't print download progress in batch mode
With this change we will no longer provide the progress bar in batch
mode.
Assuming that this mode is mainly for consumption by tools which
will serialize the output, we shouldn't print a progress bar update
for every percentage point.
* PR review
For each API, the following updates were made:
- Add deprecation warnings to `Rest*Action`, plus tests in `Rest*ActionTests`.
- For each REST yml test, make sure there is one version without types, and another legacy version that retains types (called *_with_types.yml).
- Deprecate relevant methods on the Java HLRC requests/responses.
- Update documentation (for both the REST API and Java HLRC).
This commit introduces the building of the Docker images as bona fide
packaging formats alongside our existing archive and packaging
distributions. This build is migrated from a dedicated repository, and
converted to Gradle in the process.
Currently if `java` is not in $PATH the preinst script fails
prematurely and prevents an appropriate message from getting displayed
to the user.
Make package installation more user friendly when java is not in
$PATH and add a test for it.
Also use a shebang in the preinst script, as, at least in Debian,
maintainer scripts must start with the #! convention [1].
Relates #31845
[1] https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html
In the long run we want to move all of startup to a Java program. This
will simplify our startup scripts and make maintenance of startup less
dependent on the underlying platform that we run on. This commit moves
the creation of the temporary directory off of system-dependent commands
and onto a simple Java program.
The list of official plugins accidentally included `qa` projects like,
well, `qa` and `amazon-ec2`. This changes the mechanism that we use to
build the list and adds a test to catch this.
Closes #35623
With this change, `Version` no longer carries information about the qualifier,
but we still need a way to show the "display version" that does have both
qualifier and snapshot. This is now stored by the build and read from `META-INF`.
* Introduce property to set version qualifier
- VersionProperties.elasticsearch is now a string which can have qualifier
and snapshot too
- The Version class in the build no longer cares about snapshot and
qualifier.
This commit updates the procrun manager and service exes to 1.1.0. There
are a few bug fixes, including for a bug which can cause lingering
processes when removing the service.
Back in #32983 I broke running the integ-test-zip tests against an
external cluster by adding a test that reads the contents of the log
file. This fixes running against an external cluster by explicitly
skipping that test if running against an external cluster.
The BWC builds for the 6.x branch should be using JDK 11. This commit
fixes the BWC builds to specify that they use JDK 11 instead of JDK 10
which is now incompatible with the 6.x build.
To pass the HOSTNAME environment variable to the Windows service, we
have to add some command line flags to the service invocation. Namely,
we have to specify that we are passing the HOSTNAME variable, and we will
pass for it the value of %%COMPUTERNAME%%. This ensures that if the
hostname is changed, we pick this up the next time that the service is
started. This change is needed for the service now that we use the
HOSTNAME as the default node name.
#32281 adds elasticsearch-shard to provide a bwc version of elasticsearch-translog for 6.x; we have to remove elasticsearch-translog for 7.0
Relates to #31389
When we implemented `refresh=wait_for` I added a test with the wrong
name. This caused us to not run it. The test asserted that running
several operations with `refresh=wait_for` did not fail if the index was
`_close`d while the operations were waiting. But to be honest, failure
here isn't that bad. The index being waited on is closed. You can't do
anything with it any way. The most important thing is actually that
these operations don't hang forever. Because hanging forever means that
the resources used by the operations aren't freed.
Anyway, when I noticed the error I reenabled the test. But it doesn't
pass consistently because *sometimes* the operations being tested fail.
They don't seem to hang and they always fail with "this index is closed
so you can't do anything with it" sorts of messages.
When the test started failing we disabled it again. This reenables the
test but causes it to ignore these "index is closed" failures. We'd
prefer they not happen at all but in the grand scheme of things they are
fine and making sure these operations don't hang is much more important.
This also updates the test to bring it more in line with my current
understanding of the "right" way to use the low level rest client.
* Add commented out JVM options for G1GC
These options are available now that we will be supporting G1GC for Java 10 and
above. They are also designed so that the CMS options don't have to be commented
out in order for the G1 options to take effect.
* Update wording
Changes the default of the `node.name` setting to the hostname of the
machine on which Elasticsearch is running. Previously it was the first 8
characters of the node id. This had the advantage of producing a unique
name even when the node name isn't configured but the disadvantage of
being unrecognizable and not being available until fairly late in the
startup process. Of particular interest is that it isn't available until
after logging is configured. This forces us to use a volatile read
whenever we add the node name to the log.
Using the hostname is available immediately on startup and is generally
recognizable but has the disadvantage of not being unique when run on
machines that don't set their hostname or when multiple elasticsearch
processes are run on the same host. I believe that, taken together, it
is better to default to the hostname.
1. Running multiple copies of Elasticsearch on the same node is a fairly
advanced feature. We do it all the time as part of the elasticsearch build
for testing but we make sure to set the node name then.
2. That the node.name defaults to some flavor of "localhost" on an
unconfigured box feels like it isn't going to come up too much in
production. I expect most production deployments to at least set the
hostname.
As a bonus, production deployments need no longer set the node name in
most cases. At least in my experience most folks set it to the hostname
anyway.
I created a test a few days ago and declared a package that doesn't line
up with the directory structure. Oops. I'm a little surprised nothing
complained. But this fixes it.
I disabled one branch a few hours ago because it failed in CI. It looks
like other branches can also fail so I'll disable them as well and look
more closely on Monday.
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name logging is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
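A minimal sketch of the set-once idea, assuming a hypothetical `NodeNameHolder` helper (the real change is a Log4j pattern converter; the class and method names here are illustrative):
```
import java.util.concurrent.atomic.AtomicReference;

/**
 * Illustrative holder for the node name used by the logging pattern.
 * This sketch only shows the set-once semantics and the "unknown" fallback.
 */
final class NodeNameHolder {
    private static final AtomicReference<String> NODE_NAME = new AtomicReference<>();

    /** Set the node name exactly once, as early as it becomes known. */
    static void setNodeName(String nodeName) {
        if (NODE_NAME.compareAndSet(null, nodeName) == false) {
            throw new IllegalStateException("node name already set");
        }
    }

    /** Read on every print: before the name is known we print "unknown". */
    static String nodeNameOrUnknown() {
        String name = NODE_NAME.get();
        return name == null ? "unknown" : name;
    }
}
```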
The main benefit of the upgrade for users is the search optimization for top scored documents when the total hit count is not needed. However, this optimization is not activated in this change; there is another issue opened to discuss how it should be integrated smoothly.
Some comments about the change:
* Tests that can produce negative scores have been adapted but we need to forbid them completely: #33309
Closes #32899
Gradle triggers the build of artifacts even if assemble is disabled.
Most users will not need bwc distributions after running `./gradlew
assemble` so instead of forcing them to add `-x buildBwcVersion`, we
detect this and skip the configuration of the artifacts.
- third party audit detects jar hell with the JDK so we disable it
- the jdk-non-portable check in forbiddenapis detects classes being used from the
JDK (for FIPS) that are not portable; this is intended, so we don't
scan for it on FIPS
- different exclusion rules for third party audit on FIPS
Closes #33179
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `client` and `distribution` projects to use
the new versions.
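For reference, a minimal sketch of the migration described here using the low-level `RestClient`; the host, endpoint, and parameter values are illustrative:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class LowLevelClientExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // New style: build a Request object, set parameters on it, then execute it.
            Request request = new Request("GET", "/_cluster/health");
            request.addParameter("pretty", "true");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
            // Old style, deprecated in #30315: performRequest(method, endpoint, params...).
        }
    }
}
```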
On some Linux distributions tmpfiles.d cleans files and
directories under /tmp if they haven't been accessed for
10 days.
This can cause problems for ML as ML is currently the only
component that uses the temp directory more than a few
seconds after startup. If you didn't open an ML job for
10 days and then tried to open one then the temp directory
would have been deleted.
This commit prevents the problem occurring in the case of
Elasticsearch being managed by systemd, as systemd private
temp directories are not subject to periodic cleanup (by
default).
Additionally there are now some docs to warn people about
the risk and suggest a manual mitigation for .tar.gz users.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
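A rough sketch of the thread-name trick; the `elasticsearch[nodeName][...]` shape and the regex below are assumptions for illustration, not the exact pattern the test framework uses:
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class ThreadNameNodeName {
    // Assumed thread name shape: "elasticsearch[node_s0][search][T#1]".
    private static final Pattern NODE_NAME = Pattern.compile("elasticsearch\\[([^\\]]+)\\]");

    /** Best-effort node name for the current thread, or the whole thread name as a fallback. */
    static String guessNodeName() {
        String threadName = Thread.currentThread().getName();
        Matcher matcher = NODE_NAME.matcher(threadName);
        return matcher.find() ? matcher.group(1) : threadName;
    }
}
```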
Explicitly include all subdirectories of these folders in
/usr/share/elasticsearch in package distributions so that they are
managed by the package manager. This change does not really have an
effect in the 7.x series, where there are no subdirectories in bin, and
we were already doing this in lib and modules. It does have an effect in
the 6.x series where the bin/x-pack subdirectory was not previously
tracked by the package manager and could be left behind on removal in
rpm distributions.
* Remove BouncyCastle dependency from runtime
This commit introduces a new gradle project that contains
the classes that have a dependency on BouncyCastle. For
the default distribution, it builds a jar from those classes and
puts it in a subdirectory of lib
(/tools/security-cli) along with the BouncyCastle jars.
This directory is then passed in the
ES_ADDITIONAL_CLASSPATH_DIRECTORIES of the CLI tools
that use these classes.
BouncyCastle is removed as a runtime dependency (remains
as a compileOnly one) from x-pack core and x-pack security.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `distribution/archives/integ-test-zip` project
to use the new versions.
The C2 compiler in JDK 10 appears to have an issue compiling to AVX-512
instructions (on hardware that supports such). As a workaround, this
commit adds a JVM flag on JDK 10+ to disable the use of AVX-512
instructions until a fix is introduced to the JDK. Instead, we use a
flag to enable AVX and AVX2 only.
Note: Based on my reading of the C2 code, this flag does not appear to
have any impact on hardware that does not support AVX2. I have tested
this manually on an Intel Atom C2538 processor that supports neither AVX
nor AVX2. I have also tested this manually on an Intel i5-3317U
processor that supports AVX but not AVX2.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc., we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, but as
we control the testing infrastructure this should be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS Ciphers and the
SAMLAuthenticator class as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
reenabled with precomputed and signed SAML messages at a later stage.
It will be covered in a subsequent PR.
We mistakenly enabled bundling of the default distribution's bin scripts
into the `integ-test-zip` artifact used by plugin authors to test plugins.
These didn't change the version of Elasticsearch used for testing but as
a side effect changed the LICENSE.txt from the Apache 2 license to the
Elastic license. We really didn't mean for that to happen. The bin script
and the elasticsearch-sql-cli jar file bundled into the distribution are
indeed governed by the Elastic license but we didn't intend for them to be
in the testing artifact in the first place. This removes them and fixes
the license of the `integ-test-zip` artifact.
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format `
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix JAVA version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* fix painless array tests
closes #31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on compile java version, not runtime
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
So the issue here is that we want to avoid setting vm.max_map_count if
it is already equal to the desired value (the bootstrap check requires
262144). The reason we want to avoid this is that in some use-cases
using sysctl to set this will fail. In these cases, we want to enable
users to set this value externally, and when it already has the desired
value, skip using sysctl to set it so that the cases where using sysctl
would fail no longer fail.
The package installation relies on java being in the path. If java is
not in the path, the tests fail at post-install time. This commit adds a
pre-install check to validate that java exists, and if it fails, the
package is never installed, and thus keeps a system clean, rather than
aborting at post-install and leaving behind a mess.
Closes #29665
With this commit we add the possibility to define further JVM options (and
system properties) based on the current environment. As a proof of concept, it
chooses Netty's allocator ergonomically based on the maximum defined heap size.
We switch to the unpooled allocator at 1GB heap size (value determined
experimentally, see #30684 for more details). We are also explicit about the
choice of the allocator in either case.
Relates #30684
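A hedged sketch of that ergonomic choice: the 1GB threshold comes from the text, while reading the heap via `Runtime` and emitting the `io.netty.allocator.type` property are assumptions about how one might wire it up, not the actual launcher code.
```
final class NettyAllocatorErgonomics {
    private static final long ONE_GB = 1L << 30;

    /** Pick the Netty allocator system property based on the configured max heap. */
    static String chooseAllocatorOption(long maxHeapBytes) {
        String type = maxHeapBytes <= ONE_GB ? "unpooled" : "pooled";
        return "-Dio.netty.allocator.type=" + type;
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println(chooseAllocatorOption(maxHeap));
    }
}
```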
This commit modifies the Sys V init startup scripts to only modify
vm.max_map_count if needed. In this case, needed means that the current
value is less than our default value of 262144 maps.
For 6.3 we renamed the `tar` and `zip` distributions to `oss-tar` and
`oss-zip`. Then we added new `tar` and `zip` distributions that contain
x-pack and are licensed under the Elastic License. Unfortunately we
accidentally generated POM files along side the new `tar` and `zip`
distributions that incorrectly claimed that they were Apache 2 licensed.
Oooops.
This fixes the license on the POMs generated for the `tar` and `zip`
distributions.
This was silly; Bouncy Castle has an armored input stream for reading
keys in ASCII armor format. This means that we do not need to strip the
header ourselves and base64 decode the key. This had problems anyway
because of discrepancies in the padding that Bouncy Castle would produce
and the JDK base64 decoder was expecting. Now that we armor input/output
the whole way during tests, we fix all random failures in test cases
too.
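Roughly what "use the armored input stream" looks like, assuming Bouncy Castle's `org.bouncycastle.bcpg.ArmoredInputStream`; error handling is trimmed and the helper name is illustrative:
```
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.bouncycastle.bcpg.ArmoredInputStream;

final class ArmoredKeyReader {
    /** Decode an ASCII-armored key: the stream strips the header/footer and base64 for us. */
    static byte[] readArmored(InputStream in) throws IOException {
        try (ArmoredInputStream armored = new ArmoredInputStream(in);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = armored.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            return out.toByteArray();
        }
    }
}
```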
The java version checker must be written with java 7 APIs.
In order to use java 8 APIs in other launcher utilities, this commit
moves the java version checker back to its own jar.
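A minimal sketch of such a checker, restricted to APIs that exist in Java 7; the required version and messages are illustrative, not the actual checker:
```
/** Compiled with -source 7 -target 7 so it can run (and fail gracefully) on old JVMs. */
final class JavaVersionChecker {
    public static void main(String[] args) {
        String specVersion = System.getProperty("java.specification.version"); // e.g. "1.7", "1.8", "11"
        if (isBefore(specVersion, "1.8")) {
            System.err.println("the minimum required Java version is 8; your Java version "
                    + specVersion + " is too old");
            System.exit(1);
        }
    }

    /** Dotted-numeric comparison, avoiding anything newer than Java 7. */
    private static boolean isBefore(String version, String required) {
        String[] v = version.split("\\.");
        String[] r = required.split("\\.");
        for (int i = 0; i < Math.max(v.length, r.length); i++) {
            int vi = i < v.length ? Integer.parseInt(v[i]) : 0;
            int ri = i < r.length ? Integer.parseInt(r[i]) : 0;
            if (vi != ri) {
                return vi < ri;
            }
        }
        return false;
    }
}
```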
This commit moves the default location of the full dependencies report
to be under the reports directory to align it with the location for the
dependenciesInfo task output.
A previous commit tried to add task dependencies for the
:distribution:generateDependenciesReport task so that a user did not
have to run "dependenciesInfo
:distribution:generateDependenciesReport". However this method did not
reliably add all task dependencies due to task ordering issues in
previous versions of Gradle and our build. This commit removes this for
now and a user will continue to have to run "dependenciesInfo
:distribution:generateDependenciesReport".
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish, users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown, these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
with no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
We no longer need animal sniffer because we use JDK functionality
(introduced in JDK 9) to target older versions of the JDK for
compilation. This functionality means that the JDK handles the problem
of ensuring that we do not use JDK APIs from the version that we are
compiling from that are not available in the version that we are
compiling to. A previous commit removed this for the REST client (where
we target JDK 7) but a few traces were left behind.
This commit adjusts the indentation in the CLI scripts to give a clear
visual indication that the line being indented is a continuation of the
previous line.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell into a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the Windows side of that effort and the Bash side was
in a previous commit.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell into a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the non-Windows side of that effort and the Windows
side will be in a follow-up.
If you invoke elasticsearch-plugin (or any other CLI script on Windows)
with a path that has a percent-encoded space (or any other
percent-encoded character) because the CLI scripts now shell into a
common shell script (elasticsearch-cli) the percent-encoded space ends
up being interpreted as a parameter. For example passing install --batch
file:/c:/encoded%20space/analysis-icu-7.0.0.zip to elasticsearch-plugin
leads to the %20 being interpreted as %2 followed by a zero. Here, the
%2 is interpreted as the second parameter (--batch) and the
InstallPluginCommand class ends up seeing
file:/c/encoded--batch0space/analysis-icu-7.0.0.zip as the path which
will not exist. This commit addresses this by escaping the %* that is
used to pass the parameters to the common CLI script so that the common
script sees the correct parameters without the %2 being substituted.
Applies default file and directory permissions to zip distributions
similar to how they're set for the tar distributions. Previously zip
distributions would retain permissions they had on the build host's
working tree, which could vary depending on its umask.
For #30799
A previous commit added the public key used for signing artifacts to the
plugin CLI. This commit is an iteration on that to add the header and
footer to the key so that it is clear what the key is. Instead, we strip
the header/footer on read. With this change we simplify our test, where
keys are already generated in this format and we previously had to strip
them on the test side.
We sign our official plugins yet this is not well-advertised and not at
all consumed during plugin installation. For plugins that are installed
over the intertubes, verifying that the downloaded artifact is signed by
our signing key would establish both integrity and validity of the
downloaded artifact. The chain of trust here is simple: our installable
artifacts (archive and package distributions) are signed, so if a user trusts
our packages via their signatures, and our plugin installer (which would
be executing trusted code) verifies the downloaded plugin, then the user
can trust the downloaded plugin too. This commit adds verification of
official plugins downloaded during installation. We do not add
verification for offline plugin installs; a user can download our
signatures and verify the artifacts themselves.
This commit also needs to solve a few interesting challenges. One of
these is that we want the bouncy castle JARs on the classpath only for
the plugin installer, but not for the runtime
Elasticsearch. Additionally, we want these JARs to not be present for
the JAR hell checks. To address this, we shift these JARs into a
sub-directory of lib (lib/tools/plugin-cli) that is only loaded for the
plugin installer, and in the plugin installer we filter any JARs in this
directory from the JAR hell check.
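The JAR hell part of that could look roughly like this; the `lib/tools/plugin-cli` path comes from the text, while the method shape and file-URL assumption are illustrative:
```
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

final class PluginCliJarHellFilter {
    /** Drop the plugin-cli-only jars (e.g. Bouncy Castle) before running the JAR hell check. */
    static Set<URL> excludePluginCliJars(Set<URL> classpath, Path esHome) throws URISyntaxException {
        Path pluginCliLibs = esHome.resolve("lib").resolve("tools").resolve("plugin-cli");
        Set<URL> filtered = new LinkedHashSet<>();
        for (URL url : classpath) {
            // assumes file: URLs, since classpath entries here are jars on disk
            Path jar = Paths.get(url.toURI());
            if (jar.startsWith(pluginCliLibs) == false) {
                filtered.add(url);
            }
        }
        return filtered;
    }
}
```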
If you have an unusual umask (e.g., 0002) and clone the GitHub
repository then files that we stick into our packages like the
README.textile and the license will have a file mode of 0664 on disk yet
we expect them to be 0644. Additionally, the same thing happens with
compiled artifacts like JARs. We try to set a default file mode yet it
does not seem to take everywhere. This commit adds explicit file modes
in some places that we were relying on the defaults to ensure that the
built artifacts have a consistent file mode regardless of the underlying
build host.
This commit reduces the Windows CLI scripts to one-liners by moving all
of the redundant logic to an elasticsearch-cli script. This commit is
only the Windows side, a previous commit covered the Linux side.
We post snapshot builds to snapshots.elastic.co yet the official plugin
installer will not let you install such plugins without manually
downloading them and installing them from a file URL. This commit adds
the ability for the plugin installer to use snapshots.elastic.co for
installing official plugins if a es.plugins.staging is set and the
current build is also a snapshot build. Otherwise, we continue to use
staging.elastic.co if the current build is a release build and
es.plugins.staging is set and, of course, use the release artifacts at
artifacts.elastic.co for release builds with es.plugins.staging unset.
This commit reduces the Linux CLI scripts to one-liners by moving all of
the redundant logic to an elasticsearch-cli script. This commit is only
the Linux side, a follow-up will do this for Windows too.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple tests which were missing task dependencies, which
failed once the gradle execution order changed.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
With the opening of xpack, we still retained a run task within
:x-pack:plugin. However, the root level run task also runs with the
default distribution. This change removes the extra run task inside
xpack in favor of using the root level task, and moves the
license/configuration code for run into the main run configuration.
This commit adds setting the homedir for the elasticsearch user to the
adduser command in the packaging preinstall script. While the
elasticsearch user is a system user, it is sometimes convenient to have
an existing homedir (even if it is not writeable). For example, running
cron as the elasticsearch user will try to change dir to the homedir.
closes #14453
Systemd overrides should happen through /etc/systemd/system, not
directly editing the service file. This commit removes marking the
service file as configuration for rpm and deb packages.
If the elasticsearch-env bash script chooses $ES_TMPDIR
then it also creates the directory. This change makes
elasticsearch-env.bat do the same thing: if %ES_TMPDIR%
is chosen by the script then the script will ensure it
exists, but if %ES_TMPDIR% is already set then the user
is responsible for creating it.
Relates #27609
Relates #28217
The overall NOTICE file for the ML X-Pack module should
include the notices from the 3rd party C++ components as
well as the 3rd party Java components.
This commit converts the deb package to use tildes in place of dash in
the internal package version. This is only relevant for prerelease
versions of elasticsearch. Previously, this was not possible due to
problems with the underlying library used by the ospackage plugin, but
since a recent upgrade, it now works.
closes #21139
Adds tasks that check that all of the jars that we build have LICENSE.txt
and NOTICE.txt files and that the files are correct. Sets check to
depend on these tasks.
This is mostly there for extra paranoia because we automatically
configure all Jar tasks to include the LICENSE.txt and NOTICE.txt
files anyway. But it is quite possible to add configuration to those
tasks that would override either file.
This causes check to depend on several more things than it used to.
Take, for example, javadoc:
check depends on the new verifyJavadocJarNotice which depends on
extractJavadocJar which depends on javadocJar which depends on
javadoc, so check now depends on javadoc.
This commit adds some build time checks that the archive distributions
and package distributions contain the appropriate license and notice
files, and the package distributions contain the appropriate license
metadata.
This commit uses the customFields setting of the Deb task in ospackage
to work around the fact it does not know anything about the License
attribute natively.
The deb distribution has a special copyright file instead of
LICENSE.txt, but the distributions were including the template file
instead of the rendered file (which includes the license name and text).
This commit adds the distribution type to the startup scripts so that we
can discern from log output and the main response the type of the
distribution (deb/rpm/tar/zip).
This commit moves the apache and elastic license files into a new
root level `licenses` directory and rewrites the top level LICENSE.txt
to clarify the repository has a mix of apache and elastic licensed code.
This commit adds license metadata to rpm and deb packages. Additionally,
it makes the copyright file for deb files follow the machine readable
specification, and sets the correct license text based on the oss vs
default deb packages.
X-Pack can no longer be installed as a plugin. This commit adds special
handling for when a user attempts to install X-Pack. This special
handling informs the user of the oss distribution that they should
download the default distribution and the user of the default
distribution that X-Pack does not require installation as it is included
by default.
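A sketch of that special casing; the flavor flag and the exact messages are illustrative, not the installer's actual wording:
```
final class XPackInstallCheck {
    /** Reject attempts to install x-pack as a plugin, with flavor-specific guidance. */
    static void checkForXPack(String pluginId, boolean ossFlavor) {
        if ("x-pack".equals(pluginId)) {
            if (ossFlavor) {
                throw new IllegalArgumentException(
                        "X-Pack is not available with the oss distribution; "
                                + "please install the default distribution of Elasticsearch");
            }
            throw new IllegalArgumentException(
                    "this distribution of Elasticsearch contains X-Pack by default; "
                            + "there is no need to install it as a plugin");
        }
    }
}
```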
This commit adds the distribution flavor (default versus oss) to the
build process which is passed through the startup scripts to
Elasticsearch. This change will be used to customize the message on
attempting to install/remove x-pack based on the distribution flavor.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.
This commit moves the checks on JAVAX_HOME (where X is the java version
number) existing to the end of gradle's configuration phase, and bases
them on whether the tasks needing the java home are configured to execute.
relates #29519
This commit fixes plugin warning confirmation to include native
controller confirmation when no security policy exists. The case was
already covered for meta plugins, but not for normal plugins. Tests are
also added for all cases.
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require these be configured when merely compiling Elasticsearch, the
requirement that they be strictly set to appropriate values should only
be enforced if these tasks are going to be executed. To address this, we
lazy configure these tasks.
Today we have JAVA_HOME for the compiler Java home and RUNTIME_JAVA_HOME
for the test Java home. However, when we compile BWC nodes and run them,
neither of these Java homes might be the version that was suitable for
that BWC node (e.g., 5.6 requires JDK 8 to compile and to run). This
commit adds support for the environment variables JAVA\d+_HOME and uses
the appropriate Java home based on the version of the node being
started. We even do this for reindex-from-old which requires JDK 7 for
these very old nodes. Note that these environment variables are not
required if not running BWC tests, and they are strictly required if
running BWC tests.
The BWC builds always fetch the latest from the elastic/elasticsearch
repository for the BWC branches. Yet, there are use-cases for using the
local checkout without fetching the latest. This commit enables these
use-cases by adding a tests.bwc.git.fetch.latest property to skip the
fetches.
Today we have a silent batch mode in the install plugin command when
standard input is closed or there is no tty. It appears that
historically this was useful when running tests where we want to accept
plugin permissions without having to acknowledge them. Now that we have
an explicit batch mode flag, this use-case is removed. The motivation
for removing this now is that there is another place where silent batch
mode arises and that is when a user attempts to install a plugin inside
a Docker container without keeping standard input open and attaching a
tty. In this case, the install plugin command will treat the situation
as a silent batch mode and therefore the user will never have the chance
to acknowledge the additional permissions required by a plugin. This
commit removes this silent batch mode in favor of using the --batch flag
when running tests and requiring the user to take explicit action to
acknowledge the additional permissions (either by leaving standard input
open and attaching a tty, or by passing the --batch flags themselves).
Note that with this change the user will now see a null pointer
exception when they try to install a plugin in a Docker container
without keeping standard input open and attaching a tty. This will be
addressed in an immediate follow-up, but because the implications of
that change are larger, they should be handled separately from this one.
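A rough sketch of the end state described here; the `--batch` flag comes from the text, while the error message and method shape are illustrative (and the missing-tty error corresponds to the follow-up rather than the NPE this commit still produces):
```
import java.util.Arrays;

final class BatchModeExample {
    /** Batch mode now comes only from an explicit --batch flag, never from a missing console/tty. */
    static boolean isBatch(String[] args) {
        boolean explicitBatch = Arrays.asList(args).contains("--batch");
        if (explicitBatch == false && System.console() == null) {
            // Previously this case silently became batch mode; without a console the
            // user must pass --batch to accept the additional permissions.
            throw new IllegalStateException(
                    "standard input is not a tty; pass --batch to accept plugin permissions");
        }
        return explicitBatch;
    }
}
```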
This commit changes the sysprop for overriding the branch bwc builds use
to be branch specific. There are 3 different bwc branches built, but all
of them currently read the exact same sysprop. For example, with this change
and current branches, you can now specify eg `-Dtests.bwc.refspec.6.x=my_6x`
and it will build only next-minor-snapshot with that branch, while
next-bugfix-snapshot will continue to use 5.6.
This is a follow up to a previous change which set the error file path
for the package distributions. The observation here is that we always
set the working directory of Elasticsearch to the root of the
installation (i.e., Elasticsearch home). Therefore, we can specify the
error file path relative to this directory and default it to the logs
directory, similar to the package distributions.
This is a follow up to a previous change which set the heap dump path
for the package distributions. The observation here is that we always
set the working directory of Elasticsearch to the root of the
installation (i.e., Elasticsearch home). Therefore, we can specify the
heap dump path relative to this directory and default it to the data
directory, similar to the package distributions.
When upgrading via the RPM package, we can run into a problem where
the keystore fails to be created. This arises because the %post script
on RPM runs after the new package files are installed but before the
removal of the old package files. This means that the contents of the
lib folder can contain files from the old package and the new package
and thus running the create keystore tool can encounter JAR hell
issues and fail. To solve this, we move creating the keystore to the
%posttrans script which runs after the old package files are
removed. We only need to do this on the RPM package, so we add a
switch in the shared post-install script.
The cd command on Windows has an oddity regarding changing
directories. If the drive of the current directory is a different drive
than that of the directory that was passed to the cd command, cd acts in
query mode and does not change the current directory. Instead, a flag is
needed to put the cd command into set mode so that the directory
actually changes. This causes a problem when starting Elasticsearch from
a directory different than the one where it is installed and this commit
fixes the issue.
Today we allow any other method of starting Elasticsearch to override
jvm.options via ES_JAVA_OPTS. Yet, for some settings in the Windows
service, we do not allow this. This commit removes this in favor of
being consistent with other packaging choices.
Provide a more actionable error message when installing an offline plugin
and the `plugins` directory for the node contains the plugin
distribution.
Closes #27401
This commit adds a JVM flag to ensure that the JVM fatal error logs land
in the default log directory. Users that wish to use an alternative
location should change the path configured here.
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation that some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not server
Elasticsearch. This is a good thing, this was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
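A minimal sketch of the kind of utility being copied; Lucene's IOUtils has more overloads, this just shows the close-everything-and-suppress idea:
```
import java.io.Closeable;
import java.io.IOException;

final class IOUtilsSketch {
    /** Close everything, throwing the first failure and suppressing the rest. */
    static void close(Closeable... closeables) throws IOException {
        IOException first = null;
        for (Closeable closeable : closeables) {
            if (closeable == null) {
                continue;
            }
            try {
                closeable.close();
            } catch (IOException e) {
                if (first == null) {
                    first = e;
                } else {
                    first.addSuppressed(e);
                }
            }
        }
        if (first != null) {
            throw first;
        }
    }
}
```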
We no longer source the environment file in the packaging scripts yet we
had leftover references to variables defined by those environment
files. This commit cleans these up.
Previously we allowed a lot of customization of Elasticsearch during
package installation (e.g., the username and group). This customization
was achieved by sourcing the env script (e.g.,
/etc/sysconfig/elasticsearch) during installation. Since we no longer
allow such flexibility, we do not need to source these env scripts
during package installation and removal.
This commit removes the ability to specify that a plugin requires the
keystore and instead creates the keystore on package installation or
when Elasticsearch is started for the first time. The reason that we opt
to create the keystore on package installation is to ensure that the
keystore has the correct permissions (the package installation scripts
run as root as opposed to Elasticsearch running as the elasticsearch
user) and to enable removing the keystore on package removal if the
keystore is not modified.
This commit removes running rest tests on the full zip and tar
distributions in favor of doing a simple extraction check like is done
for rpm and deb files. The rest tests are still run on the integ test
zip, at least for now (this should eventually be moved out to a different
location).
This commit moves the distribution specific tasks into the respective
archives and packages builds. The colocation of common and
distribution-specific tasks makes it much easier to reason about what is
expected in a particular distribution.
There is a bug in the for statement where we execute the JVM options
parser. The bug manifests in the handling of paths with ) in the
name. The problem is this: we use a for statement to capture the output
of the JVM options parser. A for statement that executes a command
defers execution to cmd. There is this gem from the help:
1. If all of the following conditions are met, then quote characters
on the command line are preserved:
- no /S switch
- exactly two quote characters
- no special characters between the two quote characters,
where special is one of: &<>()@^|
- there are one or more whitespace characters between the
two quote characters
- the string between the two quote characters is the name
of an executable file.
2. Otherwise, old behavior is to see if the first character is
a quote character and if so, strip the leading character and
remove the last quote character on the command line, preserving
any text after the last quote character.
This means that the ) causes the quotes to be stripped which ruins
everything. This commit fixes this by delaying expansion of the paths.
Relates #28753
Previously a user could set a custom config path to a relative directory
using ES_PATH_CONF. In a previous change related to enabling GC logging
by default, we forced the working directory for Elasticsearch to be
ES_HOME. This had the impact of causing all relative paths to be
relative to ES_HOME, against the intent of the user. This commit
addresses this by making ES_PATH_CONF absolute before we switch the
working directory to ES_HOME.
Relates #28700
This commit adds intermediate gradle projects for archive based
distributions (zip, tar) and package based distributions (rpm, deb). The
grouping allows the common distribution build file to be considerably
shorter and clearly separated from the common zip/tar and rpm/deb
configuration.
The remote check previously validated both the remote name and the
repository as well, meaning that if someone passed in a repository that
was not a github URL, it would fail. This meant that it was not possible
to fully test bwc out with multiple branches without first pushing to a
remote. Removing the full check allows a user to pass in the origin
remote as its remote, which is already added as a file based remote to
each bwc snapshot build. This will allow changes to be made locally
across all bwc branches, tested, and then pushed simultaneously.
The build.snapshot flag used by the main build was being propagated down
into the bwc snapshot builds, which is not correct. The bwc subprojects
are always meant to be snapshot builds, or null if they do not
exist. Marking these builds as non snapshots threw the release off as it
was looking for -SNAPSHOT builds.
Relates #28641
This commit moves the semantic validation (like which version a plugin
was built for or which java version it is compatible with) out of reading
a plugin descriptor, leaving the checks on the format of the descriptor
intact.
relates #28540
This commit removes the extra layer of all plugin files existing under
"elasticsearch" within plugin zips. This simplifies building plugin zips
and removes the need for special logic of modules vs plugins.
When Elasticsearch is run as a service we should not use the console
logger otherwise we end up duplicating logging (to the Elasticsearch
logs and wherever standard output is captured). Previously we disabled
the console logger when started as a service using systemd (otherwise
the console logs are duplicated to the journal). This commit does the
same for the Windows service, starting Elasticsearch with the --quiet
flag to avoid standard output being written to the service stdout logs.
Relates #28618
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if the project does not make sense for the
version that is current, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
When elasticsearch was originally moved to gradle, the "provided" equivalent in maven had to be done through a plugin. Since then, gradle added the "compileOnly" configuration. This commit removes the provided plugin and replaces all uses with compileOnly.
Plugin descriptors currently contain an elasticsearch version,
which the plugin was built against, and a java version, which the plugin
was built with. These versions are read and validated, but not stored.
This commit keeps them in PluginInfo so they can be used later.
While seeing the elasticsearch version is less interesting (since it is
enforced to match that of the running elasticsearch node), the java
version is interesting since we only validate the format, not the actual
version. This also makes PluginInfo have full parity with the plugin
properties file.
We now read the plugin descriptor when removing an old plugin. This is
to check if we are removing a plugin that is extended by another
plugin. However, when reading the descriptor we enforce that it is of
the same version that we are. This is not the case when a user has
upgraded Elasticsearch and is now trying to remove an old plugin. This
commit fixes this by skipping the version enforcement when reading the
plugin descriptor only when removing a plugin.
Relates #28540
The `testMetaPluginPolicyConfirmation` needs to close the file streams it is
iterating over, otherwise some OSes (like Windows) might not be able to delete
all temporary folders, which in turn leads to test failures.
Closes #28415
This commit switches the internal format of the elasticsearch keystore
to no longer use java's KeyStore class, but instead encrypt the binary
data of the secrets using AES-GCM. The cipher key is generated using
PBKDF2WithHmacSHA512. Tests are also added for backcompat reading the v1
and v2 formats.
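A rough sketch of that scheme with JDK crypto primitives; the iteration count, key length, and layout are illustrative, not the keystore's actual on-disk format:
```
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

final class KeystoreCryptoSketch {
    static byte[] encrypt(char[] password, byte[] salt, byte[] iv, byte[] plaintext) throws Exception {
        // Derive an AES key from the password with PBKDF2WithHmacSHA512.
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        PBEKeySpec keySpec = new PBEKeySpec(password, salt, 10000, 128); // illustrative parameters
        SecretKey derived = factory.generateSecret(keySpec);
        SecretKeySpec aesKey = new SecretKeySpec(derived.getEncoded(), "AES");

        // Encrypt the secret bytes with AES-GCM (128-bit tag).
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(salt);
        new SecureRandom().nextBytes(iv);
        byte[] encrypted = encrypt("s3cr3t".toCharArray(), salt, iv, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(encrypted.length + " bytes of ciphertext plus tag");
    }
}
```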
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
In order to build a plugin that extends the painless whitelist, the spi
classes must be available to the plugin at compile time. This commit
moves the spi classes into a separate jar which will be published. Any
plugin authors wishing to extend painless through spi would then add a
compileOnly dependency on this jar.
Meta plugins move the unzipped plugin as is, but the inner plugins may
have a different directory name than their corresponding plugin
properties file specifies. This commit fixes installation to rename the
directory if necessary.
This commit modifies the build to require JDK 9 for
compilation. Henceforth, we will compile with a JDK 9 compiler targeting
JDK 8 as the class file format. Optionally, RUNTIME_JAVA_HOME can be set
as the runtime JDK used for running tests. To enable this change, we
separate the meaning of the compiler Java home versus the runtime Java
home. If the runtime Java home is not set (via RUNTIME_JAVA_HOME) then
we fallback to using JAVA_HOME as the runtime Java home. This enables:
- developers only have to set one Java home (JAVA_HOME)
- developers can set an optional Java home (RUNTIME_JAVA_HOME) to test
on the minimum supported runtime
- we can test compiling with JDK 9 running on JDK 8 and compiling with
JDK 9 running on JDK 9 in CI
* This change makes sure that we don't detect a file path containing a ':' as
a maven coordinate (e.g.: `file:C:\path\to\zip`)
* restore test muted on master
This change modifies the installation for a meta plugin:
the content of the config and bin directory inside each bundled plugin is now moved into the meta plugin directory.
So instead of `$configDir/meta-plugin-name/bundled_plugin/name/` the content of the config
for a bundled plugin is now in `$configDir/meta-plugin-name`. The same applies for the bin directory.
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:
|____elasticsearch/
| |____ <plugin1> <-- The plugin files for plugin1 (the content of the elasticsearch directory)
| |____ <plugin2> <-- The plugin files for plugin2
| |____ meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:
description: simple summary of the meta plugin.
name: the meta plugin name
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:
|_____ plugins
| |____ <name_of_the_meta_plugin>
| | |____ meta-plugin-descriptor.properties
| | |____ <plugin1>
| | |____ <plugin2>
If the sub plugins contain a config or a bin directory, they are copied in a sub folder inside the meta plugin config/bin directory.
|_____ config
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
|_____ bin
| |____ <name_of_the_meta_plugin>
| | |____ <plugin1>
| | |____ <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).
It is also not possible to remove a sub-plugin inside a meta plugin, only full removal of the meta plugin is allowed.
Closes #27316
Otherwise newer versions of Gradle will see the outputs as stale and
remove the directory between having created the directory and copying
files into the directory (leading to the directory being created again,
this time missing some sub-directories).
This commit modifies the BWC build to invoke the Gradle wrapper. The
motivation for this is two-fold:
- BWC versions might be dependent on a different version of Gradle than
the current version of Gradle
- in a follow-up we are going to need to be able to set JAVA_HOME to a
different value than the current value of JAVA_HOME
Relates #28138
Java 9 added some enhancements to the internationalization support that
impact our date parsing support. To ensure flawless BWC and consistent
behavior going forward, Java 9 runtimes require the system property
`java.locale.providers=COMPAT` to be set.
Closes #10984
This commit adds the infrastructure to plugin building and loading to
allow one plugin to extend another. That is, one plugin may extend
another by the "parent" plugin allowing itself to be extended through
java SPI. When all plugins extending a plugin are finished loading, the
"parent" plugin has a callback (through the ExtensiblePlugin interface)
allowing it to reload SPI.
This commit also adds an example plugin which uses this as-yet unimplemented
extensibility (adding to the painless whitelist).
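An illustrative shape of that mechanism; the interface name and methods here are assumptions, not the actual server interface, but the ServiceLoader call is the SPI idea described above:
```
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

/** Illustrative SPI interface that an extending plugin would implement. */
interface ExampleExtension {
    String name();
}

/** Sketch of the callback a "parent" (extensible) plugin gets once its extenders are loaded. */
class ExtensibleExamplePlugin {
    private final List<ExampleExtension> extensions = new ArrayList<>();

    /** Called with a class loader that can see the extending plugins' classes. */
    public void reloadSPI(ClassLoader loader) {
        extensions.clear();
        for (ExampleExtension extension : ServiceLoader.load(ExampleExtension.class, loader)) {
            extensions.add(extension);
        }
    }
}
```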
* Adds task dependenciesInfo to BuildPlugin to generate a CSV file with dependencies information (name,version,url,license)
* Adds `ConcatFilesTask.groovy` to concatenate multiple files into one
* Adds task `:distribution:generateDependenciesReport` to concatenate `dependencies.csv` files into a single file (`es-dependencies.csv` by default)
# Examples:
$ gradle dependenciesInfo :distribution:generateDependenciesReport
## Use `csv` system property to customize the output file path
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dcsv=/tmp/elasticsearch-dependencies.csv
## When branch is not master, use `build.branch` system property to generate correct licenses URLs
$ gradle dependenciesInfo :distribution:generateDependenciesReport -Dbuild.branch=6.x -Dcsv=/tmp/elasticsearch-dependencies.csv
We document that users can set custom service names on Windows. Alas,
the functionality does not work. This commit fixes the issue by passing
the environment variable SERVICE_ID as the service name otherwise
defaulting to elasticsearch-service-x64.
Relates #25255
When running the release tests, we set build.snapshot to false and this
causes all version numbers to not have "-SNAPSHOT". This is true even
for the tips of the branches (e.g., currently 5.6.6 on the 5.6
branch). Yet, if we do not set snapshot to false, then we would still be
trying to find artifacts with "-SNAPSHOT" appended which would not have
been built since build.snapshot is false. To fix this, we have to push
build.snapshot into the version logic.
Relates #27778
This commit reorganizes some of the content in the configuring
Elasticsearch section of the docs. The changes are:
- move JVM options out of system configuration into configuring
Elasticsearch
- move JVM options to its own page of the docs
- move configuring the heap to important Elasticsearch settings
- move configuring the heap to its own page of the docs
- move all important settings to individual pages in the docs
- remove bootstrap.memory_lock from important settings, this is covered
in the swap section of system configuration
Relates #27755
We have tests that manually unpackage the RPM and Debian package
distributions and start a cluster manually (not from the service) and
run a basic suite of integration tests against them. This is problematic
because it is not how the packages are intended to be used (instead,
they are intended to be installed using the package installation tools,
and started as services) and so violates assumptions that we make about
directory paths. This commit removes these integration tests, instead
relying on the packaging tests to ensure the packages are not
broken. Additionally, we add a sanity check that the package
distributions can be unpackaged. Finally, with this change we can remove
some leniency from elasticsearch-env around checking for the existence of
the environment file; that leniency existed solely for these integration
tests.
Relates #27725
JDK 9 has removed JVM options that were valid in JDK 8 (e.g., GC logging
flags) and replaced them with new flags that are not available in JDK
8. This means that a single JVM options file can no longer apply to JDK
8 and JDK 9, complicating development, complicating our packaging story,
and complicating operations. This commit extends the JVM options syntax
to specify the range of versions the option applies to. If the running
JVM matches the range of versions, the flag will be used to start the
JVM; otherwise the flag will be ignored.
We implement this parser in Java for simplicity, and with this we start
our first step towards a Java launcher.
Relates #27675
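As a rough illustration of version-ranged options (for example `8:-XX:+PrintGCDetails` for JDK 8 only, or `9-:-Xlog:gc*` for JDK 9 and later), a simplified filter might look like the sketch below; the real parser handles more cases and error reporting, so treat this only as a sketch of the idea:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JvmOptionSketch {

    // optional "N:", "N-M:", or "N-:" prefix followed by the JVM flag itself
    private static final Pattern LINE =
        Pattern.compile("((?<start>\\d+)(?<range>-)?(?<end>\\d+)?:)?(?<option>-.*)");

    // Returns the flag if it applies to the running major version, otherwise null.
    static String filter(String line, int javaMajorVersion) {
        Matcher m = LINE.matcher(line.trim());
        if (m.matches() == false) {
            return null; // malformed line; the real parser would report an error
        }
        if (m.group("start") == null) {
            return m.group("option"); // no version prefix: applies to every JVM
        }
        int start = Integer.parseInt(m.group("start"));
        int end = m.group("end") != null ? Integer.parseInt(m.group("end"))
                : m.group("range") != null ? Integer.MAX_VALUE
                : start;
        return javaMajorVersion >= start && javaMajorVersion <= end ? m.group("option") : null;
    }

    public static void main(String[] args) {
        System.out.println(filter("8:-XX:+PrintGCDetails", 8)); // applies on JDK 8
        System.out.println(filter("9-:-Xlog:gc*", 8));          // null: JDK 9+ only
    }
}
```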
The RPM and Debian packages depend on coreutils (for mktemp among
others). This commit adds an explicit package dependency on coreutils.
Relates #27660
GNU mktemp and BSD mktemp have different command line flags. On some
macOS systems users have mktemp from coreutils in their PATH overriding
the system mktemp from BSD. This commit adds detection for the coreutils
mktemp versus the BSD mktemp and uses the appropriate syntax based on
the detection.
Relates #27659
The LimitMEMLOCK suggestion was removed from systemd service file and
instead users should use an override file, so a comment in the
environment file should be updated to reflect the same.
Relates #27630
For too long we have been groping around in the dark when faced with GC
issues because we rarely have GC logs at our disposal. This commit
enables GC logging by default out of the box.
Relates #27610
This change ensures that the temporary directory used for java.io.tmpdir
is a private temporary directory. To achieve this we use mktemp on macOS
and Linux to give us a private temporary directory and the value of the
environment variable TMP on Windows. For this to work with our
packaging, we add java.io.tmpdir=${ES_TMPDIR} to our packaged
jvm.options, we set ES_TMPDIR respectively in our startup scripts, and
resolve the value of the template ${ES_TMPDIR} at startup.
Relates #27609
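A minimal sketch of the placeholder resolution, assuming the startup scripts have already exported ES_TMPDIR (the class and method names here are illustrative, not the actual startup code):

```java
public class TmpDirSubstitution {

    // Replaces the ${ES_TMPDIR} template in a jvm.options line with the value
    // of the ES_TMPDIR environment variable set by the startup scripts.
    static String resolve(String jvmOption) {
        String tmpDir = System.getenv("ES_TMPDIR");
        if (tmpDir == null) {
            throw new IllegalStateException("ES_TMPDIR must be set by the startup scripts");
        }
        return jvmOption.replace("${ES_TMPDIR}", tmpDir);
    }

    public static void main(String[] args) {
        // with ES_TMPDIR=/tmp/elasticsearch-abc123 this prints
        // -Djava.io.tmpdir=/tmp/elasticsearch-abc123
        System.out.println(resolve("-Djava.io.tmpdir=${ES_TMPDIR}"));
    }
}
```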
Any CLI commands that depend on core Elasticsearch might touch classes
(directly or indirectly) that depend on logging. If they do this and
logging is not configured, Log4j will dump status error messages to the
console. As such, we need to ensure that any such CLI command configures
logging (with a trivial configuration that dumps log messages to the
console). Previously we did this in the base CLI command but with the
refactoring of this class out of core Elasticsearch, we no longer
configure logging there (since we did not want this class to depend on
settings and logging). However, this meant for some CLI commands (like
the plugin CLI) we were no longer configuring logging. This commit adds
base classes between the low-level command and multi-command classes
that ensure that logging is configured. Any CLI command that depends on
core Elasticsearch should use this infrastructure to ensure logging is
configured. There is one exception to this: Elasticsearch itself because
it takes responsibility into its own hands for configuring logging from
Elasticsearch settings and log4j2.properties. We preserve this special
status.
Relates #27523
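A sketch of the idea, with hypothetical class names standing in for the actual CLI base classes: an intermediate base class installs a trivial console logging configuration before handing control to the concrete command.

```java
// Hypothetical minimal command hierarchy for illustration only.
abstract class Command {
    abstract void execute(String[] args) throws Exception;

    final int run(String[] args) {
        try {
            execute(args);
            return 0;
        } catch (Exception e) {
            System.err.println(e.getMessage());
            return 1;
        }
    }
}

// Intermediate base class: every CLI command that might touch logging-dependent
// classes extends this, so logging is always configured before any work happens.
abstract class LoggingAwareCommandSketch extends Command {

    @Override
    final void execute(String[] args) throws Exception {
        configureTrivialLogging();
        executeConfigured(args);
    }

    private void configureTrivialLogging() {
        // placeholder: here the sketch would install a minimal Log4j configuration
        // that sends log messages to the console, so Log4j never dumps status
        // errors about missing configuration
    }

    abstract void executeConfigured(String[] args) throws Exception;
}
```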
This commit removes the ability to use ${prompt.secret} and
${prompt.text} as valid config settings. Secure settings has obsoleted
the need for this, and it cleans up some of the code in Bootstrap.
Projects that depend on the CLI currently depend on core. This should not
always be the case. The EnvironmentAwareCommand will remain in :core,
but the rest of the CLI components have been moved into their own
subproject of :core, :core:cli.
The existing log rotation configuration allowed the index
and search slow log to grow unbounded. This commit removes the
date-based rotation and adds the same size-based rotation that
the deprecation log already has.
We look for the remote by scanning the output of "git remote -v" but we
were not actually looking at the output since standard output was not
redirected anywhere. This commit fixes this issue.
Relates #27308
Only tests should use the single argument Environment constructor. To
enforce this the single arg Environment constructor has been replaced with
a test framework factory method.
Production code (beyond initial Bootstrap) should always use the same
Environment object that Node.getEnvironment() returns. This Environment
is also available via dependency injection.
This commit adjusts the format of the SHA-512 checksum files supported
by the plugin installer. In particular, we now require that the SHA-512
format be a single-line file containing the checksum followed by two
spaces followed by the filename. We continue to support the legacy
format for SHA-1.
Relates #27093
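A minimal sketch of validating that single-line format, assuming the artifact bytes and the contents of the `.sha512` file are already in hand; the actual plugin installer performs more validation than this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Locale;

public class Sha512FormatCheck {

    // Expects "<hex checksum>  <filename>" (two spaces), as described above.
    static void verify(String checksumLine, String expectedFilename, byte[] artifact) throws Exception {
        String[] fields = checksumLine.trim().split("\\s{2}", 2);
        if (fields.length != 2 || fields[1].equals(expectedFilename) == false) {
            throw new IllegalArgumentException("malformed checksum file for " + expectedFilename);
        }
        if (sha512Hex(artifact).equals(fields[0]) == false) {
            throw new IllegalArgumentException("SHA-512 mismatch for " + expectedFilename);
        }
    }

    private static String sha512Hex(byte[] data) throws Exception {
        StringBuilder hex = new StringBuilder();
        for (byte b : MessageDigest.getInstance("SHA-512").digest(data)) {
            hex.append(String.format(Locale.ROOT, "%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] artifact = "example".getBytes(StandardCharsets.UTF_8);
        verify(sha512Hex(artifact) + "  example-plugin.zip", "example-plugin.zip", artifact);
        System.out.println("checksum format and digest ok");
    }
}
```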
This commit fixes an issue with the handling of paths containing
parentheses on Windows. When such a path is used as a component of
Elasticsearch home, then a later echo statement that is guarded by an if
will fail because the parentheses in the path will be confused with the
parentheses defining the if block. This commit fixes the issue by
protecting this echo statement by wrapping the possibly offending path
in quotes.
Relates #26916
* Removes minimum master nodes default number
At the moment elasticsearch.yml contains the minimum master nodes setting commented out but with a value of 3. This has led to users uncommenting the value and assuming it is a good default, without reading that they need to change it to a quorum of master-eligible nodes, causing split brain in their cluster and defeating the point of the setting.
A value of 3 is not even a good default for our recommended setup of 3 dedicated master-eligible nodes.
This changes the value of the commented-out setting to something that will not produce valid config, which should highlight that the value needs to be changed, so users no longer uncomment the line without considering what the correct value for their setup should be.
* Addresses review comment
The JVM defaults to dumping the heap to the working directory of
Elasticsearch. For the RPM and Debian packages, this location is
/usr/share/elasticsearch. This directory is not writable by the
elasticsearch user, so by default heap dumps in this situation are
lost. This commit modifies the packaging for the RPM and Debian packages
to set the heap dump path to /var/lib/elasticsearch as the default
location for dumping the heap. This location is writable by the
elasticsearch user by default. We add documentation of this important
setting if /var/lib/elasticsearch is not suitable for receiving heap
dumps.
Relates #26755
With 6.0 rc1 we now publish sha512 checksums for official plugins.
However, in order to ease the pain for plugin authors, this commit adds
backcompat to still allow sha1 checksums. Also added tests for
checksums.
Closes #26746
The output when building bwc versions is currently verbose, with git
warnings from doing git checkout of a hash. This commit changes this to
print the useful info before and after checking out. Note that due to
using LoggedExec, if the git task exits non-zero, the entire output will
still be dumped.
When creating the keystore explicitly (from executing
elasticsearch-keystore create) or implicitly (for plugins that require
the keystore to be created on install) on an Elasticsearch package
installation, we are running as the root user. This leaves
/etc/elasticsearch/elasticsearch.keystore having the wrong ownership
(root:root) so that the elasticsearch user can not read the keystore on
startup. This commit adds setgid to /etc/elasticsearch on package
installation so that when executing this directory (as we would when
creating the keystore), we will end up with the correct ownership
(root:elasticsearch). Additionally, we set the permissions on the
keystore to be 660 so that the elasticsearch user via its group can read
this file on startup.
Relates #26412
This commit adds files to the build output called build_metadata which
contain key/value pairs of metadata associated with the build. The first
use of this metadata are the git hashes associated with bwc checkouts.
These metadata files will be picked up by CI intake jobs and stored
along with last-good-commit, and then passed back in through the
BUILD_METADATA env var on periodic jobs.
At current, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDE's
during the brief time it was used. We did not know exactly how many
users will need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes #26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
This commit removes the keystore creation on elasticsearch startup, and
instead adds a plugin property which indicates the plugin needs the
keystore to exist. It does still make sure the keystore.seed exists on
ES startup, but through an "upgrade" method that the keystore-loading code
in Bootstrap calls.
closes #26309
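A sketch of the upgrade idea, with an in-memory map standing in for the loaded keystore (the real code operates on Elasticsearch's secure-settings keystore, so the names here are illustrative):

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

public class KeystoreSeedUpgradeSketch {

    // Adds keystore.seed when it is missing and never touches existing entries,
    // so the call is safe to repeat on every startup.
    static void upgrade(Map<String, String> secureSettings) {
        if (secureSettings.containsKey("keystore.seed") == false) {
            byte[] seed = new byte[16];
            new SecureRandom().nextBytes(seed);
            secureSettings.put("keystore.seed", Base64.getEncoder().encodeToString(seed));
        }
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        upgrade(settings); // seeds a freshly created keystore
        upgrade(settings); // idempotent on later startups
        System.out.println(settings.keySet());
    }
}
```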
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
When Elasticsearch starts up, it tries to create a keystore if one does
not exist; this is so the keystore can be seeded. With the RPM and
Debian packages, the keystore would be located in
/etc/elasticsearch. This configuration directory is typically not
writable by the elasticsearch user so the Elasticsearch process will not
have permission to create the keystore. Instead, the RPM and Debian
packages should create the keystore (if it does not exist) on package
installation. This commit enables these packages to do that in the
post-install routines.
Relates #26282
We need to check if JAVA_TOOL_OPTIONS and JAVA_OPTS are set, and if
ES_PATH_CONF is not set. However, if these variables are defined and
contain quotes, the current mechanism breaks on them. Instead, we should
use a safer mechanism for checking whether these variables are defined or
not. This commit does that.
Relates #26268
We previously explicitly set the HOSTNAME environment variable so that
${HOSTNAME} could be used as a placeholder for defining the node.name in
elasticsearch.yml. We removed explicitly setting this because bash
defines HOSTNAME. The problem is that bash defines HOSTNAME as a bash
variable, not as an environment variable. Therefore, to restore the
previous behavior, we export the bash value for HOSTNAME as an
environment variable named HOSTNAME. For consistency between Windows and
the Unix-like systems, we also define HOSTNAME with a value equal to the
environment variable COMPUTERNAME on Windows.
Relates #26262
We quoted some strings in the Windows elasticsearch-env script but echo
on Windows includes these quotes in the output. This commit removes
these quotes; they do not need to be output and are noise. Note that one
of the commands is wrapped in parentheses; this is to make it obvious that
the space at the end of the corresponding line is intentional.
The error message for warning about the use of JAVA_TOOL_OPTIONS on
Windows incorrectly uses $JAVA_TOOL_OPTIONS to dereference the
environment variable JAVA_TOOL_OPTIONS; on Windows it should be
%JAVA_TOOL_OPTIONS%.
This instruction tells systemd to create a directory /var/run/elasticsearch before starting Elasticsearch.
Without this change, the default PID_DIR (/var/run/elasticsearch) may not exist, and without it, Elasticsearch will fail to start.
The environment variable CONF_DIR was previously inconsistently used in
our packaging to customize the location of Elasticsearch configuration
files. The importance of this environment variable has increased
starting in 6.0.0 as it's now used consistently to ensure Elasticsearch
and all secondary scripts (e.g., elasticsearch-keystore) all use the
same configuration. The name CONF_DIR is there for legacy reasons yet
it's too generic. This commit renames CONF_DIR to ES_PATH_CONF.
Relates #26197
In bin/elasticsearch, we grep the command line looking for various flags
that indicate the process should be daemonized. To do this, we simply
test command status from the grep. Sadly, this is utterly broken
(unreleased) as instead we are testing the output of the command, not
the command status. This commit fixes this issue.
Relates #26196
The build was ignoring suffixes like "beta1" and "rc1" on the version numbers which was causing the backwards compatibility packaging tests to fail because they expected to be upgrading from 6.0.0 even though they were actually upgrading from 6.0.0-beta1. This adds the suffixes to the information that the build scrapes from Version.java. It then uses those suffixes when it resolves artifacts build from the bwc branch and for testing.
Closes #26017
Raw requests are supported only by the java yaml test runner and were introduced to test docs snippets. Some yaml tests ended up using them (see #23497) which causes failures for other language clients. This commit migrates those yaml tests to Java tests that send requests through the Java low-level REST client, and also moves the ability to send raw requests to a special client that's only available when testing docs snippets.
Closes #25694
Today our shell scripts march on if they encounter an error during
execution. One place that this actually causes a problem is with the
Java version checker. What can happen is this: if the user botches their
installation so that the JavaVersionChecker can not be found on the
classpath, when we attempt to run the Java version checker, first an
error message that the class can not be found is displayed, and then we
print a message that their version of Java is not compatible; this
happens even if they are using a Java 8 installation. The problem is
that we should have immediately aborted when the class could not be
loaded. Since we do not exit when the shell script encounters an error,
we end up conflating failure to run the version check with a failed
version check. Instead, we really should abort the moment that one of
our scripts encounters an error. To do this, we make the following
changes:
- enable set -e and set -o pipefail
- make the Java version checker responsible for printing the error
message to the console
- remove the exit status check from the scripts
- actually on Windows, we still have to check the exit status because
there is no equivalent of set -e
- when we check for daemonization, we can no longer check the exit
status from grep because a failed grep will abort the script;
instead, we move the grep execution to be the condition for the if as
this does not trip the set -e failure conditions
- we should source elasticsearch-env before doing anything, so we move
the definition of parse_jvm_options below sourcing elasticsearch-env
- we make consistent all places where we use a subshell to use
backticks
Relates #26057
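A rough sketch of a version checker that owns its own error reporting, so the calling script only needs to abort on a non-zero exit status; the class name and minimum version here are assumptions for the example:

```java
public class JavaVersionCheckerSketch {

    public static void main(String[] args) {
        // "1.8" on JDK 8, "9" on JDK 9 and later
        String spec = System.getProperty("java.specification.version");
        int major = spec.startsWith("1.") ? Integer.parseInt(spec.substring(2)) : Integer.parseInt(spec);
        if (major < 8) {
            // the checker prints the message itself; the shell script only checks the exit status
            System.err.println("the minimum required Java version is 8; your Java installation at ["
                    + System.getProperty("java.home") + "] does not meet this requirement");
            System.exit(1);
        }
    }
}
```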
We have a bootstrap check for the maximum size of the virtual memory
address space for the Elasticsearch process. We can set this in the
service file for Elasticsearch when installed as a service on
systemd-based systems for a better user experience; otherwise users fumble
through thinking they should set this via /etc/security/limits.d (as a
lot of pages on the Internet would tell them), not realizing that systemd
completely ignores these limits for services, and then try to figure out
how to add a unit file for the Elasticsearch service.
Relates #25975
The systemd service file that ships with Elasticsearch installs on
systemd-based systems contains a suggestion for setting LimitMEMLOCK if
the user wants to enable bootstrap.memory_lock. However, setting this
in the installed service file goes against best practices for
working with systemd, and goes against our existing documentation for
how to set this. Therefore, we should not have this suggestion in the
service file otherwise users might be led to think they should edit it
there.
Relates #25979
On non-Windows platforms, we ignore the environment variable
JAVA_TOOL_OPTIONS (this is an environment variable that the JVM respects
by default for picking up extra JVM options). The primary reason that we
ignore this is the Jayatana agent on Ubuntu; a secondary reason
is that it produces an annoying "Picked up JAVA_TOOL_OPTIONS: ..."
output message. When the elasticsearch-env batch script was introduced
for Windows, ignoring this environment variable was deliberately not
carried over as the primary reason does not apply on Windows. However,
after additional thinking, it seems that we should simply be consistent
to the extent possible here (and also avoid that annoying "Picked up
JAVA_TOOL_OPTIONS: ..." on Windows too). This commit causes the Windows
version of elasticsearch-env to also ignore JAVA_TOOL_OPTIONS.
Relates #25968
This commit adds a bootstrap check for the maximum file size, and
ensures the limit is set correctly when Elasticsearch is installed as a
service on systemd-based systems.
Relates #25974
When the elasticsearch-env.bat batch script exits due to an error on
Windows (e.g., Java can not be found, or the wrong version of Java is
found), the script sadly does not also terminate the caller; instead,
control returns to the caller. This means we have to explicitly exit in
the caller, so that is what we do in this commit.
Relates #25959