I disabled one branch a few hours ago because it failed in CI. It looks
like other branches can also fail, so I'll disable them as well and look
at them more closely on Monday.
Change the logging infrastructure to handle the case where the node name
isn't available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name in log output is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available,
so we never see the `unknown`s.
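A rough sketch, assuming Log4j 2 and Lucene's `SetOnce`, of what such a
read-on-every-print converter can look like; the class and converter key
names are illustrative, not necessarily the exact production code:
```
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.pattern.ConverterKeys;
import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;
import org.apache.logging.log4j.core.pattern.PatternConverter;
import org.apache.lucene.util.SetOnce;

@Plugin(category = PatternConverter.CATEGORY, name = "NodeNamePatternConverter")
@ConverterKeys({"node_name"})
public final class NodeNamePatternConverter extends LogEventPatternConverter {
    private static final SetOnce<String> NODE_NAME = new SetOnce<>();

    // Called very early in startup, as soon as the node name is known.
    public static void setNodeName(String nodeName) {
        NODE_NAME.set(nodeName);
    }

    // Log4j 2 looks this factory method up by convention.
    public static NodeNamePatternConverter newInstance(final String[] options) {
        return new NodeNamePatternConverter();
    }

    private NodeNamePatternConverter() {
        super("NodeName", "node_name");
    }

    @Override
    public void format(LogEvent event, StringBuilder toAppendTo) {
        // Read on every print; fall back to "unknown" so the line still
        // fits the pattern before the node name has been set.
        String nodeName = NODE_NAME.get();
        toAppendTo.append(nodeName == null ? "unknown" : nodeName);
    }
}
```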
The main benefit of the upgrade for users is the search optimization for top-scored documents when the total hit count is not needed. However, this optimization is not activated in this change; there is another issue open to discuss how it should be integrated smoothly (a sketch of the underlying Lucene mechanism follows the notes below).
Some comments about the change:
* Tests that can produce negative scores have been adapted, but we need to forbid them completely: #33309

Closes #32899
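A minimal sketch, assuming the Lucene 8 `TopScoreDocCollector` API that this upgrade pulls in; the index path, field, and term are illustrative:
```
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.FSDirectory;

public class TopDocsSketch {
    public static void main(String[] args) throws Exception {
        try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/tmp/index")))) {
            IndexSearcher searcher = new IndexSearcher(reader);
            // totalHitsThreshold = 10: once 10 hits have been collected, Lucene
            // may skip blocks of non-competitive documents instead of scoring
            // every match, because no exact total hit count was requested.
            TopScoreDocCollector collector = TopScoreDocCollector.create(10, 10);
            searcher.search(new TermQuery(new Term("field", "value")), collector);
            TopDocs topDocs = collector.topDocs();
            // The total is now a lower bound, not necessarily an exact count.
            System.out.println(topDocs.totalHits.value + " (" + topDocs.totalHits.relation + ")");
        }
    }
}
```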
Gradle triggers the build of the bwc artifacts even if `assemble` is
disabled. Most users will not need bwc distributions after running
`./gradlew assemble`, so instead of forcing them to add
`-x buildBwcVersion`, we detect this and skip the configuration of the
artifacts.
- third party audit detects jar hell with the JDK, so we disable it
- the `jdk-non-portable` check in forbiddenapis detects classes being
used from the JDK (for FIPS) that are not portable; this is intended,
so we don't scan for it on FIPS
- different exclusion rules for third party audit on FIPS
Closes #33179
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `client` and `distribution` projects to use
the new versions.
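For reference, the shape of the migration, using the `Request` API from #29623; the endpoint and parameter here are illustrative:
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RequestFlavorExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Deprecated style from before #30315:
            // client.performRequest("GET", "/_cluster/health");
            Request request = new Request("GET", "/_cluster/health");
            request.addParameter("pretty", "true");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```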
On some Linux distributions tmpfiles.d cleans files and
directories under /tmp if they haven't been accessed for
10 days.
This can cause problems for ML as ML is currently the only
component that uses the temp directory more than a few
seconds after startup. If you didn't open an ML job for
10 days and then tried to open one, the temp directory
would already have been deleted.
This commit prevents the problem occurring in the case of
Elasticsearch being managed by systemd, as systemd private
temp directories are not subject to periodic cleanup (by
default).
Additionally there are now some docs to warn people about
the risk and suggest a manual mitigation for .tar.gz users.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same JVM, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
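A hedged sketch of sussing the node name out of the thread name; the
thread-name shape assumed here follows Elasticsearch's
`elasticsearch[<node name>][...]` convention, and the class is
illustrative rather than the actual test infrastructure:
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class ThreadNameNodeName {
    private static final Pattern NODE_THREAD = Pattern.compile("elasticsearch\\[([^\\]]+)\\]");

    static String nodeNameOrFallback() {
        String threadName = Thread.currentThread().getName();
        Matcher m = NODE_THREAD.matcher(threadName);
        // For threads that are not part of a node, keep the raw thread name
        // so the log line still carries something useful in that slot.
        return m.find() ? m.group(1) : threadName;
    }
}
```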
Explicitly include all subdirectories of these folders in
/usr/share/elasticsearch in package distributions so that they are
managed by the package manager. This change does not really have an
effect in the 7.x series, where there are no subdirectories in bin and
we were already doing this for lib and modules. It does have an effect in
the 6.x series, where the bin/x-pack subdirectory was not previously
tracked by the package manager and could be left behind on removal in
rpm distributions.
* Remove BouncyCastle dependency from runtime
This commit introduces a new gradle project that contains
the classes that have a dependency on BouncyCastle. For
the default distribution, it builds a jar from those classes
and puts it in a subdirectory of lib
(lib/tools/security-cli) along with the BouncyCastle jars.
This directory is then passed in the
ES_ADDITIONAL_CLASSPATH_DIRECTORIES environment variable of
the CLI tools that use these classes.
BouncyCastle is removed as a runtime dependency (it remains
a compileOnly one) from x-pack core and x-pack security.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `distribution/archives/integ-test-zip` project
to use the new versions.
The C2 compiler in JDK 10 appears to have an issue compiling to AVX-512
instructions (on hardware that supports such). As a workaround, this
commit adds a JVM flag on JDK 10+ to disable the use of AVX-512
instructions until a fix is introduced to the JDK. Instead, we use the
`-XX:UseAVX=2` flag to enable only AVX and AVX2.
Note: Based on my reading of the C2 code, this flag does not appear to
have any impact on hardware that does not support AVX2. I have tested
this manually on an Intel Atom C2538 processor that supports neither AVX
nor AVX2. I have also tested this manually on an Intel i5-3317U
processor that supports AVX but not AVX2.
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM, as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc., we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider; since
we control the testing infrastructure, this should be reliable
enough.
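A minimal sketch of that naive check; the `fips` substring match is an
assumption about provider naming (e.g. the BouncyCastle FIPS provider
registers itself as `BCFIPS`):
```
import java.security.Security;
import java.util.Locale;

final class FipsCheck {
    // Naive: trust the name of the first installed security provider,
    // which we can do because we control the testing infrastructure.
    static boolean inFipsJvm() {
        return Security.getProviders()[0].getName().toLowerCase(Locale.ROOT).contains("fips");
    }
}
```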
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS ciphers, and the
SAMLAuthenticator class, as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
re-enabled with precomputed and signed SAML messages at a later stage.
That will be covered in a subsequent PR.
We mistakenly enabled bundling of the default distribution's bin scripts
into the `integ-test-zip` artifact used by plugin authors to test plugins.
These didn't change the version of Elasticsearch used for testing but as
a side effect changed the LICENSE.txt from the Apache 2 license to the
Elastic license. We really didn't mean for that to happen. The bin scripts
and the elasticsearch-sql-cli jar file bundled into the distribution are
indeed governed by the Elastic license, but we didn't intend for them to be
in the testing artifact in the first place. This removes them and fixes
the license of the `integ-test-zip` artifact.
* Upgrade bouncycastle
Required to fix
`bcprov-jdk15on-1.55.jar; invalid manifest format `
on jdk 11
* Downgrade bouncycastle to avoid invalid manifest
* Add checksum for new jars
* Update tika permissions for jdk 11
* Mute test failing on jdk 11
* Add JDK11 to CI
* Thread#stop(Throwable) was removed
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-June/053536.html
* Disable failing tests #31456
* Temporarily disable doc tests
To see if there are other failures on JDK11
* Only blacklist specific doc tests
* Disable only failing tests in ingest attachment plugin
* Mute failing HDFS tests #31498
* Mute failing lang-painless tests #31500
* Fix backwards compatibility builds
Fix the Java version to 10 for ES 6.3
* Add 6.x to bwc -> java10
* Prefix out and err from buildBwcVersion for readability
```
> Task :distribution:bwc:next-bugfix-snapshot:buildBwcVersion
[bwc] :buildSrc:compileJava
[bwc] WARNING: An illegal reflective access operation has occurred
[bwc] WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/home/alpar/.gradle/wrapper/dists/gradle-4.5-all/cg9lyzfg3iwv6fa00os9gcgj4/gradle-4.5/lib/groovy-all-2.4.12.jar) to method java.lang.Object.finalize()
[bwc] WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
[bwc] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[bwc] WARNING: All illegal access operations will be denied in a future release
[bwc] :buildSrc:compileGroovy
[bwc] :buildSrc:writeVersionProperties
[bwc] :buildSrc:processResources
[bwc] :buildSrc:classes
[bwc] :buildSrc:jar
```
* Also set RUNTIME_JAVA_HOME for bwcBuild
So that we can make sure it's not too new for the build to understand.
* Align bouncycastle dependency
* Fix painless array tests
closes #31500
* Update jar checksums
* Keep 8/10 runtime/compile until consensus builds on 11
* Only skip failing tests if running on Java 11
* Failures are dependent on the compile java version, not the runtime
* Condition doc test exceptions on compiler java version as well
* Disable hdfs tests based on runtime java
* Set runtime java to minimum supported for bwc
* PR review
* Add comment with ticket for forbidden apis
So the issue here is that we want to avoid setting vm.max_map_count if
it is already equal to the desired value (the bootstrap check requires
262144). The reason we want to avoid this is that in some use-cases
using sysctl to set it will fail. In those cases, we want to enable
users to set the value externally and then have that cause the sysctl
call to be skipped, so that setups where sysctl would fail no longer
fail.
The package installation relies on java being in the path. If java is
not in the path, the tests fail at post-install time. This commit adds a
pre-install check to validate that java exists, and if it fails, the
package is never installed, and thus keeps a system clean, rather than
aborting at post-install and leaving behind a mess.
Closes #29665
With this commit we add the possibility to define further JVM options (and
system properties) based on the current environment. As a proof of concept, it
chooses Netty's allocator ergonomically based on the maximum defined heap size.
We switch to the unpooled allocator at 1GB heap size (value determined
experimentally, see #30684 for more details). We are also explicit about the
choice of the allocator in either case.
Relates #30684
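A hedged sketch of the ergonomic choice; the real launcher derives the max
heap from the parsed JVM options, and `io.netty.allocator.type` is the
system property Netty consults for its default allocator:
```
import java.util.ArrayList;
import java.util.List;

final class NettyAllocatorErgonomics {
    private static final long ONE_GB = 1L << 30;

    static List<String> choose(long maxHeapBytes) {
        List<String> jvmOptions = new ArrayList<>();
        // At 1GB of heap or less, switch to the unpooled allocator (value
        // determined experimentally, see #30684); be explicit either way
        // rather than relying on Netty's own default.
        if (maxHeapBytes <= ONE_GB) {
            jvmOptions.add("-Dio.netty.allocator.type=unpooled");
        } else {
            jvmOptions.add("-Dio.netty.allocator.type=pooled");
        }
        return jvmOptions;
    }
}
```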
This commit modifies the Sys V init startup scripts to only modify
vm.max_map_count if needed. In this case, needed means that the current
value is less than our default value of 262144 maps.
For 6.3 we renamed the `tar` and `zip` distributions to `oss-tar` and
`oss-zip`. Then we added new `tar` and `zip` distributions that contain
x-pack and are licensed under the Elastic License. Unfortunately we
accidentally generated POM files along side the new `tar` and `zip`
distributions that incorrectly claimed that they were Apache 2 licensed.
Oooops.
This fixes the license on the POMs generated for the `tar` and `zip`
distributions.
This was silly; Bouncy Castle has an armored input stream for reading
keys in ASCII armor format. This means that we do not need to strip the
header ourselves and base64 decode the key. This had problems anyway
because of discrepancies in the padding that Bouncy Castle would produce
and the JDK base64 decoder was expecting. Now that we armor input/output
the whole way during tests, we fix all random failures in test cases
too.
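A minimal sketch, assuming Bouncy Castle's bcpg artifact, of reading an
armored key the easy way; the key path is illustrative:
```
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.bouncycastle.bcpg.ArmoredInputStream;

public class ArmoredKeyExample {
    public static void main(String[] args) throws Exception {
        try (InputStream in = Files.newInputStream(Paths.get("public-key.asc"));
             ArmoredInputStream armored = new ArmoredInputStream(in)) {
            // ArmoredInputStream strips the BEGIN/END lines and decodes the
            // base64 body (padding included) for us, so no hand-rolled
            // header stripping or JDK base64 decoding is needed.
            byte[] decoded = armored.readAllBytes();
            System.out.println(decoded.length + " bytes of key material");
        }
    }
}
```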
The java version checker must be written with java 7 APIs so that it
can still run on old JVMs. In order to use java 8 APIs in other
launcher utilities, this commit moves the java version checker back to
its own jar.
This commit moves the default location of the full dependencies report
to be under the reports directory to align it with the location for the
dependenciesInfo task output.
A previous commit tried to add task dependencies for the
:distribution:generateDependenciesReport task so that a user did not
have to run "dependenciesInfo
:distribution:generateDependenciesReport". However this method did not
reliably add all task dependencies due to task ordering issues in
previous versions of Gradle and our build. This commit removes this for
now and a user will continue to have to run "dependenciesInfo
:distribution:generateDependenciesReport".
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check.

There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish; users only care
about licenses they might be exposed to if they use our assembled
products.

With all of the changes in this commit we have eliminated all unknown
licenses. A follow-up will enforce that when we add a new dependency it
does not get mapped to unknown; these will be forbidden in the future.
Therefore, with this change and earlier changes we are left with no
unknown licenses and two custom licenses; custom here means a license
that does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is either mapped to an SPDX license or mapped to
custom because it does not have an SPDX license.
We no longer need animal sniffer because we use JDK functionality (the
javac `--release` flag, introduced in JDK 9) to target older versions of
the JDK for compilation. This functionality means that the JDK handles
the problem of ensuring that we do not use JDK APIs from the version
that we are compiling with that are not available in the version that we
are compiling to. A previous commit removed this for the REST client
(where we target JDK 7) but a few traces were left behind.
This commit adjusts the indentation in the CLI scripts to give a clear
visual indication that the line being indented is a continuation of the
previous line.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell out to a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the Windows side of that effort and the Bash side was
in a previous commit.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell out to a common script, elasticsearch-cli. This approach is fine in
Bash where it is easy to tear arguments apart but it doesn't work so
well on Windows where quoting is insane. To avoid having to tear the
arguments apart to separate the first argument to elasticsearch-cli from
the remaining arguments, we instead choose a strategy where we can avoid
tearing the arguments apart. To do this, we will instead pass the main
class by an environment variable and then we can pass the arguments
straight through. This will let us avoid awful quoting issues on
Windows. This is the non-Windows side of that effort and the Windows
side will be in a follow-up.
If you invoke elasticsearch-plugin (or any other CLI script on Windows)
with a path that has a percent-encoded space (or any other
percent-encoded character), then because the CLI scripts now shell into
a common script (elasticsearch-cli), the percent-encoded space ends up
being interpreted as a parameter. For example, passing install --batch
file:/c:/encoded%20space/analysis-icu-7.0.0.zip to elasticsearch-plugin
leads to the %20 being interpreted as %2 followed by a zero. Here, the
%2 is interpreted as the second parameter (--batch), and the
InstallPluginCommand class ends up seeing
file:/c/encoded--batch0space/analysis-icu-7.0.0.zip as the path, which
does not exist. This commit addresses this by escaping the %* that is
used to pass the parameters to the common CLI script so that the common
script sees the correct parameters without the %2 being substituted.
Applies default file and directory permissions to zip distributions
similar to how they're set for the tar distributions. Previously zip
distributions would retain permissions they had on the build host's
working tree, which could vary depending on its umask.
For #30799
A previous commit added the public key used for signing artifacts to the
plugin CLI. This commit is an iteration on that to add the header and
footer to the key so that it is clear what the key is. Instead, we strip
the header/footer on read. With this change we simplify our tests,
where keys are generated already in this format and we previously had
to strip them on the test side.
We sign our official plugins yet this is not well-advertised and not at
all consumed during plugin installation. For plugins that are installed
over the intertubes, verifying that the downloaded artifact is signed by
our signing key would establish both integrity and validity of the
downloaded artifact. The chain of trust here is simple: our installable
artifacts (archive and package distributions) are signed, so if a user
trusts our packages via their signatures, and our plugin installer
(which would be executing trusted code) verifies the downloaded plugin,
then the user can trust the downloaded plugin too. This commit adds
verification of
official plugins downloaded during installation. We do not add
verification for offline plugin installs; a user can download our
signatures and verify the artifacts themselves.
This commit also needs to solve a few interesting challenges. One of
these is that we want the bouncy castle JARs on the classpath only for
the plugin installer, but not for the runtime
Elasticsearch. Additionally, we want these JARs to not be present for
the JAR hell checks. To address this, we shift these JARs into a
sub-directory of lib (lib/tools/plugin-cli) that is only loaded for the
plugin installer, and in the plugin installer we filter any JARs in this
directory from the JAR hell check.
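A rough sketch, assuming the Bouncy Castle OpenPGP API, of the kind of
detached-signature verification described above; the class and method
are illustrative, not the exact installer code:
```
import java.io.InputStream;

import org.bouncycastle.bcpg.ArmoredInputStream;
import org.bouncycastle.openpgp.PGPPublicKey;
import org.bouncycastle.openpgp.PGPPublicKeyRingCollection;
import org.bouncycastle.openpgp.PGPSignature;
import org.bouncycastle.openpgp.PGPSignatureList;
import org.bouncycastle.openpgp.jcajce.JcaPGPObjectFactory;
import org.bouncycastle.openpgp.operator.jcajce.JcaKeyFingerprintCalculator;
import org.bouncycastle.openpgp.operator.jcajce.JcaPGPContentVerifierBuilderProvider;

final class PluginSignatureCheck {
    // artifact: the downloaded zip; armoredSig: the detached .asc signature;
    // armoredKey: our public signing key.
    static boolean verify(InputStream artifact, InputStream armoredSig, InputStream armoredKey) throws Exception {
        PGPSignature signature = ((PGPSignatureList) new JcaPGPObjectFactory(
                new ArmoredInputStream(armoredSig)).nextObject()).get(0);
        PGPPublicKeyRingCollection keyrings = new PGPPublicKeyRingCollection(
                new ArmoredInputStream(armoredKey), new JcaKeyFingerprintCalculator());
        PGPPublicKey signingKey = keyrings.getPublicKey(signature.getKeyID());
        signature.init(new JcaPGPContentVerifierBuilderProvider(), signingKey);
        byte[] buffer = new byte[8192];
        for (int read = artifact.read(buffer); read != -1; read = artifact.read(buffer)) {
            signature.update(buffer, 0, read);
        }
        return signature.verify();
    }
}
```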
If you have an unusual umask (e.g., 0002) and clone the GitHub
repository then files that we stick into our packages like the
README.textile and the license will have a file mode of 0664 on disk yet
we expect them to be 0644. Additionally, the same thing happens with
compiled artifacts like JARs. We try to set a default file mode yet it
does not seem to take everywhere. This commit adds explicit file modes
in some places that we were relying on the defaults to ensure that the
built artifacts have a consistent file mode regardless of the underlying
build host.
This commit reduces the Windows CLI scripts to one-liners by moving all
of the redundant logic to an elasticsearch-cli script. This commit is
only the Windows side, a previous commit covered the Linux side.
We post snapshot builds to snapshots.elastic.co yet the official plugin
installer will not let you install such plugins without manually
downloading them and installing them from a file URL. This commit adds
the ability for the plugin installer to use snapshots.elastic.co for
installing official plugins if es.plugins.staging is set and the
current build is also a snapshot build. Otherwise, we continue to use
staging.elastic.co if the current build is a release build and
es.plugins.staging is set and, of course, use the release artifacts at
artifacts.elastic.co for release builds with es.plugins.staging unset.
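A hypothetical simplification of that three-way choice; the real
installer builds full download URLs, while this only picks the host:
```
final class PluginDownloadHost {
    static String choose(boolean isSnapshotBuild, String stagingProperty) {
        if (stagingProperty != null) {
            // es.plugins.staging set: snapshot builds go to the snapshots
            // host, release builds to the staging host.
            return isSnapshotBuild ? "https://snapshots.elastic.co" : "https://staging.elastic.co";
        }
        // es.plugins.staging unset: release artifacts.
        return "https://artifacts.elastic.co";
    }
}
```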
This commit reduces the Linux CLI scripts to one-liners by moving all of
the redundant logic to an elasticsearch-cli script. This commit is only
the Linux side, a follow-up will do this for Windows too.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple of tests which were missing task dependencies
and failed once the gradle execution order changed.