* Add notice for bundled jdk
This commit adds the license/notice for the bundled openjdk.
* First draft
* iteration
* Fix package notices
* Iteration
* One more iteration
This commit adds a variant for every official distribution that omits
the bundled jdk. The "no-jdk" naming is conveyed through the package
classifier, alongside the platform. Package tests are also added for
each new distribution.
* Bundle java in distributions
Setting up a jdk is currently a required external step when installing
elasticsearch. This is particularly problematic for the rpm/deb packages
as installing a jdk in the same package installation command does not
guarantee any order, so it must be done in separate steps. Additionally,
JAVA_HOME must be set and often causes problems in selecting a correct
jdk when, for example, the system java is an older unsupported version.
This commit bundles platform specific openjdks into each distribution.
In addition to eliminating the issues above, it also presents future
possible improvements like using jlink to build jdk images only
containing modules that elasticsearch uses.
closes #31845
This commit adds classifiers to the distributions indicating the
OS (for archives) and platform. The current OSes are windows, darwin (i.e.
macos) and linux. This change will allow future OS/architecture-specific
changes to the distributions. Note the docs using distribution links
have been updated, but will be reworked in a followup to make OS
specific instructions for the archives.
In order to support the JSON log format, a custom pattern layout was used and its configuration is enclosed in ESJsonLayout. Users are free to use their own patterns, but if smooth Beats integration is needed, they should use ESJsonLayout. EvilLoggerTests are left intact to make sure users' custom log patterns work fine.
To populate the additional fields node.id and cluster.uuid, which are not available at start time,
a cluster state update has to be received and the values passed to the log4j pattern converter.
A ClusterStateObserver.Listener is used to receive only one cluster state update. Once the update is received, the nodeId and clusterUuid are set in a static field in NodeAndClusterIdConverter.
The following fields are expected in JSON log lines: type, timestamp, level, component, cluster.name, node.name, node.id, cluster.uuid, message, stacktrace.
See ESJsonLayout.java for more details and field descriptions.
The Docker log4j2 configuration is now almost the same as the one used for the ES binary.
The only difference is that Docker uses console appenders, whereas ES uses file appenders.
relates: #32850
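For illustration, the hand-off could look roughly like the sketch below; the ClusterStateObserver API is the real one, but the setter name on NodeAndClusterIdConverter is an assumption.

```java
// Sketch: a one-shot listener copies node.id and cluster.uuid into the static
// state that the log4j pattern converter reads on every log line.
ClusterStateObserver observer =
    new ClusterStateObserver(clusterService, null, logger, threadPool.getThreadContext());
observer.waitForNextChange(new ClusterStateObserver.Listener() {
    @Override
    public void onNewClusterState(ClusterState state) {
        // runs exactly once; the setter name below is hypothetical
        NodeAndClusterIdConverter.setNodeIdAndClusterId(
            state.nodes().getLocalNodeId(), state.metaData().clusterUUID());
    }

    @Override
    public void onClusterServiceClose() {
        // node is shutting down; nothing to do
    }

    @Override
    public void onTimeout(TimeValue timeout) {
        // no timeout was given, so this is not expected
    }
});
```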
* Default include_type_name to false for get and put mappings.
* Default include_type_name to false for get field mappings.
* Add a constant for the default include_type_name value.
* Default include_type_name to false for get and put index templates.
* Default include_type_name to false for create index.
* Update create index calls in REST documentation to use include_type_name=true (see the client-side sketch below).
* Some minor clean-ups around the get index API.
* In REST tests, use include_type_name=true by default for index creation.
* Make sure to use 'expression == false'.
* Clarify the different IndexTemplateMetaData toXContent methods.
* Fix FullClusterRestartIT#testSnapshotRestore.
* Fix the ml_anomalies_default_mappings test.
* Fix GetFieldMappingsResponseTests and GetIndexTemplateResponseTests.
We make sure to specify include_type_name=true during xContent parsing,
so we continue to test the legacy typed responses. XContent generation
for the typeless responses is currently only covered by REST tests,
but we will be adding unit test coverage for these as we implement
each typeless API in the Java HLRC.
This commit also refactors GetMappingsResponse to follow the same approach
as the other mappings-related responses, where we read include_type_name
out of the xContent params, instead of creating a second toXContent method.
This gives better consistency in the response parsing code.
* Fix more REST tests.
* Improve some wording in the create index documentation.
* Add a note about types removal in the create index docs.
* Fix SmokeTestMonitoringWithSecurityIT#testHTTPExporterWithSSL.
* Make sure to mention include_type_name in the REST docs for affected APIs.
* Make sure to use 'expression == false' in FullClusterRestartIT.
* Mention include_type_name in the REST templates docs.
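For reference, a minimal sketch of what the transition looks like from the low-level REST client (index name and mapping are made up):

```java
// Sketch: a typed create-index call keeps working during the transition by
// passing include_type_name=true explicitly; the parameter now defaults to false.
Request createIndex = new Request("PUT", "/my-index");          // hypothetical index name
createIndex.addParameter("include_type_name", "true");
createIndex.setJsonEntity(
    "{ \"mappings\": { \"_doc\": { \"properties\": { \"field\": { \"type\": \"keyword\" } } } } }");
Response response = client.performRequest(createIndex);
```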
This commit adds a unique id to cluster blocks, so that they can be uniquely
identified if needed. This is important for the Close Index API where multiple
concurrent closing requests can be executed at the same time. By adding a
UUID to the cluster block, we can generate a unique "closing block" that can
later be verified on shards and then checked again from the cluster state
before closing the index. When the verification on shards is done, the closing
block is replaced by the regular INDEX_CLOSED_BLOCK instance.
If something goes wrong, calling the Open Index API will remove the block.
Related to #33888
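A simplified sketch of the idea (helper and accessor names are hypothetical, not the real ClusterBlock API):

```java
// Sketch: tag the temporary closing block with a UUID when closing starts, and
// verify the very same block is still present before finalizing the close.
final String closingBlockUUID = UUIDs.randomBase64UUID();
addClosingBlock(index, closingBlockUUID);                 // hypothetical helper

verifyShardsBeforeClose(index);                           // hypothetical helper

// later, in the final cluster state update:
boolean sameBlockStillPresent = blocksFor(index).stream() // hypothetical helper
    .anyMatch(block -> closingBlockUUID.equals(block.uuid()));
if (sameBlockStillPresent) {
    replaceBlock(index, INDEX_CLOSED_BLOCK);              // swap in the regular closed block
} else {
    // the block was removed (e.g. via the Open Index API): abort the close
}
```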
* Deprecate types in index API
- deprecate type-based constructors of IndexRequest
- update tests to use typeless IndexRequest constructors (as sketched below)
- no yaml tests as they have been already added in #35790
Relates to #35190
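A small sketch of the change from a caller's point of view (index, id and source are made up):

```java
// Deprecated, type-based construction:
IndexRequest typed = new IndexRequest("my-index", "_doc", "1")
    .source("{\"field\":\"value\"}", XContentType.JSON);

// Typeless construction, preferred going forward:
IndexRequest typeless = new IndexRequest("my-index")
    .id("1")
    .source("{\"field\":\"value\"}", XContentType.JSON);
```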
The following updates were made:
* Add deprecation warnings to `RestUpdateAction`, plus a test in `RestUpdateActionTests` (see the sketch below).
* Deprecate relevant methods on the Java HLRC requests/responses.
* Add HLRC integration tests for the typed APIs.
* Update documentation (for both the REST API and Java HLRC).
* Fix failing integration tests.
Because of an earlier PR, the REST yml tests were already updated (one version without types, and another legacy version that retains types).
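As a rough sketch of what such a warning looks like in the handler (the exact message and logger wiring may differ from the actual change):

```java
// Sketch: emit a deprecation warning when the typed update endpoint is used.
private static final DeprecationLogger deprecationLogger =
    new DeprecationLogger(LogManager.getLogger(RestUpdateAction.class));

// inside the request handling, when a type was specified:
deprecationLogger.deprecated("[types removal] Specifying types in document update requests is deprecated.");
```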
The commit changes how indices are closed in the MetaDataIndexStateService.
It now uses a 3-step process where writes are blocked on indices to be closed,
then some verifications are done on shards using the TransportVerifyShardBeforeCloseAction
added in #36249, and finally the indices' states are moved to CLOSE and their routing
tables removed.
The closing process also takes care of using the pre-7.0 way to close indices if the
cluster contains mixed versions of nodes and a node does not support the
TransportVerifyShardBeforeCloseAction. It also closes unassigned indices.
Related to #33888
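In outline, the new flow is roughly the following (method names are paraphrased, not the actual MetaDataIndexStateService code):

```java
// Paraphrased outline of the three-step close.
void closeIndices(Set<Index> indices) {
    // 1) block writes by adding a unique "closing" block to each index
    Map<Index, ClusterBlock> closingBlocks = addClosingBlocks(indices);

    // 2) verify every shard of the blocked indices via
    //    TransportVerifyShardBeforeCloseAction (added in #36249)
    verifyShardsBeforeClose(indices, closingBlocks);

    // 3) final cluster state update: confirm the closing block is still in place,
    //    move the index state to CLOSE and remove its routing table
    finalizeClose(indices, closingBlocks);
}
```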
For each API, the following updates were made:
- Add deprecation warnings to `Rest*Action`, plus tests in `Rest*ActionTests`.
- For each REST yml test, make sure there is one version without types, and another legacy version that retains types (called *_with_types.yml).
- Deprecate relevant methods on the Java HLRC requests/responses.
- Update documentation (for both the REST API and Java HLRC).
Back in #32983 I broke running the integ-test-zip tests against an
external cluster by adding a test that reads the contents of the log
file. This fixes running against an external cluster by explicitly
skipping that test if running against an external cluster.
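The guard amounts to something like the sketch below; the system property consulted here is an assumption for illustration.

```java
// Sketch: skip the log-file assertion when the tests target an external cluster.
public void testLogFileContents() throws Exception {
    boolean externalCluster = Boolean.parseBoolean(
        System.getProperty("tests.rest.cluster.external", "false")); // hypothetical property
    org.junit.Assume.assumeFalse(
        "reads the node's log file, which is not reachable for an external cluster", externalCluster);
    // ... existing log file assertions ...
}
```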
When we implemented `refresh=wait_for` I added a test with the wrong
name. This caused us to not run it. The test asserted that running
several operations with `refresh=wait_for` did not fail if the index was
`_close`d while the operations were waiting. But to be honest, failure
here isn't that bad. The index being waited on is closed. You can't do
anything with it any way. The most important thing is actually that
these operations don't hang forever. Because hanging forever means that
the resources used by the operations aren't freed.
Anyway, when I noticed the error I reenabled the test. But it doesn't
pass consistently because *sometimes* the operations being tested fail.
They don't seem to hang and they always fail with "this index is closed
so you can't do anything with it" sorts of messages.
When the test started failing we disabled it again. This reenables the
test but causes it to ignore these "index is closed" failures. We'd
prefer they not happen at all but in the grand scheme of things they are
fine and making sure these operations don't hang is much more important.
This also updates the test to bring it more in line with my current
understanding of the "right" way to use the low level rest client.
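Conceptually the tolerance is something like the following around each indexing call (the message matching is illustrative):

```java
// Sketch: treat "index closed" failures as acceptable while still failing on
// anything else; the important property is that the call returned at all.
try {
    client.performRequest(request); // indexing request with refresh=wait_for
} catch (ResponseException e) {
    String body = EntityUtils.toString(e.getResponse().getEntity());
    if (body.contains("closed") == false) {
        throw e; // unrelated failure: still a real test failure
    }
    // "index is closed"-style failures are fine here
}
```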
Changes the default of the `node.name` setting to the hostname of the
machine on which Elasticsearch is running. Previously it was the first 8
characters of the node id. This had the advantage of producing a unique
name even when the node name isn't configured but the disadvantage of
being unrecognizable and not being available until fairly late in the
startup process. Of particular interest is that it isn't available until
after logging is configured. This forces us to use a volatile read
whenever we add the node name to the log.
The hostname is available immediately on startup and is generally
recognizable but has the disadvantage of not being unique when run on
machines that don't set their hostname or when multiple elasticsearch
processes are run on the same host. I believe that, taken together, it
is better to default to the hostname.
1. Running multiple copies of Elasticsearch on the same node is a fairly
advanced feature. We do it all the time as part of the elasticsearch build
for testing but we make sure to set the node name then.
2. That the node.name defaults to some flavor of "localhost" on an
unconfigured box feels like it isn't going to come up too much in
production. I expect most production deployments to at least set the
hostname.
As a bonus, production deployments need no longer set the node name in
most cases. At least in my experience most folks set it to the hostname
anyway.
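For illustration only, deriving such a default in plain Java could look like this (not necessarily how the setting default is actually implemented):

```java
// Sketch: fall back to the machine's hostname when node.name is not configured.
String nodeName = settings.get("node.name");
if (nodeName == null) {
    try {
        nodeName = InetAddress.getLocalHost().getHostName();
    } catch (UnknownHostException e) {
        nodeName = "localhost"; // last-resort fallback
    }
}
```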
I created a test a few days ago and declared a package that doesn't line
up with the directory structure. Oops. I am a little surprised nothing
complained. But this fixes it.
I disabled one branch a few hours ago because it failed in CI. It looks
like other branches can also fail so I'll disable them as well and look
more closely on Monday.
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name in the logging pattern is no longer fixed at pattern build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
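A condensed sketch of a converter along those lines (plugin and key names are illustrative):

```java
// Sketch: a log4j2 pattern converter that reads the node name from a SetOnce
// on every print and falls back to "unknown" until the name is set.
@Plugin(category = PatternConverter.CATEGORY, name = "NodeNamePatternConverter")
@ConverterKeys({"node_name"})
public final class NodeNamePatternConverter extends LogEventPatternConverter {
    private static final SetOnce<String> NODE_NAME = new SetOnce<>();

    public static void setNodeName(String nodeName) {
        NODE_NAME.set(nodeName); // called once the name is finally known
    }

    public static NodeNamePatternConverter newInstance(String[] options) {
        return new NodeNamePatternConverter();
    }

    private NodeNamePatternConverter() {
        super("node_name", "node_name");
    }

    @Override
    public void format(LogEvent event, StringBuilder toAppendTo) {
        String name = NODE_NAME.get();
        toAppendTo.append(name == null ? "unknown" : name);
    }
}
```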
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `client` and `distribution` projects to use
the new versions.
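For reference, the shape of the migration (endpoint chosen arbitrarily):

```java
// Old, now-deprecated style:
Response oldStyle = client.performRequest("GET", "/_cluster/health",
    Collections.singletonMap("pretty", "true"));

// New Request-object style:
Request request = new Request("GET", "/_cluster/health");
request.addParameter("pretty", "true");
Response newStyle = client.performRequest(request);
```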
* Remove BouncyCastle dependency from runtime
This commit introduces a new gradle project that contains
the classes that have a dependency on BouncyCastle. For
the default distribution, it builds a jar from those and
puts it in a subdirectory of lib
(/tools/security-cli) along with the BouncyCastle jars.
This directory is then passed in the
ES_ADDITIONAL_CLASSPATH_DIRECTORIES of the CLI tools
that use these classes.
BouncyCastle is removed as a runtime dependency (remains
as a compileOnly one) from x-pack core and x-pack security.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `distribution/archives/integ-test-zip` project
to use the new versions.
We mistakenly enabled bundling of the default distribution's bin scripts
into the `integ-test-zip` artifact used by plugin authors to test plugins.
These didn't change the version of Elasticsearch used for testing but as
a side effect changed the LICENSE.txt from the Apache 2 license to the
Elastic license. We really didn't mean for that to happen. The bin script
and the elasticsearch-sql-cli jar file bundled into the distribution are
indeed governed by the Elastic license but we didn't intend for them to be
in the testing artifact in the first place. This removes them and fixes
the license of the `integ-test-zip` artifact.
For 6.3 we renamed the `tar` and `zip` distributions to `oss-tar` and
`oss-zip`. Then we added new `tar` and `zip` distributions that contain
x-pack and are licensed under the Elastic License. Unfortunately we
accidentally generated POM files alongside the new `tar` and `zip`
distributions that incorrectly claimed that they were Apache 2 licensed.
Oooops.
This fixes the license on the POMs generated for the `tar` and `zip`
distributions.
Applies default file and directory permissions to zip distributions
similar to how they're set for the tar distributions. Previously zip
distributions would retain permissions they had on the build host's
working tree, which could vary depending on its umask.
For #30799
We sign our official plugins yet this is not well-advertised and not at
all consumed during plugin installation. For plugins that are installed
over the intertubes, verifying that the downloaded artifact is signed by
our signing key would establish both integrity and validity of the
downloaded artifact. The chain of trust here is simple: our installable
artifacts (archive and package distributions) are signed, so if a user trusts
our packages via their signatures, and our plugin installer (which would
be executing trusted code) verifies the downloaded plugin, then the user
can trust the downloaded plugin too. This commit adds verification of
official plugins downloaded during installation. We do not add
verification for offline plugin installs; a user can download our
signatures and verify the artifacts themselves.
This commit also needs to solve a few interesting challenges. One of
these is that we want the bouncy castle JARs on the classpath only for
the plugin installer, but not for the runtime
Elasticsearch. Additionally, we want these JARs to not be present for
the JAR hell checks. To address this, we shift these JARs into a
sub-directory of lib (lib/tools/plugin-cli) that is only loaded for the
plugin installer, and in the plugin installer we filter any JARs in this
directory from the JAR hell check.
This commit removes xpack from being a meta-plugin-as-a-module.
It also fixes a couple of tests that were missing task dependencies, which
failed once the gradle execution order changed.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
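To make the randomization concrete, the global template amounts to something like this (written with the low-level client; the template name is illustrative):

```java
// Sketch: a catch-all template that (rarely) pins REST-test indices to two
// shards; tests that inspect templates opt out via the default_shards skip.
Request putTemplate = new Request("PUT", "/_template/global");   // hypothetical template name
putTemplate.setJsonEntity(
    "{ \"index_patterns\": [\"*\"], \"settings\": { \"index.number_of_shards\": 2 } }");
client.performRequest(putTemplate);
```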
The overall NOTICE file for the ML X-Pack module should
include the notices from the 3rd party C++ components as
well as the 3rd party Java components.
Adds tasks that check that all the jars we build have LICENSE.txt
and NOTICE.txt files and that the files are correct. Sets check to
depend on these tasks.
This is mostly there for extra paranoia because we automatically
configure all Jar tasks to include the LICENSE.txt and NOTICE.txt
files anyway. But it is quite possible to add configuration to those
tasks that would override either file.
This causes check to depend on several more things than it used to.
Take, for example, javadoc:
check depends on the new verifyJavadocJarNotice, which depends on
extractJavadocJar, which depends on javadocJar, which depends on
javadoc, so check now depends on javadoc.
This commit adds some build time checks that the archive distributions
and package distributions contain the appropriate license and notice
files, and the package distributions contain the appropriate license
metadata.
This commit adds the distribution type to the startup scripts so that we
can discern from log output and the main response the type of the
distribution (deb/rpm/tar/zip).
This commit moves the apache and elastic license files into a new
root level `licenses` directory and rewrites the top level LICENSE.txt
to clarify the repository has a mix of apache and elastic licensed code.
This commit adds the distribution flavor (default versus oss) to the
build process which is passed through the startup scripts to
Elasticsearch. This change will be used to customize the message on
attempting to install/remove x-pack based on the distribution flavor.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.
This commit removes running rest tests on the full zip and tar
distributions in favor of doing a simple extraction check like is done
for rpm and deb files. The rest tests are still run on the integ test
zip, at least for now (this should eventually be moved out to a different
location).
This commit moves the distribution specific tasks into the respective
archives and packages builds. The colocation of common and
distribution-specific tasks makes it much easier to reason about what is expected in a
particular distribution.
This commit adds intermediate gradle projects for archive based
distributions (zip, tar) and package based distributions (rpm, deb). The
grouping allows the common distribution build file to be considerably
shorter and clearly separated from the common zip/tar and rpm/deb
configuration.