Downgrading an Elasticsearch node to an earlier version is unsupported, because
we do not make any attempt to guarantee that a node can read any of the on-disk
data written by a future version. Yet today we do not actively prevent
downgrades, and sometimes users will attempt to roll back a failed upgrade with
an in-place downgrade and get into an unrecoverable state.
This change adds the current version of the node to the node metadata file, and
checks the version found in this file against the current version at startup.
If the node cannot be sure of its ability to read the on-disk data then it
refuses to start, preserving any on-disk data in its upgraded state.
This change also adds a command-line tool to overwrite the node metadata file
without performing any version checks, to unsafely bypass these checks and
recover the historical and lenient behaviour.
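As a minimal sketch (the file format, names, and message here are illustrative assumptions, not the actual NodeMetaData implementation), the startup check amounts to:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class NodeVersionCheck {
    // Refuse to start if the on-disk metadata was written by a later version.
    public static void checkVersionCompatibility(Path metadataFile, int currentVersionId) throws Exception {
        int versionOnDisk = Integer.parseInt(Files.readString(metadataFile).trim());
        if (versionOnDisk > currentVersionId) {
            // Preserve the on-disk data in its upgraded state rather than risk
            // reading a format written by a future version.
            throw new IllegalStateException("cannot downgrade a node from version ["
                    + versionOnDisk + "] to version [" + currentVersionId + "]");
        }
    }
}
```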
Rollup jobs can define how long they should wait before rolling up new documents.
However, if the delay is smaller than the rollup interval, or is not a multiple of it,
the job can create incomplete buckets, because the max boundary for a job is computed
from the time when the job started, rounded to the interval, minus the delay. This change
fixes the computation by applying the delay subtraction before the rounding, in order to
ensure that we never create a boundary that falls in the middle of a bucket.
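For illustration, a minimal sketch of the before/after arithmetic (the interval, delay, and names are hypothetical):

```java
public class RollupBoundary {
    public static void main(String[] args) {
        long interval = 3_600_000L;            // 1h rollup interval, in ms
        long delay = 600_000L;                 // 10m delay, in ms
        long now = System.currentTimeMillis(); // time the job starts

        // Old: round first, then subtract the delay. With now = 10:37 this gives
        // 10:00 - 10m = 09:50, which falls in the middle of a bucket.
        long oldBoundary = (now / interval) * interval - delay;

        // New: subtract the delay first, then round. 10:37 - 10m = 10:27,
        // which rounds to 10:00, always a bucket boundary.
        long newBoundary = ((now - delay) / interval) * interval;

        System.out.println("old=" + oldBoundary + " new=" + newBoundary);
    }
}
```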
Secure settings currently error if they exist inside elasticsearch.yml.
This commit adds validation that non-secure settings do not exist inside
the keystore.
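A minimal sketch of the added validation, assuming hypothetical names (the real settings registration APIs differ):

```java
import java.util.Set;

public class KeystoreValidation {
    // Reject any keystore entry that is not a registered secure setting.
    static void validate(Set<String> keystoreSettingNames, Set<String> registeredSecureSettingNames) {
        for (String name : keystoreSettingNames) {
            if (registeredSecureSettingNames.contains(name) == false) {
                throw new IllegalArgumentException(
                        "setting [" + name + "] found in the keystore is not a secure setting");
            }
        }
    }
}
```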
Closes #41831
The date_histogram accepts an interval which can be either a calendar
interval (DST-aware, leap seconds, arbitrary length of months, etc.) or a
fixed interval (strict multiples of SI units). Unfortunately, the type is inferred
by first trying to parse the value as a calendar interval, then falling back to fixed
if that fails.
This leads to a confusing arrangement where `1d` == calendar, but
`2d` == fixed. And if you want a day of fixed time, you have to
specify `24h` (i.e. the next smallest unit). This arrangement is very
error-prone for users.
This PR adds `calendar_interval` and `fixed_interval` parameters to any
code that uses intervals (date_histogram, rollup, composite, datafeed, etc).
Calendar only accepts calendar intervals, fixed accepts any combination of
units (meaning `1d` can be used to specify `24h` in fixed time), and both
are mutually exclusive.
The old interval behavior is deprecated and will emit a deprecation warning.
It is also mutually exclusive with the two new parameters. In the future the
old dual-purpose interval will be removed.
The change applies to both the REST and Java clients.
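A minimal sketch of the new Java client usage; the builder methods below are the ones this change introduces, though exact signatures may differ:

```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;

public class IntervalExamples {
    public static void main(String[] args) {
        // Calendar interval: DST-aware, so "1d" follows the calendar day.
        DateHistogramAggregationBuilder calendar = AggregationBuilders.dateHistogram("by_day")
                .field("@timestamp")
                .calendarInterval(DateHistogramInterval.DAY);

        // Fixed interval: strict SI multiples, so "1d" here means exactly 24h.
        DateHistogramAggregationBuilder fixed = AggregationBuilders.dateHistogram("by_24h")
                .field("@timestamp")
                .fixedInterval(new DateHistogramInterval("1d"));
    }
}
```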
This commit changes how access tokens and refresh tokens are stored
in the tokens index.
Access token values are now hashed before being stored in the id
field of the `user_token` and before becoming part of the token
document id. Refresh token values are hashed before being stored
in the token field of the `refresh_token`. The tokens are hashed
without a salt value since these are v4 UUID values that have
enough entropy themselves, making both rainbow table attacks and
offline brute force attacks impractical.
As a side effect of this change and in order to support multiple
concurrent refreshes as introduced in #39631, upon refreshing an
<access token, refresh token> pair, the superseding access token
and refresh token values are stored in the superseded token doc,
encrypted with a key that is derived from the superseded refresh
token. As such, subsequent requests to refresh the same token in
the predefined time window will return the same superseding access
token and refresh token values, without hitting the tokens index
(as this only stores hashes of the token values). AES in GCM
mode is used for encrypting the token values and the key
derivation from the superseded refresh token uses a small number
of iterations as it needs to be quick.
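A minimal sketch of the scheme using standard JCA primitives; the iteration count, salt handling, and encoding below are illustrative assumptions, not the exact production parameters:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class SupersedingTokenCrypto {
    static String encrypt(String supersededRefreshToken, byte[] salt, String supersedingTokens) throws Exception {
        // Derive an AES key from the superseded refresh token. The iteration
        // count is kept small because this derivation needs to be quick.
        PBEKeySpec keySpec = new PBEKeySpec(supersededRefreshToken.toCharArray(), salt, 1024, 128);
        byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512")
                .generateSecret(keySpec).getEncoded();

        byte[] iv = new byte[12]; // 96-bit IV, the recommended size for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new GCMParameterSpec(128, iv));
        byte[] cipherText = cipher.doFinal(supersedingTokens.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(iv) + "." + Base64.getEncoder().encodeToString(cipherText);
    }
}
```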
For backwards compatibility reasons, the new behavior is only
enabled when all nodes in a cluster are in the required version
so that old nodes can cope with the token values in a mixed
cluster during a rolling upgrade.
If `keyedFilters` is null, the code assumes there are unkeyed filters... which
will NPE if the unkeyed filters were actually empty.
This refactors to simplify the filter assignment a bit, adds an empty
check, and tidies up some formatting.
This commit updates the default ciphers and TLS protocols that are used
when the runtime JDK supports them. New cipher support has been
introduced in JDK 11 and 12 along with performance fixes for AES GCM.
The ciphers are ordered with PFS ciphers being most preferred, then
AEAD ciphers, and finally those with mainstream hardware support. When
available, stronger encryption is preferred for a given cipher.
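For reference, the cipher suites a given runtime JDK supports can be listed with standard JSSE calls (this is plain JDK usage, not an Elasticsearch API):

```java
import javax.net.ssl.SSLContext;

public class SupportedCiphers {
    public static void main(String[] args) throws Exception {
        // Newer JDKs (11/12) expose additional suites, e.g. the TLSv1.3 AEAD ciphers.
        SSLContext context = SSLContext.getDefault();
        for (String suite : context.getSupportedSSLParameters().getCipherSuites()) {
            System.out.println(suite);
        }
    }
}
```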
This is a backport of #41385 and #41808. There are known JDK bugs with
TLSv1.3 that have been fixed in various versions. These are:
1. The JDK's bundled HttpsServer will loop endlessly under JDK 11 and JDK
12.0 (fixed in 12.0.1) based on the way the Apache HttpClient performs
a close (half close).
2. In all versions of JDK 11 and 12, the HttpsServer will loop endlessly
when certificates are not trusted or another handshake error occurs. An
email has been sent to the openjdk security-dev list and #38646 is open
to track this.
3. In JDK 11.0.2 and prior there is a race condition with session
resumption that leads to handshake errors when multiple concurrent
handshakes are going on between the same client and server. This bug
does not appear when client authentication is in use. This is
JDK-8213202, which was fixed in 11.0.3 and 12.0.
4. In JDK 11.0.2 and prior there is a bug where resumed TLS sessions do
not retain peer certificate information. This is JDK-8212885.
These issues are addressed by checking the current Java version and using
it to determine the supported protocols for tests that provoke these
issues.
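A hypothetical sketch of that approach; the exact version cutoffs depend on which of the bugs above a test provokes:

```java
import java.util.ArrayList;
import java.util.List;

public class TestProtocols {
    static List<String> supportedProtocols() {
        List<String> protocols = new ArrayList<>(List.of("TLSv1.2", "TLSv1.1"));
        Runtime.Version version = Runtime.version();
        // Only enable TLSv1.3 on JDK versions where the relevant bug is fixed,
        // e.g. 12.0.1 or later for the HttpsServer half-close loop.
        if (version.feature() > 12 || (version.feature() == 12 && version.update() >= 1)) {
            protocols.add(0, "TLSv1.3");
        }
        return protocols;
    }
}
```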
This commit fixes a test bug that ends up comparing the result of two consecutive calls to System.currentTimeMillis that can be different
on slow CIs.
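The flaky pattern, for illustration:

```java
public class ClockComparison {
    public static void main(String[] args) {
        long first = System.currentTimeMillis();
        long second = System.currentTimeMillis();
        // Not guaranteed to hold: the two calls can straddle a millisecond
        // boundary, especially on a slow CI machine. Capture the value once
        // and reuse it instead.
        assert first == second;
    }
}
```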
Closes #42064
Muting a number of AutoDetectMemoryLimitIT tests to give CI a chance to
settle before easing in required backend changes.
Relates elastic/ml-cpp#486
Relates #42086
Improve the hard_limit memory audit message by reporting how many bytes
over the configured memory limit the job was at the point of the last
allocation failure.
Previously the model memory usage was reported; however, this was
inaccurate and hence of limited use, primarily because the total
memory used by the model can decrease significantly after the model's
status is changed to hard_limit but before the model size stats are
reported from autodetect to ES.
While this PR contains the changes to the format of the hard_limit audit
message, it is dependent on modifications to the ml-cpp backend to
send additional data fields in the model size stats message. These
changes will follow in a subsequent PR. It is worth noting that this PR
must be merged prior to the ml-cpp one, to keep CI tests happy.
Due to a bug in the JTS WKT parser, JTS cannot parse most WKT shapes if
the shape type is written in lower case. For example, `point (1 2)`
causes JTS inside H2GIS to fail on the tr-TR locale as a result of
case-insensitive comparison.
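This is the classic Turkish dotted/dotless i problem, easy to reproduce with plain JDK calls:

```java
import java.util.Locale;

public class TurkishI {
    public static void main(String[] args) {
        // Under the Turkish locale, upper-casing "point" yields "POİNT" (dotted
        // capital I, U+0130), which no longer matches the WKT keyword "POINT".
        System.out.println("point".toUpperCase(new Locale("tr", "TR"))); // POİNT
        System.out.println("point".toUpperCase(Locale.ROOT));            // POINT
    }
}
```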
This commit removes the jdk11 download in vagrant provisioning and
converts it to using the jdk downloader for the system jdk, and sets up
a separate jdk for use by the test (which will be converted to running
gradle in a followup).
This commit adds a guard around reading the spooled LoggedExec output.
It is possible the exec command did not output anything, and failed,
which would trigger a failure to read the output file.
This commit fixes the logging in LoggedExec to read the in-memory
buffer from a local reference, instead of through getStandardOutput()
of the Exec task. This is needed because gradle internally wraps the
stream with a TeeOutputStream, breaking our cast.
Adds a note that restarting half or more of the master-eligible nodes means
you're no longer doing a rolling upgrade, and you may need to upgrade all of
the nodes before the cluster returns to health.
This change replaces the extremely unfriendly message
"Number of messages analyzed must be positive", in the
case where the sample lines were incorrectly grouped
into just one message, with an error that more helpfully
explains the likely root cause of the problem.
Configurations are stored in the .data-frame-internal-1
index, but users should not add configurations directly to
the index, as additional information is added to enable
access control. This adds a warning against allowing access
to the internal index.
The migrate tool was added when the native realm was created, to aid
users in converting from per-node file realms to the cluster-managed
native realm. While this tool was useful at the time, users should now
be using the native realm directly. This commit deprecates the tool,
to be removed in a followup for 8.0.
Previously we used LoggedExec for running the internal bwc builds.
However, this had bad performance implications, as all the output was
buffered into memory, so we changed back to a normal Exec. This commit
adds a `spoolOutput` setting to LoggedExec which can be used for
commands with large amounts of output, and switches the bwc builds to
use this flag.
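A hypothetical sketch of the idea behind the flag (not the actual LoggedExec code): large output is spooled to a file instead of an in-memory buffer, to be read back only if the command fails:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.file.Path;

public class SpoolChoice {
    static OutputStream outputStream(boolean spoolOutput, Path spoolFile) throws Exception {
        if (spoolOutput) {
            // Large output: keep it off-heap, on disk.
            return new BufferedOutputStream(new FileOutputStream(spoolFile.toFile()));
        }
        // Small output: cheap to buffer in memory.
        return new ByteArrayOutputStream();
    }
}
```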
This test can create and shuffle 2*(3*5*7) = 210 shards which is quite
heavy for our CI. This commit reduces the load, so we don't timeout on
CI.
Closes #28153
Interval * integer is a valid operation which previously was
only supported for foldables (literals) and not when a field was
involved. That was because:
1. There was no common type returned for that combination.
2. `BinaryArithmeticOperation` permitted the multiplication
(called by fold()) but `BinaryArithmeticProcessor` didn't allow it.
Moreover, the error message for invalid arithmetic operations was wrong
because of an issue with the overloaded methods of
`LoggerMessageFormat.format`.
Fixes #41239
Fixes #41200
(cherry picked from commit 91039bab12d3ef27d6eac9cdc891a3b3ad0c694d)
If a basic license enables security, then we should also enforce TLS
on the transport interface.
This was already the case for Standard/Gold/Platinum licenses.
For Basic, security defaults to disabled, so some of the process
around checking whether security is actually enabled is more complex
now that we need to account for basic licenses.
Adds an initial, limited implementation of geo features to SQL. This implementation is based on the [OpenGIS® Implementation Standard for Geographic information - Simple feature access](http://www.opengeospatial.org/standards/sfs), which is the current standard for GIS system implementation. This effort concentrates on the SQL option, AKA ISO 19125-2.
Queries that are supported as a result of this initial implementation:
Metadata commands:
- `DESCRIBE table` - returns the correct column types `GEOMETRY` for geo shapes and geo points.
- `SHOW FUNCTIONS` - returns a list that includes supported `ST_` functions
- `SYS TYPES` and `SYS COLUMNS` display correct types `GEO_SHAPE` and `GEO_POINT` for geo shapes and geo points accordingly.
Returning geoshapes and geopoints from Elasticsearch:
- `SELECT geom FROM table` - returns the geoshapes and geopoints as libs/geo objects in JDBC or as WKT strings in the console.
- `SELECT ST_AsWKT(geom) FROM table;` and `SELECT ST_AsText(geom) FROM table;` - return the geoshapes and geopoints in their WKT representation.
Using geopoints in Elasticsearch:
- The following functions will be supported for geopoints in queries, sorting and aggregations: `ST_GeomFromText`, `ST_X`, `ST_Y`, `ST_Z`, `ST_GeometryType`, and `ST_Distance`. In most cases when used in queries, sorting and aggregations, these functions are translated into scripts. These functions can be used in the SELECT clause for both geopoints and geoshapes.
- `SELECT * FROM table WHERE ST_Distance(ST_GeomFromText('POINT (1 2)'), point) < 10;` - returns all records for which `point` is located within 10m of `POINT (1 2)`. In this case the WHERE clause is translated into a range query.
Limitations:
Geoshapes cannot be used in queries, sorting and aggregations as part of this initial effort. In order to fully take advantage of geoshapes we would need to have access to geoshape doc values, which is coming in #37206. `ST_Z` cannot be used on geopoints in queries, sorting and aggregations since we don't store altitude in geo_point doc values.
Relates to #29872
Backport of #42031
This change updates tests that use a CountDownLatch to synchronize the
running of threads when testing concurrent operations so that we ensure
the threads have been fully created and run by the scheduler. Previously,
these tests used a latch with a value of 1 and the test thread counted
down while the threads performing concurrent operations just waited.
This change updates the value of the latch to be 1 + the number of
threads. Each thread counts down and then waits. This means that each
thread has been constructed and has started running. All threads will
have a common start point now.
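The resulting rendezvous pattern looks like this (a self-contained sketch, not the actual test code):

```java
import java.util.concurrent.CountDownLatch;

public class CommonStartPoint {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 4;
        // 1 count for the test thread plus one per worker thread.
        final CountDownLatch startLatch = new CountDownLatch(1 + threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    startLatch.countDown(); // signal: this thread is constructed and running
                    startLatch.await();     // wait until everyone else is too
                    // ... perform the concurrent operation under test ...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
        startLatch.countDown(); // the test thread joins the rendezvous
        startLatch.await();     // all threads now share a common start point
    }
}
```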
Today we do not expose the cluster UUID in any logs by default, but it would be
useful to see it. For instance if a user starts multiple nodes as separate
clusters then they will silently remain as separate clusters even if they are
subsequently reconfigured to look like a single cluster. This change logs the
committed cluster UUID the first time the node encounters it.
* Switch to using a list instead of a Set for the filters, so that the
order of these filters is kept.
(cherry picked from commit 74a743829799b64971e0ac5ae265f43f6c14e074)
The `toStepKeys()` method was only called in its own test case. The real
list of StepKeys that's used in action execution is generated from the
list of actual step objects returned by `toSteps()`.
This commit removes that method.
If remote recovery copies an index commit which has gaps in sequence
numbers to a follower, then these assertions (introduced in #40823)
don't hold for follower replicas.
Closes #41037
* [ML] adding pivot.size option for setting paging size
* Changing field name to address PR comments
* fixing ctor usage
* adjust hlrc for field name change
* [ML] properly nesting objects in document source
* Throw exception on agg extraction failure, cause it to fail df
* throwing error to stop df if unsupported agg is found
The elasticsearch-cli helper script does not use the tempdir created by
elasticsearch-env, yet the env script still creates it. This can lead to
lots of temp directories being created when running cli scripts in an
automated fashion. This commit passes a fake tmpdir to the env script to
avoid creation.
Closes #34445
On debian-8, when trying to apt-get update, it currently (sometimes)
fails on one of the extra repositories. This failure to update causes
keys to not be updated, which later can cause some packages to not
install due to lack of key verification. This commit removes the
troublesome repository before we attempt to update.
Closes #42017
This commit adds deletion of the bin directory to postrm cleanup. While
the package's bin files are cleaned up by the package manager, plugins
may have created subdirectories under bin. We already clean up plugins,
but not the extra bin dirs their installation created.
Closes #18109
Currently IndexAnalyzers keeps the three default analyzers as separate class
members, although they should refer to the same analyzers held in the additional
analyzers map under the default names. This assumption should be made more
explicit by keeping all analyzers in the map. This change adapts the constructor
to check that all the default entries are present, and the getters to reach into
the map with the default names when needed.
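A minimal sketch of the resulting shape, with assumed default names and a String stand-in for NamedAnalyzer:

```java
import java.util.Map;

public class IndexAnalyzersSketch {
    private final Map<String, String> analyzers; // the real code maps names to NamedAnalyzer

    IndexAnalyzersSketch(Map<String, String> analyzers) {
        if (analyzers.containsKey("default") == false) {
            throw new IllegalArgumentException("the default analyzer must be set");
        }
        this.analyzers = analyzers;
    }

    // Getters simply reach into the map under the reserved default names.
    public String getDefaultIndexAnalyzer() {
        return analyzers.get("default");
    }

    public String getDefaultSearchAnalyzer() {
        return analyzers.getOrDefault("default_search", analyzers.get("default"));
    }
}
```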