This change adds an option to the `FieldSortBuilder` that allows the type
of a numeric field to be cast to another. Possible values for this option are `long`,
which casts the source field to an integer, and `double`, which casts the source field to a floating point.
This new option is useful for cross-index searches where the sort field is mapped differently in some
indices. For instance, if a field is mapped as a floating point in one index and as an integer in another,
the type can be aligned for both indices using the `numeric_type` option:
```
{
  "sort": {
    "field": "my_field",
    "numeric_type": "double" <1>
  }
}
```
<1> Ensure that values for this field are transformed to a floating point if needed.
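For reference, a minimal sketch of the equivalent Java API usage, assuming the new option is exposed on `FieldSortBuilder` as a `setNumericType` setter (the surrounding `SearchSourceBuilder` calls are standard usage):
```
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.FieldSortBuilder;

// Sort on my_field, coercing its values to doubles across indices.
SearchSourceBuilder source = new SearchSourceBuilder()
        .sort(new FieldSortBuilder("my_field").setNumericType("double"));
```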
This commit enables full-cluster-restart and rolling-upgrade tests
to run with nodes using a JVM in FIPS approved-only mode, by using
PEM key material instead of a JKS for the transport layer in that
case.
With the SUN security provider, a CertificateException is thrown when
attempting to parse a Certificate from a PEM file on disk with
`sun.security.provider.X509Provider#parseX509orPKCS7Cert`.
When using the BouncyCastle security provider (as we do in FIPS
tests), the parsing happens in
CertificateFactory#engineGenerateCertificates, which doesn't throw
an exception but returns an empty list.
In order to have consistent behavior, this change makes it so
that we throw a CertificateException when attempting to read
a PEM file from disk and failing to do so with either security
provider.
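A minimal sketch of the consistent-failure idea, assuming a helper that reads certificates with whichever `CertificateFactory` the installed provider supplies; the helper name and message are illustrative:
```
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.cert.Certificate;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.util.Collection;

static Collection<? extends Certificate> readCertificates(Path path) throws Exception {
    CertificateFactory factory = CertificateFactory.getInstance("X.509");
    try (InputStream in = Files.newInputStream(path)) {
        Collection<? extends Certificate> certs = factory.generateCertificates(in);
        // BouncyCastle returns an empty collection instead of throwing, so
        // normalize that case to the exception the SUN provider would raise.
        if (certs.isEmpty()) {
            throw new CertificateException("failed to parse any certificates from [" + path + "]");
        }
        return certs;
    }
}
```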
Resolves: #39580
`SecurityIndexManager` is hardcoded to handle only the `.security`-`.security-7` alias-index pair.
This commit removes the hardcoded bits, so that the `SecurityIndexManager` can be reused
for other indices, such as the planned security tokens index (`.security-tokens-7`).
This change adjusts the LDAP connection timeout for retrieving
attributes while performing the SAML IT from its previous 5 ms
to 5 seconds.
Resolves: #40025
When an auto-follower coordinator times out waiting for the remote
cluster state, we do not log any indication of this. While this is
expected behavior in quiet deployments, it is still useful to see this
information for tracing the behavior of the auto-follow
coordinator. This commit adds a trace log message indicating that the
timeout elapsed.
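A minimal sketch of such a message, assuming standard parameterized logging; the message text and the `remoteCluster` variable are illustrative:
```
// Emit a trace-level hint when the wait for the remote cluster state times
// out, so the coordinator's behavior can still be traced in quiet deployments.
logger.trace("timed out waiting for cluster state from remote cluster [{}]", remoteCluster);
```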
This commit removes the cluster state size field from the cluster state
response, and drops the backwards compatibility layer added in 6.7.0 to
continue to support this field. As the calculation of this field was
expensive and its value dubious, we have elected to remove it.
Since other classes besides intervals can be serialized as part of
the `Cursor`, the `getNamedWritables` method should be moved from `Intervals`
to a more generic class, `Literals`.
Relates to #39973
Currently, we maintain a transport name ("mock-nio", "nio", "netty")
that is passed to a `TcpTransportChannel` when a request is received.
The value of this name is that it is associated with the task when we
register the task with the task manager. However, it is only possible to run ES with
one transport, so having an implementation-specific name is unnecessary.
This commit removes the name and replaces it with the generic
"transport".
Currently if a field alias is updated, any percolator queries that contain the
alias will still refer to its old target. This PR documents the issue while we
look into addressing it.
Relates to #37212.
The `sampler` agg creates a BestDocsDeferringCollector, which internally
initializes a priority queue of size `shardSize`. This queue is
populated with empty `Object` sentinels, which is roughly 16b per
object.
Similarly, the diversified samplers create a DiversifiedTopDocsCollector,
which internally tracks PQ slots with ScoreDocKeys, weighing in at
around 28b each.
If the user sets a very abusive `shard_size`, this could easily OOM
a node or cluster, since these PQs are allocated up front without
any checks.
This commit makes sure that when we create the collector, its size
cannot be greater than the overall index maxDoc, so that we don't
accidentally blow up the node. A similar treatment is done for the
`maxDocsPerValue` parameter of the diversified samplers.
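A minimal sketch of the clamping, assuming the collector is created somewhere with access to the index reader; the variable names are illustrative:
```
// Clamp the priority-queue size to the number of documents actually in the
// index, so an abusive shard_size cannot allocate a huge queue up front.
int maxDoc = searcher.getIndexReader().maxDoc();
int effectiveShardSize = Math.min(shardSize, maxDoc);
```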
For good measure, this also adds some circuit-breaker accounting to try
and track memory usage.
Finally, a redundant array creation is removed to reduce a bit of
temporary memory.
If multiple jobs are created together and the anomaly
results index does not exist, then some of the jobs could
fail to update the mappings of the results index. This
led them to fail to write their results correctly later.
Although this scenario sounds rare, it is exactly what
happens if the user creates their first jobs using the
Nginx module in the ML UI.
This change fixes the problem by updating the mappings
of the results index if it is found to exist during a
creation attempt.
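A minimal sketch of the idea, not the actual ML code; the index name, mapping source, and listener are stand-ins:
```
// If the results index already exists when a job is created, push the latest
// results mappings to it rather than relying on index creation to add them.
if (clusterState.metaData().hasIndex(resultsIndexName)) {
    client.admin().indices().preparePutMapping(resultsIndexName)
            .setType("_doc")
            .setSource(resultsMappings, XContentType.JSON)
            .execute(mappingUpdateListener);
}
```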
Fixes #38785
We call `ensureConnections()` to undo the effects of a disruption. However, it
is possible that one or more targets are currently CONNECTING and have been
since the disruption was active, and that the connection attempt was thwarted
by a concurrent disruption to the connection. If so, we cannot simply add our
listener to the queue because it will be notified when this CONNECTING activity
completes even though it was disrupted. We must therefore wait for all the
current activity to finish and then go through and reconnect to any missing
nodes.
Closes #40030.
This commit adds a variant for every official distribution that omits
the bundled jdk. The "no-jdk" naming is conveyed through the package
classifier, alongside the platform. Package tests are also added for
each new distribution.
Today we load the shard history retention leases from disk whenever opening the
engine, and treat a missing file as an empty set of leases. However in some
cases this is inappropriate: we might be restoring from a snapshot (if the
target index already exists then there may be leases on disk) or
force-allocating a stale primary, and in neither case does it make sense to
restore the retention leases from disk.
With this change we write an empty retention leases file during recovery,
except for the following cases:
- During peer recovery the on-disk leases may be accurate and could be needed
if the recovery target is made into a primary.
- During recovery from an existing store, as long as we are not
force-allocating a stale primary.
Relates #37165
Today we test Zen1/Zen2 compatibility by running 7.x nodes with a "fake" Zen1
implementation. However, this is not a truly faithful test, because these nodes
do know how to properly deserialize a 7.x cluster state, voting configurations
and all, whereas a real Zen1 node runs 6.7 and ignores the coordination
metadata.
We only ever apply a cluster state that's been committed, which in Zen2
involves setting the last-committed configuration to equal the last-accepted
configuration. Zen1 knows nothing about this adjustment, so it is possible for
these to differ. This breaks the assertion that the cluster states are equal on
all nodes after integration tests.
This commit fixes this by implementing this adjustment in Zen1 before applying
a cluster state.
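A minimal sketch of the adjustment, assuming access to the incoming state's coordination metadata; the surrounding code and variable names are illustrative, not the actual Zen1 shim:
```
// Mirror what committing does in Zen2: make the last-committed voting
// configuration equal to the last-accepted one before applying the state.
CoordinationMetaData adjusted = CoordinationMetaData.builder(incomingState.coordinationMetaData())
        .lastCommittedConfiguration(incomingState.coordinationMetaData().getLastAcceptedConfiguration())
        .build();
```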
Fixes #40055.
This change ensures that we do not make assumptions about the length
of the input that we can read from stdin. It still consumes only
one line, as the previous implementation did.
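A minimal sketch of the idea, assuming a reader over stdin; this is illustrative, not the actual terminal-handling code:
```
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Read exactly one line from stdin without assuming a maximum length;
// BufferedReader grows its buffer as needed.
BufferedReader reader = new BufferedReader(
        new InputStreamReader(System.in, StandardCharsets.UTF_8));
String line = reader.readLine();
```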
This PR adds an internal REST API for querying context information about
Painless whitelists.
Commands include the following:
GET /_scripts/painless/_context -- retrieves a list of contexts
GET /_scripts/painless/_context?context=%name% -- retrieves all available
information about the API for the specified context
As discovered in #40041, when parsing certificates from files, the
SUN Security Provider normalizes DNs from parsed certificates by
adding spaces between RDNs, while the BouncyCastle one (which we
use in FIPS tests) does not.
We could normalize the DNs in the same manner in this test, e.g.
by using the UnboundID LDAP SDK, but since the goal of this
test is to validate that we do get to read these exact certificates
from our trust sources, and not to validate subject DNs, this commit
changes the test to check the serial number instead.
Resolves: #40041
* Handle UTF8 values in the keystore
Our current implementation uses CharBuffer#array to get the chars
that were decoded from the UTF-8 bytes. The backing array of
CharBuffer is created in CharsetDecoder#decode and gets an initial
length that is the same as the length of the ByteBuffer it decodes,
hence the number of UTF-8 bytes.
This works fine for the first 128 characters, where each one needs
one byte, but for subsequent UTF-8 characters (other Latin alphabets,
Greek, Cyrillic, etc.), where we need 2 to 4 bytes per character, this
backing char array has a larger size than the number of chars the
CharBuffer actually contains. Calling `array()` on it will return
a char array that can potentially have extra null chars, so the
SecureString we get from the KeystoreWrapper is not the same as the
one we entered.
This commit changes the behavior to use Arrays#copyOfRange to get
the necessary chars from the CharBuffer, and adds a test with
random (maybe not printable) UTF-8 strings.
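A minimal sketch of the fix, assuming a ByteBuffer of decrypted UTF-8 bytes; the variable names are illustrative, not the actual KeystoreWrapper code:
```
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

CharBuffer charBuffer = StandardCharsets.UTF_8.decode(byteBuffer);
// The backing array may be longer than the decoded content, so copy only the
// chars between position and limit instead of calling charBuffer.array().
char[] chars = Arrays.copyOfRange(
        charBuffer.array(), charBuffer.position(), charBuffer.limit());
```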
org.elasticsearch.xpack.monitoring.action.MonitoringBulkRequestTests#testAddRequestContent
can still randomly use a defaultType for monitoring. The defaultType
support has been removed as of PR #39888. Prior to its removal, it
would default the type if one was not specified. The _type on the monitoring
bulk endpoint is currently required, though it is not used as the final index type
(which the defaultType would have been).
Closes #39980
Computing the compressed size of the cluster state on every invocation
of cluster:monitor/state action is expensive, and the value of this
field is dubious anyway. Therefore we want to remove computing this
field. As a first step, we stop computing and returning this field by
default. To avoid breaking users, we will give them a system property to
use to tide them over until the next major release when we will actually
remove this field. This comes with a deprecation warning too, and the
backport to the appropriate minor will also include a note in the
migration guide. There will be a follow-up to remove this field in the
next major version.
Previously, JDBC's REST call to the server always sent UTC
instead of the timezone passed through the connection string/properties.
Moreover, the conversion to java.sql.Date was problematic, as a
calculation on the epoch millis was used to set the time to 00:00:00.000
and the timezone info was lost. This caused the resulting java.sql.Date
object, which always uses the JVM's timezone (no matter what timezone
setting is used in the connection string/properties), to be created incorrectly.
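To illustrate the problem (not the actual driver code): truncating epoch millis to midnight is only correct relative to UTC, while a timezone-aware conversion computes the start of day in the session's timezone; the zone here is a stand-in for the connection setting:
```
import java.sql.Date;
import java.time.Instant;
import java.time.ZoneId;

long epochMillis = 1_554_000_123_456L;
// Millis-arithmetic truncation: midnight in UTC, whatever the session timezone.
Date utcBased = new Date(epochMillis - epochMillis % 86_400_000L);
// Timezone-aware alternative: start of day in the connection's timezone.
ZoneId zone = ZoneId.of("America/New_York");
long startOfDay = Instant.ofEpochMilli(epochMillis).atZone(zone)
        .toLocalDate().atStartOfDay(zone).toInstant().toEpochMilli();
Date zoneBased = new Date(startOfDay);
```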
Fixes: #39915
* [ML] Refactor common utils out of ML plugin to XPack.Core
* implementing GET filters with abstract transport
* removing added rest param
* adjusting how defaults can be supplied
* [Data Frame] Refactor PUT transform such that:
* POST _start creates the task and starts it
* GET transforms queries docs instead of tasks
* POST _stop verifies the stored config exists before trying to stop
the task
* Addressing PR comments
* Refactoring DataFrameFeatureSet#usage, decreasing the size returned by getTransformConfigurations
* fixing failing usage test