This change tightens up the meaning of the "input_fields" field
in the file structure finder output. Previously it was permitted
but not calculated for JSON and XML files. Following this change
the field is called "column_names" and is only permitted for
delimited files.
Additionally, the way the column names are set for headerless
delimited files is refactored so that the naming logic is
encapsulated in a single place in the code rather than duplicated
in two places.
When we see a settings value, it could be a list, yet this should only
happen if the underlying setting type is a list setting type. This
commit adds validation that when we get a setting value that is a list,
the setting we are getting is a list setting, and similarly, that if we
get a value for a list setting, the underlying value is a list.
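A minimal standalone sketch of the two-way check, using illustrative types rather than the actual Settings code:

```java
import java.util.List;

// Standalone illustration of the consistency check: a list value requires a
// list setting, and a list setting requires a list value.
final class ListSettingConsistency {
    static void check(String key, boolean settingIsListType, Object rawValue) {
        final boolean valueIsList = rawValue instanceof List;
        if (valueIsList && settingIsListType == false) {
            throw new IllegalStateException("found list value for non-list setting [" + key + "]");
        }
        if (settingIsListType && valueIsList == false) {
            throw new IllegalStateException("expected list value for list setting [" + key + "]");
        }
    }
}
```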
This change copies and validates the soft-deletes setting during resize.
If the source enables soft-deletes, the target must also enable it.
Closes #33321
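A rough sketch of the validation with an illustrative method; only the `index.soft_deletes.enabled` setting name is taken from the real index settings:

```java
// Illustrative check, not the actual resize validation code: if the source
// index has soft deletes enabled, the resize target must not disable them.
final class SoftDeletesResizeCheck {
    static void validate(boolean sourceSoftDeletesEnabled, Boolean requestedTargetSoftDeletes) {
        if (sourceSoftDeletesEnabled && Boolean.FALSE.equals(requestedTargetSoftDeletes)) {
            throw new IllegalArgumentException(
                    "can not disable [index.soft_deletes.enabled] on the target "
                            + "because it is enabled on the source index");
        }
    }
}
```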
I created a test a few days ago and declared a package that doesn't line
up with the directory structure. Oops. I was a little surprised nothing
complained, but this fixes it.
Previously, when an arithmetic function got applied on a
table column in the `SELECT` clause, the name of the result
column contained weird characters used internally when
processing the SQL statement e.g.:
SELECT CHAR(emp_no % 10000) FROM "test_emp"
returned:
CHAR((emp_no{f}#14) % 10000))
as the column name instead of:
CHAR((emp_no) % 10000))
Also, fix an issue that causes a ClassCastException to be thrown
when using functions where both arguments are literals.
Closes #31869
Closes #33461
* LeafCollector.setScorer() now takes a Scorable
* Scorers may not have null Weights
* IndexWriter.getFlushingBytes() reports how much memory is being used by IW threads writing to disk
In some cases we want to skip wiping cluster settings after a REST
test. For example, one use-case would be in the full cluster restart
tests where we want to test cluster settings before and after a full
cluster restart. If we wipe the cluster settings before the restart,
then it would not be possible to assert on them after the restart.
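A sketch of how a test might opt out; the class and hook names here are assumptions modeled on the existing preserve-style overrides in the REST test base class:

```java
// Hypothetical opt-out in a full cluster restart test; the hook name is an
// assumption based on the description above, not a verified signature.
public class FullClusterRestartSettingsIT extends ESRestTestCase {
    @Override
    protected boolean preserveClusterSettings() {
        return true; // keep cluster settings so they can be asserted on after the restart
    }
}
```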
* [CCR] Delay auto follow license check
so that we are sure that there are auto-follow patterns configured.
Otherwise we would log a warning when someone is running with a basic or
gold license and has not used the CCR feature.
This is a new index privilege that the user needs to have in the follow cluster.
This privilege is required in addition to the `manage_ccr` cluster privilege in
order to execute the create and follow api.
Closes #33555
Today the FilterRoutingTests take the belt-and-braces approach of excluding
some node attribute values and including some others. This means that we don't
really test that both inclusion and exclusion work correctly: as long as one of
them works as expected then the test will pass. This change improves these
tests by only using one approach at once, demonstrating that both do indeed
work, and adds tests for various other scenarios too.
* Correctly handle NONE keyword for system keystore
As defined in the PKCS#11 reference guide
https://docs.oracle.com/javase/8/docs/technotes/guides/security/p11guide.html
PKCS#11 tokens can be used as the JSSE keystore and truststore and
the way to indicate this is to set `javax.net.ssl.keyStore` and
`javax.net.ssl.trustStore` to `NONE` (case sensitive).
This commit ensures that we honor this convention and do not
attempt to load the keystore or truststore if the system property is
set to NONE (see the sketch after this list).
* Handle password protected system truststore
When a PKCS#11 token is used as the system truststore, we need to
pass a password when loading it, even if only for reading
certificate entries. This commit ensures that if
`javax.net.ssl.trustStoreType` is set to `PKCS#11` (as it would be
when a PKCS#11 token is in use), the password specified in
`javax.net.ssl.trustStorePassword` is passed when attempting to
load the truststore.
Relates #33459
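A minimal sketch of the two cases above, assuming only the standard `KeyStore` API; this is not the production SSL bootstrap code:

```java
import java.security.KeyStore;

// Illustrative sketches of the two cases above, not the production code.
final class SystemTrustStoreSketch {

    // Case 1: a store path of NONE means there is no file to load.
    static boolean hasTrustStoreFile() {
        final String path = System.getProperty("javax.net.ssl.trustStore");
        return path != null && "NONE".equals(path) == false;
    }

    // Case 2: a PKCS#11 token is loaded with the configured password even
    // though we only read certificate entries from it.
    static KeyStore loadPkcs11TrustStore() throws Exception {
        final String password = System.getProperty("javax.net.ssl.trustStorePassword", "");
        final KeyStore store = KeyStore.getInstance("PKCS11");
        store.load(null, password.toCharArray()); // the token is addressed by its provider, not a file path
        return store;
    }
}
```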
In some cases we want to deprecate a setting, and then automatically
upgrade uses of that setting to a replacement setting. This commit adds
infrastructure for this so that we can upgrade settings when recovering
the cluster state, as well as when such settings are dynamically applied
on cluster update settings requests. This commit only focuses on cluster
settings, index settings can build on this infrastructure in a
follow-up.
When requesting job stats for `_all`, all ES tasks are accepted,
resulting in a lot of cluster traffic and memory overhead.
This commit correctly filters out non-ML job tasks.
Closes #33515
We may use different global checkpoints to validate/normalize the range
of a change request if the global checkpoint is advanced between these
calls. If this is the case, then we generate an invalid request range.
This commit reverses the logic for CCR license checks in a few
actions. This is done so that the successful case, which tends to be a
larger block of code, does not require indentation.
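The shape of the change, sketched with hypothetical names rather than the actual CCR action code:

```java
import java.util.function.Consumer;

// Illustrative shape only: fail fast on the non-compliant case so the larger
// successful block does not need an extra level of indentation.
final class LicenseCheckShape {
    static void doExecute(boolean ccrLicensed, Consumer<Exception> onFailure, Runnable successfulCase) {
        if (ccrLicensed == false) {
            onFailure.accept(new IllegalStateException("current license is non-compliant for [ccr]"));
            return;
        }
        // the success path, usually the bigger block, now stays at the outer level
        successfulCase.run();
    }
}
```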
We have some listeners in the CCR license tests that invoke Assert#fail
if the onSuccess method for the listener is unexpectedly invoked. This
can leave the main test thread hanging until the test suite times out
rather than failing quickly. This commit adds some latch countdowns so
that we fail quickly if these cases are hit.
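A sketch of the pattern with hypothetical names, not the actual test code: both callbacks count the latch down, so an unexpected success fails the test quickly instead of hanging until the suite timeout.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the listener under test does
//   onResponse -> unexpected.set(new AssertionError("unexpected success")); latch.countDown();
//   onFailure  -> latch.countDown();   (the expected path)
final class FailFastListenerSketch {
    static void await(CountDownLatch latch, AtomicReference<AssertionError> unexpected) throws Exception {
        if (latch.await(30, TimeUnit.SECONDS) == false) {
            throw new AssertionError("listener was never invoked");
        }
        if (unexpected.get() != null) {
            throw unexpected.get();
        }
    }
}
```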
In the multi-cluster-with-non-compliant-license tests, we try to write
out a java.policy to a temporary directory. However, if this temporary
directory does not already exist then writing the java.policy file will
fail. This commit ensures that the temporary directory exists before we
attempt to write the java.policy file.
This commit adds license checks for the auto-follow implementation. We
check the license on put auto-follow patterns, and then for every
coordination round we check that the local and remote clusters are
licensed for CCR. In the case of non-compliance, we skip coordination
yet continue to schedule follow-ups.
This commit ensures that we bootstrap a new history_uuid when force
allocating a stale primary. A stale primary should never be the source
of an operation-based recovery to another shard which exists before the
forced-allocation.
Closes #26712
* CompoundProcessor is in the ingest package now
-> resolved
* Java generics don't offer type checking so nothing
can be done here -> removed TODO and test
* #16019 was closed and not acted on
-> todo can go away
Clean up on top of the last fix: if we skip the entire test case then
the test run would fail because we skipped all the tests. This adds a
dummy test case to prevent that. It is a fairly nasty workaround; I plan
to work on something that makes it no longer required anyway.
If we're running on a platform where we can't install syscall filters
Elasticsearch logs a message before it reads the data directory to get
the node name. Because that log message doesn't have a node name this
test will fail. Since we mostly run the test on OSes where we *can*
install the syscall filters we can fairly safely skip the test on OSes
where we can't install the syscall filters.
Closes #33540
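A sketch of the skip, using a hypothetical test name and detection helper rather than the real packaging/logging test code:

```java
import static org.junit.Assume.assumeTrue;

// Sketch only; both the test and the helper are hypothetical stand-ins.
public class NodeNameInLogsSketchTests {

    public void testNodeNameAppearsInLogs() throws Exception {
        assumeTrue("startup logs a line without a node name when syscall filters are unsupported",
                systemCallFiltersSupported());
        // ... assert that every log line carries the node name ...
    }

    // Hypothetical helper; the real check would ask the bootstrap layer whether
    // system call filters could be installed on this platform.
    private boolean systemCallFiltersSupported() {
        return true;
    }
}
```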
I disabled one branch a few hours ago because it failed in CI. It looks
like other branches can also fail so I'll disable them as well and look
more closely on Monday.
Today when checking settings dependencies, we do not check if fallback
settings are present. This means, for example, that if
cluster.remote.*.seeds falls back to search.remote.*.seeds, and
cluster.remote.*.skip_unavailable and search.remote.*.skip_unavailable
depend on cluster.remote.*.seeds, and we have set search.remote.*.seeds
and search.remote.*.skip_unavailable, then validation will fail because
it is expected that cluster.remote.*.seeds is set here. This commit
addresses this by also checking fallback settings when validating
dependencies. To do this, we adjust the settings exist method to also
check for fallback settings, a case that it was not handling previously.
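A standalone sketch of the adjusted "exists" semantics, not the real Settings implementation:

```java
import java.util.Map;

// A setting is considered present if its own key is set or if the key of its
// fallback setting is set.
final class FallbackAwareExists {
    static boolean exists(Map<String, String> settings, String key, String fallbackKey) {
        return settings.containsKey(key)
                || (fallbackKey != null && settings.containsKey(fallbackKey));
    }
}
```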
Change the logging infrastructure to handle when the node name isn't
available in `elasticsearch.yml`. In that case the node name is not
available until long after logging is configured. The biggest change is
that the node name is no longer fixed into the logging pattern at build time.
Instead it is read from a `SetOnce` on every print. If it is unset it is
printed as `unknown` so we have something that fits in the pattern.
On normal startup we don't log anything until the node name is available
so we never see the `unknown`s.
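A minimal sketch of the idea, assuming Lucene's `SetOnce`; the real code lives in the logging pattern converter:

```java
import org.apache.lucene.util.SetOnce;

// The node name is looked up on every print and falls back to "unknown"
// until it has been set.
final class NodeNamePatternSketch {
    private final SetOnce<String> nodeName = new SetOnce<>();

    void setNodeName(String name) {
        nodeName.set(name); // throws if set twice
    }

    String nodeNameForLogLine() {
        final String name = nodeName.get();
        return name == null ? "unknown" : name;
    }
}
```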
This change adds support for enable and disable user APIs to the high
level rest client. There is a common request base class for both
requests with specific requests that simplify the use of these APIs.
The response for these APIs is simply an empty object so a new response
class has been created for cases where we expect an empty response to
be returned.
Finally, the put user documentation has been moved to the proper
location that is not within an x-pack sub directory and the document
tags no longer contain x-pack.
See #29827
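A rough sketch of how the new APIs might be called from the high level rest client; the request class names here are assumptions based on the description above, not verified signatures:

```java
// Assumed shape only: `client` is a RestHighLevelClient, and EnableUserRequest /
// DisableUserRequest are the request classes suggested by the description.
client.security().enableUser(new EnableUserRequest("jdoe"), RequestOptions.DEFAULT);
client.security().disableUser(new DisableUserRequest("jdoe"), RequestOptions.DEFAULT);
```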
We invoke force merge twice in the test to verify that recovery sources
are pruned when the global checkpoint advanced. However, if the global
checkpoint equals the local checkpoint in the first force-merge, the
second force-merge will be a noop because all deleted docs are expunged
in the first merge already. We need to flush a new segment to make merge
happen so we can verify that all recovery sources are pruned.
This endpoint accepts an arbitrary file in the request body and
attempts to determine the structure. If successful it also
proposes mappings that could be used when indexing the file's
contents, and calculates simple statistics for each of the fields
that are useful in the data preparation step prior to configuring
machine learning jobs.
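An illustrative call using the low-level REST client; the endpoint path and the sample payload are assumptions based on this description, not the documented API:

```java
import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class FindFileStructureExample {
    public static void main(String[] args) throws Exception {
        // A tiny delimited sample standing in for the arbitrary file contents.
        String sample = "time,airline,responsetime\n2018-01-01T00:00:00Z,AAL,132.2\n";
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/_xpack/ml/find_file_structure"); // assumed path
            request.setEntity(new NStringEntity(sample, ContentType.TEXT_PLAIN));
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```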
In an effort to encapsulate the different clients, the request
converters are being shuffled around. This splits the MigrationClient
request converters.
In an effort to encapsulate the different clients, the request
converters are being shuffled around. This splits the SnapshotClient
request converters.