This change removes the use of 0-all for auto expand replicas for the
security index. The use of 0-all causes some unexpected behavior with
certain allocation settings. This change allows us to avoid these issues with
a default install. If necessary, the number of replicas can be tuned by
the user.
Closes #29933
Closes #29712
This commit adds tracking and reporting for fetch exceptions. We track
fetch exceptions per fetch, keeping track of up to the maximum number of
concurrent fetches. With each failing fetch, we associate the from
sequence number of the fetch with the exception that caused it to fail. We report
these in the CCR stats endpoint, and add some testing for this tracking.
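For illustration, a minimal sketch of what such per-fetch tracking could look like (hypothetical class and method names, not the actual ShardFollowNodeTask code): a bounded, insertion-ordered map keyed by the from sequence number, capped at the maximum number of concurrent fetches.
```
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative tracker: remembers the most recent fetch exceptions, keyed by
 * the from sequence number of the failing fetch, bounded by the maximum
 * number of concurrent fetches so the map cannot grow without limit.
 */
class FetchExceptionTracker {
    private final Map<Long, Exception> fetchExceptions;

    FetchExceptionTracker(int maxConcurrentFetches) {
        this.fetchExceptions = new LinkedHashMap<Long, Exception>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Exception> eldest) {
                return size() > maxConcurrentFetches; // evict the oldest entry once the cap is exceeded
            }
        };
    }

    synchronized void onFetchFailure(long fromSeqNo, Exception e) {
        fetchExceptions.put(fromSeqNo, e);
    }

    synchronized void onFetchSuccess(long fromSeqNo) {
        fetchExceptions.remove(fromSeqNo); // a successful retry clears the recorded failure
    }

    /** Snapshot used when building the stats response. */
    synchronized Map<Long, Exception> snapshot() {
        return Collections.unmodifiableMap(new LinkedHashMap<>(fetchExceptions));
    }
}
```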
Welp, I broke this. I merged a change to auto-discover the CCR QA tests
by making :x-pack:plugin:ccr:check auto-discover the check tasks in the
qa sub-project. Yet, the check tasks for these sub-projects did not
depend on the necessary test tasks (as we were previously doing this
directly from the ccr build file). This commit fixes this!
This commit addresses a race condition in the scheduler engine test that
checks that a listener that throws an exception does not cause other
listeners to be skipped. The race here is that we were counting down a
latch and then throwing an exception, yet an assertion that expected the
exception to have already been thrown could execute after the latch was
counted down for the final time but before the exception was thrown and
acted upon by the scheduler engine. This commit addresses this by moving
the counting down of the latch so that it definitely happens after the
exception is acted upon by the scheduler engine.
This commit removes the getMetadata() methods from the DateHistoGroupConfig
and HistoGroupConfig objects. This way the configuration objects no longer rely
on RollupField.formatMetaField() and do not expose a getMetadata() method that
is tightly coupled to the rollup indexer.
* es/master: (62 commits)
[DOCS] Add docs for Application Privileges (#32635)
Add versions 5.6.12 and 6.4.1
Do NOT allow termvectors on nested fields (#32728)
[Rollup] Return empty response when aggs are missing (#32796)
[TEST] Add some ACL yaml tests for Rollup (#33035)
Move non duplicated actions back into xpack core (#32952)
Test fix - GraphExploreResponseTests should not randomise array elements Closes #33086
Use `addIfAbsent` instead of checking if an element is contained
TESTS: Fix Random Fail in MockTcpTransportTests (#33061)
HLRC: Fix Compile Error From Missing Throws (#33083)
[DOCS] Remove reload password from docs cf. #32889
HLRC: Add ML Get Buckets API (#33056)
Watcher: Improve error messages for CronEvalTool (#32800)
Search: Support of wildcard on docvalue_fields (#32980)
Change query field expansion (#33020)
INGEST: Cleanup Redundant Put Method (#33034)
SQL: skip uppercasing/lowercasing function tests for AZ locales as well (#32910)
Fix the default pom file name (#33063)
Switch ml basic tests to new style Requests (#32483)
Switch some watcher tests to new style Requests (#33044)
...
If a search request doesn't contain aggs (or an empty agg object),
we should just return an empty response. This is how the normal search
API works if you specify zero hits and empty aggs.
The existing behavior throws an exception because it tries to send
an empty msearch.
Closes #32256
These two tests complement the existing unit tests which check Rollup's
ACL/security integration.
The first test creates two indices, puts a document in each one, and then
assigns a role to the test user that can only access one of the indices.
A rollup job is created with a pattern that would match both indices,
and we verify that only the allowed document was rolled up (e.g. verifying
that the unpermissioned index stays hidden).
The second test creates a single index with two documents tagged by
the keyword "public"/"private". An attribute-based role is created
that only allows viewing "public" documents. We then verify the rollup
job only rolled the "public" doc, and not the "private" one.
Most actions' request and response classes were moved from xpack core into
protocol. We have decided to duplicate the actions in the HLRC
instead of trying to reuse them. This commit moves the non duplicated
actions back into xpack core and severs the tie between xpack core and
protocol so no other actions can be moved and not duplicated.
CronEvalTool now prints an error only for cron expressions that result in
no upcoming time events.
If a cron expression results in fewer than the specified count
(default 10) of time events, all of the upcoming times are printed
without displaying an error message.
Closes #32735
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/ml-basic-multi-node` project to use
the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/smoke-test-monitoring-with-watcher`,
`x-pack/qa/smoke-test-watcher`, and
`x-pack/qa/smoke-test-watcher-with-security` projects to use the new
versions.
In our Netty layer we have had to take extra precautions against Netty
catching throwables which prevents them from reaching the uncaught
exception handler. This code has taken on additional uses in the NIO layer
and now in the scheduler engine because there are other components in
stack traces that could catch throwables and suppress them from reaching
the uncaught exception handler. This commit is a simple cleanup of the
iterative evolution of this code to refactor all uses into a single
method in ExceptionsHelper.
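A rough sketch of the shape such a helper can take (simplified, with illustrative names, not the exact ExceptionsHelper code): detect a fatal error and rethrow it on a forked thread so it reaches the uncaught exception handler instead of being swallowed by the calling framework.
```
import java.util.Optional;

final class FatalErrorHelper {

    private FatalErrorHelper() {}

    /** Returns the throwable if it is an Error that must not be swallowed. */
    static Optional<Error> maybeError(Throwable t) {
        return t instanceof Error ? Optional.of((Error) t) : Optional.empty();
    }

    /**
     * If the throwable is a fatal error, rethrow it on a forked thread so it
     * reaches the uncaught exception handler instead of being captured by
     * whatever framework (Netty, a scheduled executor, ...) invoked us.
     */
    static void maybeDieOnAnotherThread(Throwable t) {
        maybeError(t).ifPresent(error -> {
            Thread thread = new Thread(() -> {
                throw error;
            });
            thread.start();
        });
    }
}
```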
Add documentation for #31238
- Add documentation for the req_authn_context_class_ref setting
- Add a section in SAML Guide regarding the use of SAML
Authentication Context.
This change splits the keytab file test cases into separate tests instead
of using randomized testing to exercise the different branches in the
code within the same test.
On the Windows platform, the keytab file permission test required
additional security permissions for the test framework. As this was the
only test that required those permissions, that test is now skipped on
Windows. The same scenario gets tested in *nix environments.
Closes #32768
* HLRC: Adding GET ML Job info API
* HLRC: Adding GET Job ML API
* Fixing QueryPage license header
* Adding serialization tests, addressing minor issues
* Renaming querypage, changing the dependency on it
* Making things immutable
* Fixing build failure due to method rename
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/audit-tests`,
`x-pack/qa/ml-disabled`, and `x-pack/qa/multi-node` projects to use the
new versions.
Today we are by-hand maintaining a list of CCR QA sub-projects that the
check task depends on. This commit simplifies this by finding these
sub-projects automatically and adding their check task as dependencies
of the CCR check task.
This commit moves the ML QA tests to be a sub-project of ML. The purpose
of this refactoring is to enable ML developers to run
:x-pack:plugin:ml:check and run the vast majority of the ML tests with a
single command (this still does not contain the ML REST tests, nor the
upgrade tests). This simplifies local development for faster iteration.
When discussing this test, it made little sense that testMonitoringService
would fail but not testMonitoringBulk given their similarity. So we agreed to
enable it again.
Relates #29880
* Add relevant documentation for FIPS 140-2 compliance.
* Introduce `fips_mode` setting.
* Discuss necessary configuration for FIPS 140-2
* Discuss introduced limitations by FIPS 140-2
* [DOCS] Add configurable password hashing docs
Adds documentation about the newly introduced configuration option
for setting the password hashing algorithm to be used for the users
cache and for storing credentials for the native and file realm.
When the application privileges feature was backported to 6.x/6.4 the
BWC version checks on the backport were updated to 6.4.0, but master
was not updated.
This commit updates all relevant version checks, and adds tests.
This commit adds a troubleshooting section for Kerberos.
Most of the time the problems seen are caused by invalid
configurations like keytabs missing principals or credentials
not being up to date. Time synchronization is an important part of
Kerberos infrastructure and time skew can cause problems.
To aid further debugging, the documentation explains how to enable JAAS
Kerberos login module debugging and Kerberos/SPNEGO debugging
by setting JVM system properties.
This commit implements licensing for CCR. CCR will require a platinum
license, and administrative endpoints will be disabled when a license is
non-compliant.
There are two problems with the scheduler engine today. Both relate to
listeners that throw.
The first problem is that any triggered listener that throws a plain old
exception will cause no additional listeners to be triggered for the
event, and will also cause the scheduler to never be invoked again. This
leads to lost events and is bad.
The second problem is that any triggered listener that throws an error
of the fatal kind will not have that error caught by the
uncaught exception handler. This is because the triggered listener is
executed as a future task under a scheduled thread pool executor. A
throwable there gets caught by the JDK framework and set as the outcome
on the future task. Since we never inspect these tasks for their
outcomes, nor is there a good place to do this, we have to handle these
errors ourselves. To do this, we catch them and dispatch them to the
uncaught exception handler via a forked thread. This is similar to our
handling in Netty.
* HLRC: Adding ML Close Job API
HLRC: Adding ML Close Job API
* reconciling request converters
* Adding serialization tests and addressing PR comments
* Changing constructor order
* master:
Generalize remote license checker (#32971)
Trim translog when safe commit advanced (#32967)
Fix an inaccuracy in the dynamic templates documentation. (#32890)
Logging: Use settings when building daemon threads (#32751)
All Translog inner closes should happen after tragedy exception is set (#32674)
HLREST: AwaitsFix ML Test
Pass DiscoveryNode to initiateChannel (#32958)
Add mzn and dz to unsupported locales (#32957)
Use settings from the context in BootstrapChecks (#32908)
Update docs for node specifications (#30468)
HLRC: Forbid all Elasticsearch logging infra (#32784)
Only configure publishing if it's applied externally (#32351)
Fixes libs:dissect when in eclipse
Protect ScriptedMetricIT test cases against failures on 0-doc shards (#32959) (#32968)
[Kerberos] Add documentation for Kerberos realm (#32662)
Watcher: Properly find next valid date in cron expressions (#32734)
Fix some small issues in the getting started docs (#30346)
Set forbidden APIs target compatibility to compiler java version (#32935)
Move connection listener to ConnectionManager (#32956)
Machine learning has a baked-in remote license checker for use in checking
the license compatibility of a remote cluster. This remote license checker
has general usage for any feature that relies on a remote cluster. For
example, cross-cluster replication will pull changes from a remote
cluster and require that the local and remote clusters have platinum
licenses. This commit generalizes the remote cluster license check for
use in cross-cluster replication.
Subclasses of `EsIntegTestCase` run multiple Elasticsearch nodes in the
same JVM and when we log we look at the name of the thread to figure out
the node name. This makes sure that all calls to `daemonThreadFactory`
include the node name.
Closes #32574
I'd like to follow this up with more drastic changes that make it
impossible to do this incorrectly but that change is much larger than
this and I'd like to get these log lines fixed up sooner rather than
later.
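For illustration, a minimal sketch of the idea (hypothetical helper, not the actual EsExecutors code): bake the node name into the thread-name prefix so the logging layer can recover it from the thread name.
```
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

final class NamedDaemonThreads {

    private NamedDaemonThreads() {}

    /** Daemon threads named "elasticsearch[<nodeName>][<poolName>][T#n]" so logs can recover the node name. */
    static ThreadFactory daemonThreadFactory(String nodeName, String poolName) {
        AtomicInteger counter = new AtomicInteger();
        String prefix = "elasticsearch[" + nodeName + "][" + poolName + "]";
        return runnable -> {
            Thread thread = new Thread(runnable, prefix + "[T#" + counter.incrementAndGet() + "]");
            thread.setDaemon(true);
            return thread;
        };
    }
}
```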
This commit adds documentation for configuring the Kerberos realm.
The Kerberos realm configuration documentation highlights important
terminology and requirements before creating a Kerberos realm.
Most of the documentation is centered around configuration from
the Elasticsearch side rather than going deep into the Kerberos
implementation. The Kerberos realm settings are mentioned in the
security settings for the Kerberos realm.
When a list/an array of cron expressions is provided, and one of those expressions
has already expired, the expired one was considered as an option
instead of the next valid one.
This commit also reduces the visibility of the CronnableSchedule and
refactors a comparator to use Java 8 style.
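A small sketch of the intended selection logic, assuming each expression reports -1 when it has no upcoming time (names are illustrative): expired expressions must be filtered out before taking the minimum, otherwise the -1 would win the comparison.
```
import java.util.Arrays;

final class NextScheduledTime {

    private NextScheduledTime() {}

    /**
     * Given the next fire time computed for each cron expression (with -1
     * meaning "no upcoming time"), return the earliest valid one, or -1 if
     * none of the expressions has an upcoming time. Filtering out the -1
     * values is the important part: an expired expression must never be
     * preferred over a valid future time just because -1 compares lower.
     */
    static long earliestValid(long[] nextTimesMillis) {
        return Arrays.stream(nextTimesMillis)
                .filter(t -> t >= 0)
                .min()
                .orElse(-1L);
    }

    public static void main(String[] args) {
        // one expired expression (-1) and two future times; the earliest future time wins
        System.out.println(earliestValid(new long[] { -1L, 2_000L, 1_500L })); // 1500
    }
}
```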
This is a followup to #31886. After that commit the
TransportConnectionListener had to be propagated to both the
Transport and the ConnectionManager. This commit moves that listener
to completely live in the ConnectionManager. The request and response
related methods are moved to a TransportMessageListener. That listener
continues to live in the Transport class.
* elastic/master: (46 commits)
NETWORKING: Make RemoteClusterConn. Lazy Resolve DNS (#32764)
[DOCS] Splits the users API documentation into multiple pages (#32825)
[DOCS] Splits the token APIs into separate pages (#32865)
[DOCS] Creates redirects for role management APIs page
Bypassing failing test PainlessDomainSplitIT#testHRDSplit (#32966)
TEST: Mute testRetentionPolicyChangeDuringRecovery
[DOCS] Fixes more broken links to role management APIs
[Docs] Tweaks and fixes to rollup docs
[DOCS] Fixes links to role management APIs
[ML][TEST] Fix BasicRenormalizationIT after adding multibucket feature
[DOCS] Splits the roles API documentation into multiple pages (#32794)
[TEST] Run pre 6.4 nodes in non-FIPS JVMs (#32901)
Make Geo Context Mapping Parsing More Strict (#32821)
[ML] fix updating opened jobs scheduled events (#31651) (#32881)
Scripted metric aggregations: add deprecation warning and system property to control legacy params (#31597)
Tests: Fix timezone conversion in DateTimeUnitTests
Enable FIPS140LicenseBootstrapCheck (#32903)
Fix InternalAutoDateHistogram reproducible failure (#32723)
Remove assertion in testDocStats on deletedDocs counter (#32914)
HLRC: Move ML request converters into their own class (#32906)
...
* Lazy resolve DNS (i.e. `String` to `DiscoveryNode`) to avoid running into indefinite caching of DNS lookups (provided the JVM dns cache is configured correctly as explained in https://www.elastic.co/guide/en/elasticsearch/reference/6.3/networkaddress-cache-ttl.html); see the sketch after this list
* Changed `InetAddress` type to `String` for that higher up the stack
* Passed down `Supplier<DiscoveryNode>` instead of outright `DiscoveryNode` from `RemoteClusterAware#buildRemoteClustersSeeds` on to lazy resolve DNS when the `DiscoveryNode` is actually used (could've also passed down the value of `clusterName = REMOTE_CLUSTERS_SEEDS.getNamespace(concreteSetting)` together with the `List<String>` of hosts, but this route seemed to introduce less duplication and resulted in a significantly smaller changeset).
* Closes #28858
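A rough sketch of the lazy-resolution idea (simplified; the real change builds a `DiscoveryNode` with more fields): keep the seed as a `String` and only resolve it to an address at the moment a connection is actually attempted.
```
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.function.Supplier;

final class LazySeedResolution {

    private LazySeedResolution() {}

    /**
     * Instead of resolving the seed host eagerly (and possibly caching a stale
     * address forever), hand out a Supplier that performs the DNS lookup each
     * time a connection to the remote cluster is actually attempted.
     */
    static Supplier<InetSocketAddress> lazySeed(String host, int port) {
        return () -> {
            try {
                return new InetSocketAddress(InetAddress.getByName(host), port);
            } catch (UnknownHostException e) {
                throw new IllegalArgumentException("unknown seed host [" + host + "]", e);
            }
        };
    }

    public static void main(String[] args) {
        Supplier<InetSocketAddress> seed = lazySeed("localhost", 9300);
        // resolution happens here, not at configuration time
        System.out.println(seed.get());
    }
}
```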
- Missing links to new IndexCaps API
- Incorrect security permissions on IndexCaps API
- GetJobs API must supply a job (or `_all`), omitting throws error
- Link to search/agg limitations from RollupSearch API
- Tweak URLs in quick reference
- Formatting of overview page
As the multibucket feature was merged in, this test hit a side effect
which means buckets trailing an anomaly could become anomalous.
This commit fixes the problem by filtering low score records when
we request them.
Elasticsearch versions earlier than 6.4.0 cannot properly run in a
FIPS 140 JVM. This commit ensures that we use a non-FIPS JVM for
nodes that we spin up in BWC tests even when we're testing FIPS.
* ML: fix updating opened jobs scheduled events (#31651)
* Adding UpdateParamsTests license header
* Adding integration test and addressing PR comments
* addressing test and job names
This commit removes the put privilege API in favor of having a single API to
create and update privileges. If we see the need to have an API like this in
the future we can always add it back.
The Kibana settings docs that these watches rely on can sometimes
contain no xpack settings. When this is the case, we will end up with a
null pointer exception in the script. We need to guard against this in
these scripts, so this commit does that.
This change cleans up some methods in the CharArrays class from x-pack, which
includes the unification of char[] to utf8 and utf8 to char[] conversions that
intentionally do not use strings. There was previously an implementation in
x-pack and in the reloading of secure settings. The method from the reloading
of secure settings was adopted as it handled more scenarios related to the
backing byte and char buffers that were used to perform the conversions. The
cleaned up class is moved into libs/core to allow it to be used by requests
that will be migrated to the high level rest client.
Relates #32332
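The core of such a conversion can be done with CharBuffer/ByteBuffer so that no intermediate String ever exists; a simplified sketch (not the exact CharArrays code, which handles more buffer edge cases):
```
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

final class CharArraysSketch {

    private CharArraysSketch() {}

    /** Encode a char[] as UTF-8 bytes without ever creating a String. */
    static byte[] toUtf8Bytes(char[] chars) {
        CharBuffer charBuffer = CharBuffer.wrap(chars);
        ByteBuffer byteBuffer = StandardCharsets.UTF_8.encode(charBuffer);
        byte[] bytes = Arrays.copyOfRange(byteBuffer.array(), byteBuffer.position(), byteBuffer.limit());
        Arrays.fill(byteBuffer.array(), (byte) 0); // wipe the backing buffer so the secret does not linger
        return bytes;
    }

    /** Decode UTF-8 bytes back to a char[] without ever creating a String. */
    static char[] utf8BytesToChars(byte[] bytes) {
        ByteBuffer byteBuffer = ByteBuffer.wrap(bytes);
        CharBuffer charBuffer = StandardCharsets.UTF_8.decode(byteBuffer);
        char[] chars = Arrays.copyOfRange(charBuffer.array(), charBuffer.position(), charBuffer.limit());
        Arrays.fill(charBuffer.array(), (char) 0); // wipe the backing buffer
        return chars;
    }
}
```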
* master:
Fix global checkpoint listeners test
HLRC: adding machine learning open job (#32860)
[ML] Add log structure finder functionality (#32788)
INGEST: Add Configuration Except. Data to Metdata (#32322)
This change adds a library to ML that can be used to deduce a log
file's structure given only a sample of the log file.
Eventually this will be used to add an endpoint to ML to make the
functionality available to end users, but this will follow in a
separate change.
The functionality is split into a library so that it can also be
used by a command line tool without requiring the command line
tool to include all server code.
This removes custom Response classes that extend `AcknowledgedResponse` and do nothing, these classes are not needed and we can directly use the non-abstract super-class instead.
While this appears to be a large PR, no code has actually changed, only class names have been changed and entire classes removed.
[ML] Removing old per-partition normalization code
Per-partition normalization is an old, undocumented feature that was
never used by clients. It has been superseded by per-partition maximum
scoring.
To maintain communication compatibility with nodes prior to 6.5 it is
necessary to maintain/cope with the old wire format
This change removes the PasswordHashingBootstrapCheck and replaces it
with validation on the setting itself. This ensures we always get a
valid value from the setting when it is used.
This change moves the validation for values of usernames and passwords
from the request to the transport action. This is done to prevent
the need to move more classes into protocol once we add this API to the
high level rest client. Additionally, this resolves an issue where
validation depends on settings and we always pass empty settings
instead of the actual settings.
Relates #32332
The HipChatMessage#render is no longer used, and instead the
HipChatAccount#render is used in the ExecutableHipChatAction. Only a
test that validated the HttpProxy still used this render method. This
commit cleans it up.
The auth.basic package was an example of a single implementation
interface that leaked into many different classes. In order to clean
this up, the HttpAuth interface, factories, and Registries all were
removed and the single implementation, BasicAuth, was substituted in all
cases. This removes some dependencies between Auth and the Templates,
which can now use static methods on BasicAuth. BasicAuth was also moved
into the http package and all of the other classes were removed.
The PagerDuty v1 API is EOL and will stop accepting new accounts
shortly. This commit swaps out the watcher use of the v1 API with the
new v2 API. It does not change anything about the existing watcher
API.
Closes #32243
All Unit tests in this module are muted in FIPS 140 JVMs and
as such the CI run fails. This commit disables test task for the
module in a FIPS JVM and reverts adding a dummy test in
4cbcc1.
We only upgrade the ID when the state is saved in one of four scenarios:
- when we reach a checkpoint (every 50 pages)
- when we run out of data
- when explicitly stopped
- on failure
The test was relying on the pre-upgrade to finish, save state and then
the post-upgrade to start, hit the end of data and upgrade ID. THEN
get the new doc and apply the new ID.
But I think this is vulnerable to timing issues. If the pre-upgrade
portion shut down before it saved the state, when restarting we would run
through all the data from the beginning with the old ID, meaning both
docs would still have the old scheme.
This change makes the pre-upgrade wait for the job to go back to STARTED
so that we know it persisted the end point. Post-upgrade, it stops and
restarts the job to ensure the state was persisted and the ID upgraded.
That _should_ rule out the above timing issue.
Closes #32773
Added infrastructure to push through the 'person name field value' to
the normalizer process. This is required by the normalizer to retrieve
the maximum scores for individual partitions.
The request and response classes have been extracted from `IndexUpgradeInfoAction` into top-level classes, and moved to the protocol jar. The `UpgradeActionRequired` enum is also moved.
Relates to #29827
This commit adds missing debug log statements for exceptions
that occur during ticket validation. I thought these
got logged somewhere else in the authentication chain,
but even after enabling trace logs I could not see them
logged. As the Kerberos exception messages are cryptic,
adding the full stack trace would help debugging faster.
* Clear Job#finished_time when it is opened (#32605)
* not returning failure when Job#finished_time is not reset
* Changing error log string and source string
Our rest testing framework has support for sniffing the host metadata on
startup and, before this change, it'd sniff that metadata before running
the first test. This prevents running these tests against
elasticsearch installations that won't support sniffing like Elastic
Cloud. This change allows tests to only sniff for metadata when they
encounter a test with a `node_selector`. These selectors are the things
that need the metadata anyway and they are super rare. Tests that use
these won't be able to run against installations that don't support
sniffing but we can just skip them. In the case of Elastic Cloud, these
tests were never going to work against Elastic Cloud anyway.
The upcoming ML log structure finder functionality will use these
libraries, and it makes sense to use the same versions that are
being used elsewhere in Elasticsearch. This is especially true
with icu4j, which is pretty big.
This commit modifies the test to handle file permission
tests in windows/dos environments. The test requires access
to UserPrincipal and so we have modified the plugin-security policy
to access user information.
Closes #32637
The incorrect NodeInfo is created when the optional parameter is not used, leading to the incorrect constructor being used. Simplified LocateFunctionProcessorDefinition by using one constructor instead of two.
Fixes https://github.com/elastic/elasticsearch/issues/32554
Skip the comparative tests using lowercasing/uppercasing against H2 (which considers the Locale).
ES-SQL is, so far, ignoring the Locale.
Still, the same queries are executed against ES-SQL alone and results asserted to be correct.
We previously discussed moving the classes extending `AcknowledgedResponse` to
simply use `AcknowledgedResponse`, making the class non-abstract.
This moves the first class to do this, removing `WritePipelineResponse` in the
process.
If we like the way this looks, I will switch the remaining classes over to using
`AcknowledgedResponse`.
* master:
Cross-cluster search: preserve cluster alias in shard failures (#32608)
Handle AlreadyClosedException when bumping primary term
[TEST] Allow to run in FIPS JVM (#32607)
[Test] Add ckb to the list of unsupported languages (#32611)
SCRIPTING: Move Aggregation Scripts to their own context (#32068)
Painless: Use LocalMethod Map For Lookup at Runtime (#32599)
[TEST] Enhance failure message when bulk updates have failures
[ML] Add ML result classes to protocol library (#32587)
Suppress LicensingDocumentationIT.testPutLicense in release builds (#32613)
[Rollup] Update wire version check after backport
Suppress Wildfly test in FIPS JVMs (#32543)
[Rollup] Improve ID scheme for rollup documents (#32558)
ingest: doc: move Dot Expander Processor doc to correct position (#31743)
[ML] Add some ML config classes to protocol library (#32502)
[TEST]Split transport verification mode none tests (#32488)
Core: Move helper date formatters over to java time (#32504)
[Rollup] Remove builders from DateHistogramGroupConfig (#32555)
[TEST] unmutes SearchAsyncActionTests and adds debugging info
[ML] Add Detector config classes to protocol library (#32495)
[Rollup] Remove builders from MetricConfig (#32536)
Tests: Add rolling upgrade tests for watcher (#32428)
Fix race between replica reset and primary promotion (#32442)
Rest HL client: Add get license action
Continues to use String instead of a more complex License class to
hold the license text similarly to put license.
Relates #29827
The Apache HttpComponents support for the SPNEGO scheme
uses the canonical host name by default.
Also, when resolving the host name on CentOS there are by default
other aliases, so we add them to the DelegationPermission.
Closes #32498
If a leader index is deleted while there is an active follower, the
follower will send shard changes requests bound for the leader
index. Today this will result in a null pointer exception because there
will not be an index routing table for the index. A null pointer
exception looks like a bug to a user so this commit addresses this by
throwing an index not found exception instead.
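In essence the fix is a null check before dereferencing the routing table entry; a hedged, stand-alone sketch of that shape (plain-Java stand-ins, not the actual ShardChanges code):
```
import java.util.Map;

final class LeaderIndexLookup {

    private LeaderIndexLookup() {}

    /** Stand-in for an "index not found" failure that is meaningful to users. */
    static class IndexNotFoundException extends RuntimeException {
        IndexNotFoundException(String index) {
            super("no such index [" + index + "]");
        }
    }

    /**
     * Shape of the fix: look up the leader index in the routing table and fail
     * with an explicit "index not found" error if it has been deleted, instead
     * of letting a null dereference surface as a confusing NPE.
     */
    static <T> T routingTableFor(Map<String, T> routingTable, String leaderIndex) {
        T entry = routingTable.get(leaderIndex);
        if (entry == null) {
            throw new IndexNotFoundException(leaderIndex);
        }
        return entry;
    }
}
```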
* Change SecurityNioHttpServerTransportTests to use PEM key and
certificate files instead of a JKS keystore so that this test
can also run in a FIPS 140 JVM
* Do not attempt to run cases with ssl.verification_mode NONE in
SessionFactoryTests so that the tests can run in a FIPS 140 JVM
This commit addresses a race that can happen in the basic CCR stats REST
tests. Namely, peek reads can fire before the REST test client fires the
stats request. This means that we have to weaken our assertions about
the expected stats response.
This commit adds the ML results classes to the X-Pack protocol
library used by the high level REST client.
(Other commits will add the config classes and stats classes.)
These classes:
- Are publicly immutable
- Are privately mutable - this is perhaps not as nice as the
config classes, but to do otherwise would require adding
builders, and the corresponding server-side classes that the
old transport client used don't have builders
- Have little/no validation of field values beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names and getter names as the corresponding
classes in X-Pack core to ease migration for transport client
users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
Bumping down the version to 6.4 since the backport is complete. Also
adds some missing version checks to the bwc tests to make sure it
only runs on the correct versions
Previously, we were using a simple CRC32 for the IDs of rollup documents.
This is a very poor choice however, since 32bit IDs lead to collisions
between documents very quickly.
This commit moves Rollups over to a 128bit ID. The ID is a concatenation
of all the keys in the document (similar to the rolling CRC before),
hashed with 128bit Murmur3, then base64 encoded. Finally, the job
ID and a delimiter (`$`) are prepended to the ID.
This guarantees that there are 128bits per-job. 128bits should
essentially remove all chances of collisions, and the prepended
job ID means that _if_ there is a collision, it stays "within"
the job.
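A sketch of the described ID construction (illustrative only; the real code uses 128bit Murmur3, which this sketch approximates by truncating a SHA-256 digest to 16 bytes so it stays dependency-free): concatenate the document's keys, hash, base64 encode, and prepend the job ID plus the `$` delimiter.
```
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

final class RollupIdSketch {

    private RollupIdSketch() {}

    /**
     * Build a rollup document ID of the form "<jobId>$<base64(128-bit hash of keys)>".
     * The real implementation hashes with 128-bit Murmur3; here we truncate a
     * SHA-256 digest to 16 bytes purely to keep the sketch dependency-free.
     */
    static String documentId(String jobId, Map<String, Object> keys) {
        StringBuilder concatenated = new StringBuilder();
        for (Map.Entry<String, Object> key : new TreeMap<>(keys).entrySet()) {
            concatenated.append(key.getKey()).append('=').append(key.getValue()).append(';');
        }
        byte[] hash128;
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(concatenated.toString().getBytes(StandardCharsets.UTF_8));
            hash128 = Arrays.copyOf(digest, 16); // keep 128 bits
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e);
        }
        return jobId + "$" + Base64.getUrlEncoder().withoutPadding().encodeToString(hash128);
    }

    public static void main(String[] args) {
        System.out.println(documentId("my_rollup_job", Map.of("timestamp", 1534204800000L, "host", "web-01")));
    }
}
```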
BWC notes:
We can only upgrade the ID scheme after we know there has been a good
checkpoint during indexing. We don't rely on a STARTED/STOPPED
status since we can't guarantee that resulted from a real checkpoint,
or other state. So we only upgrade the ID after we have reached
a checkpoint state during an active index run, and only after the
checkpoint has been confirmed.
Once a job has been upgraded and checkpointed, the version increments
and the new ID is used in the future. All new jobs use the
new ID from the start.
This commit adds four ML config classes to the X-Pack protocol
library used by the high level REST client.
(Other commits will add the remaining config classes, plus results
and stats classes.)
These classes:
- Are immutable
- Have little/no validation of field values beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names, member names and getter/setter names
as the corresponding classes in X-Pack core to ease migration
for transport client users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
This commit splits SecurityNetty4TransportTests into two methods,
one handling the verification modes certificate and full, and one
handling verification mode none. This is done so that the second
method can be muted in a FIPS 140 JVM where verification mode none
cannot be used.
Same motivation as #32507 but for the DateHistogramGroupConfig
configuration object. This pull request also changes the format of the
time zone from a Joda's DateTimeZone to a simple String.
It should help to port the API to the high level rest client and allows
clients to not be forced to use the Joda Time library. Serialization is
impacted but does not need a backward compatibility layer as
DateTimeZone is serialized as a String anyway. XContent also expects
a String for timezone, so I found it easier to move everything to String.
Related to #29827
This commit adds the Detector class and its dependencies to the
X-Pack protocol library used by the high level REST client.
(Future commits will add the remaining config classes, plus results
and stats classes.)
These classes:
- Are immutable, with builders, but the builders do no validation
beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names, member names and getter/setter names
as the corresponding classes in X-Pack core to ease migration
for transport client users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
These tests ensure that the basic watch APIs are tested in the rolling
upgrade tests. After initially adding a watch, the tests try to get,
execute, deactivate and activate a watch. Watcher stats are tested as
well, and a dedicated java based test has been added for restarting, as that
requires waiting for a state change. Watcher history is also checked.
Closes #31216
* master:
HLRC: Move commercial clients from XPackClient (#32596)
Add cluster UUID to Cluster Stats API response (#32206)
Security: move User to protocol project (#32367)
[TEST] Test for shard failures, add debug to testProfileMatchesRegular
Minor fix for javadoc (applicable for java 11). (#32573)
Painless: Move Some Lookup Logic to PainlessLookup (#32565)
TEST: Avoid merges in testSeqNoAndCheckpoints
[Rollup] Remove builders from HistoGroupConfig (#32533)
Mutes failing SQL string function tests due to #32589
fixed elements in array of produced terms (#32519)
INGEST: Enable default pipelines (#32286)
Remove cluster state initial customs (#32501)
Mutes LicensingDocumentationIT due to #32580
[ML] Remove multiple_bucket_spans (#32496)
[ML] Rename JobProvider to JobResultsProvider (#32551)
Correct minor typo in explain.asciidoc for HLRC
Build: Add elastic maven to repos used by BuildPlugin (#32549)
Clarify the error message when a pipeline agg is used in the 'order' parameter. (#32522)
Revert "[test] turn on host io cache for opensuse (#32053)"
Enable packaging tests on suse boxes
[ML] Improve error when no available field exists for rule scope (#32550)
[ML] Improve error for functions with limited rule condition support (#32548)
Painless: Clean Up PainlessField (#32525)
Add @AwaitsFix for #32554
Remove broken @link in Javadoc
Scripting: Conditionally use java time api in scripting (#31441)
[ML] Fix thread leak when waiting for job flush (#32196) (#32541)
Add AwaitsFix to failing test - see #32546
Core: Minor size reduction for AbstractComponent (#32509)
SQL: Added support for string manipulating functions with more than one parameter (#32356)
[DOCS] Reloadable Secure Settings (#31713)
Watcher: Reenable HttpSecretsIntegrationTests#testWebhookAction test (#32456)
[Rollup] Remove builders from TermsGroupConfig (#32507)
Use hostname instead of IP with SPNEGO test (#32514)
Switch x-pack rolling restart to new style Requests (#32339)
NETWORKING: Fix Netty Leaks by upgrading to 4.1.28 (#32511)
[DOCS] Small fixes in rule configuration page (#32516)
Painless: Clean up PainlessMethod (#32476)
Build: Remove shadowing from benchmarks (#32475)
Docs: Add all JDKs to CONTRIBUTING.md
Add licensing enforcement for FIPS mode (#32437)
SQL: Add test for handling of partial results (#32474)
Mute testFilterCacheStats
[ML][DOCS] Fix typo applied_to => applies_to
Scripting: Fix painless compiler loader to know about context classes (#32385)
For a new feature like CCR we will go without this extra layer of
indirection. This commit replaces all /_xpack/ccr/_(\S+) endpoints by
/_ccr/$1 endpoints.
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
The User class has been moved to the protocol project for upcoming work
to add more security APIs to the high level rest client. As part of
this change, the toString method no longer uses a custom output method
from MetadataUtils and instead just relies on Java's toString
implementation.
This commit removes the never released multiple_bucket_spans
configuration parameter. This is now replaced with the new
multibucket feature that requires no configuration.
Added support for string manipulating functions with more than one parameter:
CONCAT, LEFT, RIGHT, REPEAT, POSITION, LOCATE, REPLACE, SUBSTRING, INSERT
The error message mentioned in #30094 does not point to a cause in the
test itself, as there are still in-flight requests according to the
circuit breaker.
I ran this test class 100k times on bare metal and could not reproduce
it. I will reenable the test for now.
Closes #30094
While working on adding the Create Rollup Job API to the
high level REST client (#29827), I noticed that the configuration
objects like TermsGroupConfig rely on the Builder pattern in
order to create or parse instances. These builders are doing
some validation but the same validation could be done within
the constructor itself or on the server side when appropriate.
This commit removes the builder for TermsGroupConfig,
removes some other methods that I consider not really useful
once the TermsGroupConfig object is exposed in the
high level REST client. It also simplifies the parsing logic.
Related to #29827
This change updates KerberosAuthenticationIT to resolve the host used
to connect to the test cluster. This is needed because the host could
be an IP address but SPNEGO requires a hostname to work properly. This
is done by adding a hook in ESRestTestCase for building the HttpHost
from the host and port.
Additionally, the project now specifies the IPv4 loopback address as
the http host. This is done because we need to be able to resolve the
address used for the HTTP transport before the node starts up, but the
http.ports file is not written until the node is started.
Closes #32498
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:rolling-upgrade*` projects to use
the new versions.
* Upgrade to `4.1.28` since the problem reported in #32487 is a bug in Netty itself (see https://github.com/netty/netty/issues/7337)
* Fixed other leaks in test code that now showed up due to improvements in leak reporting in the newer version
* Needed to extend permissions for netty common package because it now sets a classloader at runtime after changes in 63bae0956a
* Adjusted forbidden APIs check accordingly
* Closes #32487
This commit adds licensing enforcement for FIPS mode through the use of
a bootstrap check, a node join validator, and a check in the license
service. The work done here is based on the current implementation of
the TLS enforcement with a production license.
The bootstrap check is always enforced since we need to enforce the
licensing and this is the best option to do so at the present time.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
This commit adds an assumption to two test methods in
SSLTrustRestrictionsTests that we are not on JDK 11 as the tests
currently fail there.
Relates #29989
This commit removes Kerberos bootstrap checks as they were more
validation checks and better done in Kerberos realm constructor
than as bootstrap checks. This also moves the check
for one Kerberos realm per node to where we initialize realms.
This commit also adds a few validations which were missing earlier,
such as throwing an exception with an error message when the keytab
file is missing read permissions or is a directory.
Today ShardFollowNodeTask might fetch some operations more than once.
This happens because we ask the leader for up to max_batch_count
operations (instead of the left-over size) for the left-over request.
The leader can then freely respond with up to max_batch_count operations, and at
the same time, if one of the previous requests completed, we might issue
another read request whose range overlaps with the response of the
left-over request.
Closes #32453
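The essence of the fix is to size the left-over request by what actually remains rather than by max_batch_count; a tiny sketch with illustrative names:
```
final class LeftOverRequestSize {

    private LeftOverRequestSize() {}

    /**
     * Number of operations to ask the leader for in the left-over request:
     * only what remains up to maxRequiredSeqNo, never a full max_batch_count,
     * so a generous response cannot overlap with a concurrently issued read.
     */
    static long leftOverSize(long fromSeqNo, long maxRequiredSeqNo, long maxBatchOperationCount) {
        long remaining = maxRequiredSeqNo - fromSeqNo + 1; // ranges are inclusive on both ends
        return Math.min(remaining, maxBatchOperationCount);
    }

    public static void main(String[] args) {
        // only 3 operations remain, so we request 3 even though max_batch_count is 1024
        System.out.println(leftOverSize(98, 100, 1024)); // 3
    }
}
```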
The default behaviour for "GetPrivileges" is to get all application
privileges. This should only be allowed if the user has access to
the "*" application.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin/security` project to use the new
versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/security-example-spi-extension`
project to use the new versions.
* master:
Tests: Fix convert error tests to use fixed value (#32415)
IndicesClusterStateService should replace an init. replica with an init. primary with the same aId (#32374)
REST high-level client: parse back _ignored meta field (#32362)
[CI] Mute DocumentSubsetReaderTests testSearch
Today we do not check if the `following_index` setting of the follower
is enabled or not when processing a follow-request. If that setting is
disabled, the follower will use the default engine, not the following
engine. This change checks and rejects such invalid follow requests.
Relates #30086
* master:
Remove reference to non-existent store type (#32418)
[TEST] Mute failing FlushIT test
Fix ordering of bootstrap checks in docs (#32417)
[TEST] Mute failing InternalEngineTests#testSeqNoAndCheckpoints
[TEST] Mute failing testConvertLongHexError
bump lucene version after backport
Upgrade to Lucene-7.5.0-snapshot-608f0277b0 (#32390)
[Kerberos] Avoid vagrant update on precommit (#32416)
TESTS: Move netty leak detection to paranoid level (#32354)
[DOCS] Fixes formatting of scope object in job resource
Copy missing segment attributes in getSegmentInfo (#32396)
AbstractQueryTestCase should run without type less often (#28936)
INGEST: Fix Deprecation Warning in Script Proc. (#32407)
Switch x-pack/plugin to new style Requests (#32327)
Docs: Correcting a typo in tophits (#32359)
Build: Stop double generating buildSrc pom (#32408)
TEST: Avoid triggering merges in FlushIT
Fix missing JavaDoc for @throws in several places in KerberosTicketValidator.
Switch x-pack full restart to new style Requests (#32294)
Release requests in cors handler (#32364)
Painless: Clean Up PainlessClass Variables (#32380)
Docs: Fix callouts in put license HL REST docs (#32363)
[ML] Consistent pattern for strict/lenient parser names (#32399)
Update update-settings.asciidoc (#31378)
Remove some dead code (#31993)
Introduce index store plugins (#32375)
Rank-Eval: Reduce scope of an unchecked supression
Make sure _forcemerge respects `max_num_segments`. (#32291)
TESTS: Fix Buf Leaks in HttpReadWriteHandlerTests (#32377)
Only enforce password hashing check if FIPS enabled (#32383)
Today it's possible to encounter an Index operation in Lucene whose
_source is disabled, and _recovery_source was pruned by the MergePolicy.
If that's the case, we create a Translog#Index without source and let the
caller validate it later. However, this approach is challenging for the
caller.
Deletes and No-Ops don't allow invoking the "source()" method. The caller
has to make sure to call "source()" only on index operations. The
current implementation in CCR does not follow this and fails to replicate
deletes or no-ops. Moreover, it's easier to reason about if a Translog#Index
always has the source.
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The es setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and the value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow up.
This commit avoids a compile-time dependency on the copied keytab
being present in the generated sources, so pre-commit does not
stall on updating the vagrant box.
Closes #32387
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin` project to use the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:full-cluster-restart` project to use
the new versions.
Previously we had two patterns for naming of strict
and lenient parsers.
Some classes had CONFIG_PARSER and METADATA_PARSER,
and used an enum to pass the parser type to nested
parsers.
Other classes had STRICT_PARSER and LENIENT_PARSER
and used ternary operators to pass the parser type
to nested parsers.
This change makes all ML classes use the second of
the patterns described above.
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.
Today we allow plugins to add index store implementations yet we are not
doing this in our new way of managing plugins as pull versus push. That
is, today we still allow plugins to push index store providers via an on
index module call where they can turn around and add an index
store. Aside from being inconsistent with how we manage plugins today
where we would look to pull such implementations from plugins at node
creation time, it also means that we do not know at a top-level (for
example, in the indices service) which index stores are available. This
commit addresses this by adding a dedicated plugin type for index store
plugins, removing the index module hook for adding index stores, and by
aggregating these into the top-level of the indices service.
* master:
[DOCS] Fix formatting error in Slack action
Painless: Fix documentation links to use existing refs (#32335)
Painless: Decouple PainlessLookupBuilder and Whitelists (#32346)
[DOCS] Adds recommendation for xpack.security.enabled (#32345)
[TEST] Mute ConvertProcessortTests.testConvertIntHexError
[TEST] Fix failure due to exception message in java11 (#32321)
[DOCS] Fixes typo in ML aggregations page
[DOCS] Adds link from bucket_span property to common time units
[ML][DOCS] Add documentation for detector rules and filters (#32013)
Add opaque_id to index audit logging (#32260)
Add 6.5.0 version to master
fixes broken build for third-party-tests (#32353)
Java 11 uses more verbose exception messages, causing this assertion
to fail. Changed the test to be less restrictive and only look
for the classes we care about.
* master:
Security: revert to old way of merging automata (#32254)
Networking: Fix test leaking buffer (#32296)
Undo a debugging change that snuck in during the field aliases merge.
Painless: Update More Methods to New Naming Scheme (#32305)
[TEST] Fix assumeFalse -> assumeTrue in SSLReloadIntegTests
Ingest: Support integer and long hex values in convert (#32213)
Introduce fips_mode setting and associated checks (#32326)
Add V_6_3_3 version constant
[DOCS] Removed extraneous callout number.
Rest HL client: Add put license action (#32214)
Add ERR to ranking evaluation documentation (#32314)
Introduce Application Privileges with support for Kibana RBAC (#32309)
Build: Shadow x-pack:protocol into x-pack:plugin:core (#32240)
[Kerberos] Add Kerberos authentication support (#32263)
[ML] Extract persistent task methods from MlMetadata (#32319)
Add Restore Snapshot High Level REST API
Register ERR metric with NamedXContentRegistry (#32320)
fixes broken build for third-party-tests (#32315)
Allow Integ Tests to run in a FIPS-140 JVM (#31989)
[DOCS] Rollup Caps API incorrectly mentions GET Jobs API (#32280)
awaitsfix testRandomClusterStateUpdates
[TEST] add version skip to weighted_avg tests
Consistent encoder names (#29492)
Add WeightedAvg metric aggregation (#31037)
Switch monitoring to new style Requests (#32255)
Rename ranking evaluation `quality_level` to `metric_score` (#32168)
Fix a test bug around nested aggregations and field aliases. (#32287)
Add new permission for JDK11 to load JAAS libraries (#32132)
Silence SSL reload test that fails on JDK 11
[test] package pre-install java check (#32259)
specify subdirs of lib, bin, modules in package (#32253)
Switch x-pack:core to new style Requests (#32252)
awaitsfix SSLConfigurationReloaderTests
Painless: Clean up add methods in PainlessLookup (#32258)
Fail shard if IndexShard#storeStats runs into an IOException (#32241)
AwaitsFix RecoveryIT#testHistoryUUIDIsGenerated
Remove unnecessary warning supressions (#32250)
CCE when re-throwing "shard not available" exception in TransportShardMultiGetAction (#32185)
Add new fields to monitoring template for Beats state (#32085)
This commit reverts to the pre-6.3 way of merging automata as the
change in 6.3 significantly impacts the performance for roles with a
large number of concrete indices. In addition, the maximum number of
states for security automata has been increased to 100,000 in order
to allow users to use roles that caused problems pre-6.3 and that 6.3 fixed.
As an escape hatch, the maximum number of states is configurable with
a setting so that users with complex patterns in roles can increase
the states with the knowledge that there is more memory usage.
* Introduce fips_mode setting and associated checks
Introduce xpack.security.fips_mode.enabled setting (default false)
When it is set to true, a number of Bootstrap checks are performed:
- Check that Secure Settings are of the latest version (3)
- Check that no JKS keystores are configured
- Check that compliant algorithms (PBKDF2 family) are used for
password hashing
In the HL REST client we replace the License object with a string, because of
complexity of this class. It is also not really needed on the client side since
end-users are not interacting with the license besides passing it as a string
to the server.
Relates #29827
This commit introduces "Application Privileges" to the X-Pack security
model.
Application Privileges are managed within Elasticsearch, and can be
tested with the _has_privileges API, but do not grant access to any
actions or resources within Elasticsearch. Their purpose is to allow
applications outside of Elasticsearch to represent and store their own
privileges model within Elasticsearch roles.
Access to manage application privileges is handled in a new way that
grants permission to specific application names only. This lays the
foundation for more OLS on cluster privileges, which is implemented by
allowing a cluster permission to inspect not just the action being
executed, but also the request to which the action is applied.
To support this, a "conditional cluster privilege" is introduced, which
is like the existing cluster privilege, except that it has a Predicate
over the request as well as over the action name.
Specifically, this adds
- GET/PUT/DELETE actions for defining application level privileges
- application privileges in role definitions
- application privileges in the has_privileges API
- changes to the cluster permission class to support checking of request
objects
- a new "global" element on role definition to provide cluster object
level security (only for manage application privileges)
- changes to `kibana_user`, `kibana_dashboard_only_user` and
`kibana_system` roles to use and manage application privileges
Closes #29820
Closes #31559
This bundles the x-pack:protocol project into the x-pack:plugin:core
project because we'd like folks to consider it an implementation detail
of our build rather than a separate artifact to be managed and depended
on. It is now bundled into both x-pack:plugin:core and
client:rest-high-level. To make this work I had to fix a few things.
Firstly, I had to make PluginBuildPlugin work with the shadow plugin.
In that case we have to bundle only the `shadow` dependencies and the
shadow jar.
Secondly, every reference to x-pack:plugin:core has to use the `shadow`
configuration. Without that the reference is missing all of the
un-shadowed dependencies. I tried to make it so that applying the shadow
plugin automatically redefines the `default` configuration to mirror the
`shadow` configuration which would allow us to use bare project references
to the x-pack:plugin:core project but I couldn't make it work. It'd *look*
like it works but then fail for transitive dependencies anyway. I think
it is still a good thing to do but I don't have the willpower to do it
now.
Finally, I had to fix an issue where Eclipse and IntelliJ didn't properly
reference shadowed transitive dependencies. Neither IDE supports shadowing
natively so they have to reference the shadowed projects. We fix this by
detecting `shadow` dependencies when in "Intellij mode" or "Eclipse mode"
and adding `runtime` dependencies to the same target. This convinces
IntelliJ and Eclipse to play nice.
This commit adds support for Kerberos authentication with a platinum
license. Kerberos authentication support relies on SPNEGO, which is
triggered by challenging clients with a 401 response with the
`WWW-Authenticate: Negotiate` header. A SPNEGO client will then provide
a Kerberos ticket in the `Authorization` header. The tickets are
validated using Java's built-in GSS support. The JVM uses a vm wide
configuration for Kerberos, so there can be only one Kerberos realm.
This is enforced by a bootstrap check that also enforces the existence
of the keytab file.
In many cases a fallback authentication mechanism is needed when SPNEGO
authentication is not available. In order to support this, the
DefaultAuthenticationFailureHandler now takes a list of failure response
headers. For example, one realm can provide a
`WWW-Authenticate: Negotiate` header as its default and another could
provide `WWW-Authenticate: Basic` to indicate to the client that basic
authentication can be used in place of SPNEGO.
In order to test Kerberos, unit tests are run against an in-memory KDC
that is backed by an in-memory ldap server. A QA project has also been
added to test against an actual KDC, which is provided by the krb5kdc
fixture.
Closes #30243
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is the `∑(value * weight) / ∑(weight)`
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
Closes #15731
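The formula in plain code, as a quick sanity check of what the aggregation computes (a self-contained sketch, not the aggregator implementation):
```
final class WeightedAvgSketch {

    private WeightedAvgSketch() {}

    /** ∑(value * weight) / ∑(weight); with all weights equal to 1 this degenerates to a plain average. */
    static double weightedAverage(double[] values, double[] weights) {
        double weightedSum = 0;
        double weightSum = 0;
        for (int i = 0; i < values.length; i++) {
            weightedSum += values[i] * weights[i];
            weightSum += weights[i];
        }
        return weightedSum / weightSum;
    }

    public static void main(String[] args) {
        // a value of 70 that counts twice as much as a value of 100
        System.out.println(weightedAverage(new double[] { 70, 100 }, new double[] { 2, 1 })); // 80.0
    }
}
```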
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin/monitoring` project to use the new
versions.
The notion of "quality" is an overloaded term in the search ranking evaluation
context. Its usually used to decribe certain levels of "good" vs. "bad" of a
seach result with respect to the users information need. We currently report the
result of the ranking evaluation as `quality_level` which is a bit missleading.
This changes the response parameter name to `metric_score` which fits better.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:core` project to use the new versions.
New data is reported from Beats to the monitoring endpoint. This PR adds the template change necessary for it. See https://github.com/elastic/beats/issues/7521 for more details.
Queue data is skipped for now, as the implementation is not finished yet.
Today we consider a read request exhausted if from_seqno is equal to
or greater than max_required_seqno. However, if we stop when
from_seqno equals max_required_seqno, we will miss the operation
whose seqno is max_required_seqno because we have not seen that
operation yet.
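A hedged sketch of the boundary in question (variable and method names are illustrative, not the actual shard-changes code):

```java
class ReadExhaustionSketch {
    static boolean readExhausted(long fromSeqNo, long maxRequiredSeqNo) {
        // Using ">=" here would stop before reading the operation whose seqno is
        // exactly maxRequiredSeqNo; ">" makes sure that operation is still fetched.
        return fromSeqNo > maxRequiredSeqNo;
    }
}
```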
* es/master: (23 commits)
Switch full-cluster-restart to new style Requests (#32140)
[DOCS] Clarified that you must remove X-Pack plugin when upgrading from pre-6.3. (#32016)
Remove BouncyCastle dependency from runtime (#32193)
INGEST: Extend KV Processor (#31789) (#32232)
INGEST: Make a few Processors callable by Painless (#32170)
Add region ISO code to GeoIP Ingest plugin (#31669)
[Tests] Remove QueryStringQueryBuilderTests#toQuery class assertions (#32236)
Make sure that field aliases count towards the total fields limit. (#32222)
Switch rolling restart to new style Requests (#32147)
muting failing test for internal auto date histogram to avoid failure before fix is merged
MINOR: Remove unused `IndexDynamicSettings` (#32237)
Fix multi level nested sort (#32204)
Enhance Parent circuit breaker error message (#32056)
[ML] Use default request durability for .ml-state index (#32233)
Remove indices stats timeout from monitoring docs
Rename ranking evaluation response section (#32166)
Dependencies: Upgrade to joda time 2.10 (#32160)
Remove aliases resolution limitations when security is enabled (#31952)
Ensure that field aliases cannot be used in multi-fields. (#32219)
TESTS: Check for Netty resource leaks (#31861)
...
We modified the way we calculate to_seqno in #32121 but did not adjust
this test accordingly. If min_seqno equals max_seqno, the size should be
one instead of zero.
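A hedged arithmetic illustration of the inclusive range size described above:

```java
class RangeSizeSketch {
    static long rangeSize(long minSeqNo, long maxSeqNo) {
        return maxSeqNo - minSeqNo + 1;   // rangeSize(7, 7) == 1, not 0
    }
}
```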
Relates #32121
* Remove BouncyCastle dependency from runtime
This commit introduces a new Gradle project that contains
the classes that have a dependency on BouncyCastle. For
the default distribution, it builds a jar from those classes
and puts it in a subdirectory of lib
(/tools/security-cli) along with the BouncyCastle jars.
This directory is then passed in the
ES_ADDITIONAL_CLASSPATH_DIRECTORIES of the CLI tools
that use these classes.
BouncyCastle is removed as a runtime dependency (remains
as a compileOnly one) from x-pack core and x-pack security.
The initial decision to use async durability was made a long time ago
for performance reasons. That argument no longer applies and we
prefer the safety of request durability.
Normally translog operations are not replayed on the primary. The
following engine is an exception: it replays the translog on both the
primary and the replica using a non-primary strategy. Even though we won't
use the version_type in the following engine, we still need to pass a valid
value for primary operations so that we do not trip assertions in the
engine.
This commit passes version_type EXTERNAL for a translog operation if its
origin is primary.
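A hedged sketch of that rule (the enum and method names are illustrative, not the actual engine internals):

```java
class TranslogReplaySketch {
    enum Origin { PRIMARY, REPLICA, LOCAL_TRANSLOG_RECOVERY }
    enum VersionType { INTERNAL, EXTERNAL }

    // Primary operations need a valid version type so that engine assertions are
    // not tripped; other origins keep whatever version type they already carry.
    static VersionType versionTypeForTranslogReplay(Origin origin, VersionType existing) {
        return origin == Origin.PRIMARY ? VersionType.EXTERNAL : existing;
    }
}
```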
Relates #31945
The added tests are based on specific scenarios as described in the test plan.
Before this change the ShardFollowNodeTaskTests contained more random-style tests,
but these have been removed; in a follow-up PR better randomized tests will
be added in a new test class, as described in the test plan.
Resolving wildcards in an aliases expression is challenging, as we may end
up with no aliases to replace the original expression with, but if we
replace it with an empty array that means _all, which is quite the opposite.
Now that we support and serialize the originally requested aliases,
whenever aliases are replaced we are able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything when it gets empty aliases even though the original aliases
were not empty. That means that empty aliases are interpreted as _all
only if they were originally requested that way.
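A hedged sketch of that interpretation (names are illustrative, not the actual `MetaData#findAliases` code):

```java
class AliasResolutionSketch {
    // Empty aliases mean "_all" only when the request originally asked for no
    // specific aliases; if wildcards resolved to nothing, match nothing.
    static boolean treatEmptyAliasesAsMatchAll(String[] originalAliases, String[] resolvedAliases) {
        return resolvedAliases.length == 0 && originalAliases.length == 0;
    }
}
```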
Relates to #31516
* master:
Painless: Simplify Naming in Lookup Package (#32177)
Handle missing values in painless (#32207)
add support for write index resolution when creating/updating documents (#31520)
ECS Task IAM profile credentials ignored in repository-s3 plugin (#31864)
Remove indication of future multi-homing support (#32187)
Rest test - allow for snapshots to take 0 milliseconds
Make x-pack-core generate a pom file
Rest HL client: Add put watch action (#32026)
Build: Remove pom generation for plugin zip files (#32180)
Fix comments causing errors with Java 11
Fix rollup on date fields that don't support epoch_millis (#31890)
Detect and prevent configuration that triggers a Gradle bug (#31912)
[test] port linux package packaging tests (#31943)
Revert "Introduce a Hashing Processor (#31087)" (#32178)
Remove empty @return from JavaDoc
Adjust SSLDriver behavior for JDK11 changes (#32145)
[test] use randomized runner in packaging tests (#32109)
Add support for field aliases. (#32172)
Painless: Fix caching bug and clean up addPainlessClass. (#32142)
Call setReferences() on custom referring tokenfilters in _analyze (#32157)
Fix BwC Tests looking for UUID Pre 6.4 (#32158)
Improve docs for search preferences (#32159)
use before instead of onOrBefore
Add more contexts to painless execute api (#30511)
Add EC2 credential test for repository-s3 (#31918)
A replica can be promoted and started in one cluster state update (#32042)
Fix Java 11 javadoc compile problem
Fix CP for namingConventions when gradle home has spaces (#31914)
Fix `range` queries on `_type` field for singe type indices (#31756)
[DOCS] Update TLS on Docker for 6.3 (#32114)
ESIndexLevelReplicationTestCase doesn't support replicated failures but it's good to know what they are
Remove versionType from translog (#31945)
Switch distribution to new style Requests (#30595)
Build: Skip jar tests if jar disabled
Painless: Add PainlessClassBuilder (#32141)
Build: Make additional test deps of check (#32015)
Disable C2 from using AVX-512 on JDK 10 (#32138)
Build: Move shadow customizations into common code (#32014)
Painless: Fix Bug with Duplicate PainlessClasses (#32110)
Remove empty @param from Javadoc
Re-disable packaging tests on suse boxes
Docs: Fix missing example script quote (#32010)
[ML] Wait for aliases in multi-node tests (#32086)
[ML] Move analyzer dependencies out of categorization config (#32123)
Ensure to release translog snapshot in primary-replica resync (#32045)
Handle TokenizerFactory TODOs (#32063)
Relax TermVectors API to work with textual fields other than TextFieldType (#31915)
Updates the build to gradle 4.9 (#32087)
Mute :qa:mixed-cluster indices.stats/10_index/Index - all’
Check that client methods match API defined in the REST spec (#31825)
Enable testing in FIPS140 JVM (#31666)
Fix put mappings java API documentation (#31955)
Add exclusion option to `keep_types` token filter (#32012)
[Test] Modify assert statement for ssl handshake (#32072)
Prior to 6.3 a trial license defaulted to security enabled. Since 6.3
trial licenses default to security disabled. If a cluster is upgraded from <6.3
to >6.3, then we detect this and mimic the old behaviour with respect
to security.
Write operations like Index, Delete, and Update now rely on the write index associated with
an alias to determine which index to operate against. This means writes will be accepted even
when an alias points to multiple indices, so long as one of them is the write index. Routing
values are taken from the AliasMetaData of the alias on the write index. All read operations
are left untouched.
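A hedged sketch of the resolution rule (the data shapes here are stand-ins, not the actual AliasMetaData classes):

```java
import java.util.Map;

class WriteIndexResolutionSketch {
    // Maps each backing index of the alias to whether it is flagged as the write index.
    static String resolveWriteIndex(String alias, Map<String, Boolean> backingIndices) {
        return backingIndices.entrySet().stream()
                .filter(Map.Entry::getValue)
                .map(Map.Entry::getKey)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException(
                        "no write index is defined for alias [" + alias + "]"));
    }
}
```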