* master:
Integrates soft-deletes into Elasticsearch (#33222)
Revert "Integrates soft-deletes into Elasticsearch (#33222)"
Add support for "authorization_realms" (#33262)
Authorization Realms allow an authenticating realm to delegate the task
of constructing a User object (with name, roles, etc) to one or more
other realms.
E.g. A client could authenticate using PKI, but then delegate to an LDAP
realm. The LDAP realm performs a "lookup" by principal, and then does
regular role-mapping from the discovered user.
This commit includes:
- authorization_realm support in the pki, ldap, saml & kerberos realms
- docs for authorization_realms
- checks that there are no "authorization chains"
(whereby "realm-a" delegates to "realm-b", but "realm-b" delegates to "realm-c")
Authorization realms is a platinum feature.
This PR removes the deprecated `Custom` class in `IndexMetaData`, in favor
of a `Map<String, DiffableStringMap>` that is used to store custom index
metadata. As part of this, there is now no way to set this metadata in a
template or create index request (since it's only set by plugins, or dedicated
REST endpoints).
The `Map<String, DiffableStringMap>` is intended to be a namespaced `Map<String,
String>` (`DiffableStringMap` implements `Map<String, String>`, so the signature
is more like `Map<String, Map<String, String>>`). This is so we can do things
like:
``` java
Map<String, String> ccrMeta = indexMetaData.getCustom("ccr");
```
And then have complete control over the metadata. This also means any
plugin/feature that uses this has to manage its own BWC, as the map is just
serialized as a map. It also means that if metadata is put in the map that isn't
used (for instance, if a plugin were removed), it causes no failures the way
an unregistered `Setting` would.
The reason I use a custom `DiffableStringMap` here rather than a plain
`Map<String, String>` is so the map can be diffed with previous cluster state
updates for serialization.
Supersedes #32683
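As a rough illustration of why diffing matters (a minimal sketch, not the actual DiffableStringMap implementation), only entries that changed between two cluster states need to be serialized:
```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of a map diff: only changed/added entries are captured,
// so serializing a cluster state update can skip unchanged custom metadata.
// (Deletions would be tracked separately in a real diff.)
class StringMapDiffSketch {
    static Map<String, String> changedEntries(Map<String, String> before, Map<String, String> after) {
        Map<String, String> changed = new HashMap<>();
        for (Map.Entry<String, String> entry : after.entrySet()) {
            if (!entry.getValue().equals(before.get(entry.getKey()))) {
                changed.put(entry.getKey(), entry.getValue());
            }
        }
        return changed;
    }
}
```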
This commit ensures that when `TriggerService.start()` is called,
we ensure in the trigger engine implementations that current watches are
removed instead of adding to the existing ones in
`TickerScheduleTriggerEngine.start()`
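A minimal sketch of that fix (types and names simplified from the real TickerScheduleTriggerEngine):
```java
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// On start(), build a fresh schedule map from the current watches and swap it
// in, instead of merging into the old map and keeping stale schedules around.
class TickerEngineSketch {
    volatile Map<String, Object> schedules = new ConcurrentHashMap<>();

    void start(Collection<String> watchIds) {
        Map<String, Object> newSchedules = new ConcurrentHashMap<>();
        for (String id : watchIds) {
            newSchedules.put(id, new Object()); // one active schedule per watch
        }
        this.schedules = newSchedules; // replaces, rather than adds to, existing schedules
    }
}
```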
Two additional minor fixes, where the result remains the same but less code gets executed.
1. If the node is not a data node, we forgot to set the status to
STARTING when watcher is being started. This should not be a big issue,
because a non-data node does not spend a lot of time loading as there
are no watches which need loading.
2. If a new cluster state came in during a reload, we had two checks in
place to abort loading the current one. The first one before we load all
the watches of the local node and the second before watcher is starting
with those new watches. Turned out that the first check was not
returning, which meant we always tried to load all the watches, and then
would fail on the second check. This has been fixed here.
Ensure that the SSLConfigurationReloaderTests can run with JDK 11
by pinning the Server TLS version to TLS1.2. This can be revisited
while tackling the effort to fully support TLSv1.3 in
https://github.com/elastic/elasticsearch/issues/32276
Resolves #32124
Ran for all locales in system to find locales which caused
problems in tests due to incorrect generalized time handling
in simple kdc ldap server.
Closes #33228
We need to limit the search request aggregations to whole multiples
of the configured interval for both histogram and date_histogram.
Otherwise, agg buckets won't overlap with the rolled up buckets
and the results will be incorrect.
For histogram, the validation is very simple: the request must be >= the config,
and modulo evenly (see the sketch after this list).
Dates are more tricky.
- If both request and config are fixed dates, we can convert to millis
and treat them just like the histo
- If both are calendar, we make sure the request is >= the config with
a static lookup map that ranks the calendar values relatively. All
calendar units are "singles", so they are evenly divisible already
- We disallow any other combination (one fixed, one calendar, etc)
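Here is a minimal sketch of the fixed-interval check described in that list (the calendar ranking lookup and the mixed-type rejection are omitted; names are illustrative):
```java
// Request interval must be >= the configured interval and evenly divisible by
// it, otherwise the requested buckets would not align with the rolled up ones.
class IntervalValidator {
    static void validateFixedInterval(long requestInterval, long configInterval) {
        if (requestInterval < configInterval || requestInterval % configInterval != 0) {
            throw new IllegalArgumentException("Rollup does not support interval ["
                + requestInterval + "] that is not a multiple of the configured interval ["
                + configInterval + "]");
        }
    }
}
```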
When a node that carries a watcher shard dies, or a shard is relocated to
another node, then watcher needs to trigger a reload not only on the node
where the shard relocation happened, but also on other nodes that hold
copies of this shard, as different watches may need to be loaded.
This commit takes the change of remote nodes into account by not only
storing the local shard allocation ids in the WatcherLifeCycleService,
but storing a list of ShardRoutings based on the local active shards.
This also fixes some tests, which had a wrong assumption. Using
`TestShardRouting.newShardRouting` in our tests for cluster state
creation led to the issue of always creating new allocation ids which
implicitly led to a reload.
This extracts a super class out of the rollup indexer called the AsyncTwoPhaseIndexer.
The implementor of it can define the query, transformation of the response,
indexing and the object to persist the position/state of the indexer.
The stats object used by the indexer to record progress is also now abstract, allowing
the implementation to provide custom stats beyond what the indexer provides. It also
allows the implementation to decide how the stats are presented (leaves toXContent()
up to the implementation).
This should allow new projects to reuse the search-then-index persistent task that Rollup
uses, but without the restrictions/baggage of how Rollup has to work internally to
satisfy time-based rollups.
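The extension points might look roughly like this sketch (hypothetical names, not the actual AsyncTwoPhaseIndexer API):
```java
import java.util.List;

// Hypothetical skeleton of the two-phase (search-then-index) abstraction:
// subclasses plug in the query, the response transformation, the indexing,
// and the position/state object that gets persisted between iterations.
abstract class TwoPhaseIndexerSketch<Position, SearchResponse> {
    protected abstract SearchResponse doSearch(Position position);      // define the query
    protected abstract List<Object> toIndexRequests(SearchResponse r);  // transform the response
    protected abstract Position nextPosition(SearchResponse r);         // advance the position
    protected abstract void persistState(Position position);            // save position/state

    final void iterate(Position position) {
        SearchResponse response = doSearch(position);
        toIndexRequests(response).forEach(request -> { /* index each request */ });
        persistState(nextPosition(response));
    }
}
```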
* master:
Painless: Add Bindings (#33042)
Update version after client credentials backport
Fix forbidden apis on FIPS (#33202)
Remove 6.x transport BWC Layer for `_shrink` (#33236)
Test fix - Graph HLRC tests needed another field adding to randomisation exception list
HLRC: Add ML Get Records API (#33085)
[ML] Fix character set finder bug with unencodable charsets (#33234)
TESTS: Fix overly long lines (#33240)
Test fix - Graph HLRC test was missing field name to be excluded from randomisation logic
Remove unsupported group_shard_failures parameter (#33208)
Update BucketUtils#suggestShardSideQueueSize signature (#33210)
Parse PEM Key files leniently (#33173)
INGEST: Add Pipeline Processor (#32473)
Core: Add java time xcontent serializers (#33120)
Consider multi release jars when running third party audit (#33206)
Update MSI documentation (#31950)
HLRC: create base timed request class (#33216)
[DOCS] Fixes command page titles
HLRC: Move ML protocol classes into client ml package (#33203)
Scroll queries asking for rescore are considered invalid (#32918)
Painless: Fix Semicolon Regression (#33212)
ingest: minor - update test to include dissect (#33211)
Switch remaining LLREST usage to new style Requests (#33171)
HLREST: add reindex API (#32679)
This commit changes the serialization version from V_7_0_0_alpha1 to
V_6_5_0 for the create token request and response with a client
credentials grant type. The client credentials work has now been
backported to 6.x.
Relates #33106
- third party audit detects jar hell with the JDK so we disable it
- jdk non portable in forbiddenapis detects classes being used from the
JDK (for FIPS) that are not portable; this is intended so we don't
scan for it on FIPS.
- different exclusion rules for third party audit on FIPS
Closes #33179
Some character sets cannot be encoded and this was tripping
up the binary data check in the ML log structure character
set finder.
The fix is to assume that if ICU4J identifies that some bytes
correspond to a character set that cannot be encoded and those
bytes contain zeroes then the data is binary rather than text.
Fixes #33227
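A minimal sketch of that heuristic, assuming the surrounding finder code supplies the detected Charset and a sample of the raw bytes:
```java
import java.nio.charset.Charset;

// If ICU4J maps the bytes to a character set that cannot be encoded and the
// sample contains zero bytes, treat the data as binary rather than text.
class BinaryDataCheckSketch {
    static boolean looksBinary(Charset detectedCharset, byte[] sample) {
        if (detectedCharset.canEncode()) {
            return false; // encodable charsets go through the normal checks
        }
        for (byte b : sample) {
            if (b == 0) {
                return true;
            }
        }
        return false;
    }
}
```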
Exclude classes meant for newer versions than the one we are auditing against, as those classes won't be found. There's no reason to exclude JDK classes from newer versions; with this PR, we will not extract them in the first place.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. In a
long series of PRs I've changed all of the old style requests that I
could find with `grep`. In this PR I change all requests that I could
find by *removing* the deprecated methods. Since this is a non-trivial
change I do not include actually removing the deprecated requests. I'll
do that in a follow up. But this should be the last set of usage
removals before the actual deprecated method removal. Yay!
* master:
[Rollup] Better error message when trying to set non-rollup index (#32965)
HLRC: Use Optional in validation logic (#33104)
Remove unused User class from protocol (#33137)
ingest: Introduce the dissect processor (#32884)
[Docs] Add link to es-kotlin-wrapper-client (#32618)
[Docs] Remove repeating words (#33087)
Minor spelling and grammar fix (#32931)
Remove support for deprecated params._agg/_aggs for scripted metric aggregations (#32979)
Watcher: Simplify finding next date in cron schedule (#33015)
Run Third party audit with forbidden APIs CLI (part3/3) (#33052)
Fix plugin build test on Windows (#33078)
HLRC+MINOR: Remove Unused Private Method (#33165)
Remove old unused test script files (#32970)
Build analysis-icu client JAR (#33184)
Ensure to generate identical NoOp for the same failure (#33141)
ShardSearchFailure#readFrom to set index and shardId (#33161)
We don't allow the user to configure a rollup index against an
existing index, but the exceptions that we return are not clear about
that. They indicate issues with metadata, instead of stating
the real reason (not allowed to use a non-rollup index to store
rollup data).
This makes the exception better, and adds a bit more testing
This commit removes the unused User class from the protocol project.
This class was originally moved into protocol in preparation for moving
more request and response classes, but given the change in direction
for the HLRC this is no longer needed. Additionally, this change also
changes the package name for the User object in x-pack/plugin/core to
its original name.
This commit makes primary-replica resyncer use Lucene as the source of
history operation instead of translog if soft-deletes is enabled. With
this change, we no longer expose translog snapshot directly in IndexShard.
Relates #29530
These were broken when fetch exceptions were introduced to the status
object but equals and hash code were not updated then. This commit
addresses that.
Today we fetch the mapping from the leader and apply it as a mapping
update whenever the index metadata version on the leader changes. Yet,
the index metadata can change for many reasons other than a mapping
update (e.g., settings updates, adding an alias, or a replica being
promoted to a primary among many other reasons). This commit builds on
the addition of a mapping version to the index metadata to only fetch
mapping updates when the mapping version increases. This reduces the
number of these fetches and application of mappings on the follower to
the bare minimum.
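Conceptually, the follower's check reduces to something like this sketch (field and method names are illustrative):
```java
// Only fetch and apply the leader's mapping when the mapping version
// increases; plain index metadata changes (settings, aliases, primary
// promotions, ...) no longer trigger a fetch.
class MappingUpdateCheckSketch {
    private long lastAppliedMappingVersion = 0;

    void onLeaderMetadataChange(long leaderMappingVersion) {
        if (leaderMappingVersion > lastAppliedMappingVersion) {
            // fetchMappingFromLeader(); applyMappingUpdate();  (omitted)
            lastAppliedMappingVersion = leaderMappingVersion;
        }
    }
}
```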
The code introduced in 3fa36807f8 to fix
an issue with crons always returning -1 was not very readable. This
implementation uses streams to improve readability.
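The stream-based version presumably looks something like this sketch, assuming a Cron type whose getNextValidTimeAfter returns -1 when there is no future match:
```java
import java.util.Arrays;

// Find the earliest upcoming time across all cron expressions, skipping
// expired expressions (-1) instead of letting them win the comparison.
class NextScheduledTimeSketch {
    interface Cron {
        long getNextValidTimeAfter(long time);
    }

    static long nextScheduledTimeAfter(long time, Cron[] crons) {
        return Arrays.stream(crons)
            .mapToLong(cron -> cron.getNextValidTimeAfter(time))
            .filter(next -> next > -1)
            .min()
            .orElse(-1);
    }
}
```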
The new implementation is functionally equivalent to the old, Ant-based one.
It parses the task's standard error to get the missing classes and violations in the same way.
I considered re-using ForbiddenApisCliTask but Gradle makes it hard to build inheritance with tasks that have task actions, since the order of the task actions can't be controlled.
This inheritance isn't fully desired either, as the third party audit task is much more opinionated and we don't want to expose some of the configuration.
We could probably extract a common base class without any task actions, but it's probably more trouble than it's worth.
Closes #31715
* master:
Adjust BWC version on mapping version
Token API supports the client_credentials grant (#33106)
Build: forked compiler max memory matches jvmArgs (#33138)
Introduce mapping version to index metadata (#33147)
SQL: Enable aggregations to create a separate bucket for missing values (#32832)
Fix grammar in contributing docs
SECURITY: Fix Compile Error in ReservedRealmTests (#33166)
APM server monitoring (#32515)
Support only string `format` in date, root object & date range (#28117)
[Rollup] Move toBuilders() methods out of rollup config objects (#32585)
Fix forbiddenapis on java 11 (#33116)
Apply publishing to generate pom (#33094)
Have circuit breaker succeed on unknown mem usage
Do not lose default mapper on metadata updates (#33153)
Fix a mappings update test (#33146)
Reload Secure Settings REST specs & docs (#32990)
Refactor CachingUsernamePassword realm (#32646)
This change adds support for the client credentials grant type to the
token api. The client credentials grant allows for a client to
authenticate with the authorization server and obtain a token to access
as itself. Per RFC 6749, a refresh token should not be included with
the access token and as such a refresh token is not issued when the
client credentials grant is used.
The addition of the client credentials grant will allow users
authenticated with mechanisms such as kerberos or PKI to obtain a token
that can be used for subsequent access.
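For illustration, a token request with this grant type could be issued through the low level REST client's new Request API (the 6.x endpoint path shown here is an assumption):
```java
import java.io.IOException;

import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Uses the Request-flavored low level REST client API mentioned elsewhere in
// this log. The response carries an access_token but, per RFC 6749, no
// refresh_token for this grant type.
class ClientCredentialsTokenExample {
    static Response createToken(RestClient client) throws IOException {
        Request request = new Request("POST", "/_xpack/security/oauth2/token");
        request.setJsonEntity("{ \"grant_type\": \"client_credentials\" }");
        return client.performRequest(request);
    }
}
```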
* Adding new MonitoredSystem for APM server
* Teaching Monitoring template utils about APM server monitoring indices
* Documenting new monitoring index for APM server
* Adding monitoring index template for APM server
* Copy pasta typo
* Removing metrics.libbeat.config section from mapping
* Adding built-in user and role for APM server user
* Actually define the role :)
* Adding missing import
* Removing index template and system ID for apm server
* Shortening line lengths
* Updating expected number of built-in users in integration test
* Removing "system" from role and user names
* Rearranging users to make tests pass
Refactors the logic of authentication and lookup caching in
`CachingUsernamePasswordRealm`. Nothing changed about
the single-inflight-request or positive caching.
* master:
Add proxy support to RemoteClusterConnection (#33062)
TEST: Skip assertSeqNos for closed shards (#33130)
TEST: resync operation on replica should acquire shard permit (#33103)
Switch remaining x-pack tests to new style Requests (#33108)
Switch remaining tests to new style Requests (#33109)
Switch remaining ml tests to new style Requests (#33107)
Build: Line up IDE detection logic
Security index expands to a single replica (#33131)
HLRC: request/response homogeneity and JavaDoc improvements (#33133)
Checkstyle!
[Test] Fix sporadic failure in MembershipActionTests
Revert "Do NOT allow termvectors on nested fields (#32728)"
[Rollup] Move toAggCap() methods out of rollup config objects (#32583)
Fix race condition in scheduler engine test
This adds support for connecting to a remote cluster through
a tcp proxy. A remote cluster can be configured with an additional
`search.remote.$clustername.proxy` setting. This proxy will be used
to connect to remote nodes for every node connection established.
We still try to sniff the remote cluster and connect to nodes directly
through the proxy, which has to support some kind of routing to these nodes.
Yet, this routing mechanism requires the handshake request to include some
kind of information about where to route to, which is not yet implemented. The effort
to use the hostname and an optional node attribute for routing is tracked
in #32517
Closes #31840
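For illustration, the new setting might appear in elasticsearch.yml like this (cluster name and endpoints are hypothetical):
```
search.remote.cluster_one.seeds: ["10.0.0.1:9300"]
search.remote.cluster_one.proxy: "proxy.example.com:9300"
```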
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/saml-idp-tests` and
`x-pack/qa/security-setup-password-tests` projects to use the new
versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin/ml/qa/native-multi-node-tests`,
`x-pack/plugin/ml/qa/single-node-tests` projects to use the new
versions.
This change removes the use of 0-all for auto expand replicas for the
security index. The use of 0-all causes some unexpected behavior with
certain allocation settings. This change allows us to avoid these with
a default install. If necessary, the number of replicas can be tuned by
the user.
Closes #29933
Closes #29712
This commit adds tracking and reporting for fetch exceptions. We track
fetch exceptions per fetch, keeping track of up to the maximum number of
concurrent fetches. With each failing fetch, we associate the from
sequence number with the exception that caused the fetch. We report
these in the CCR stats endpoint, and add some testing for this tracking.
Welp, I broke this. I merged a change to auto-discover the CCR QA tests
by making :x-pack:plugin:ccr:check auto-discover the check tasks in the
qa sub-project. Yet, the check tasks for these sub-projects did not
depend on the necessary test tasks (as we were previously doing this
directly from the ccr build file). This commit fixes this!
This commit addresses a race condition in the scheduler engine test that
a listener that throws an exception does not cause other listeners to be
skipped. The race here is that we were counting down a latch, and then
throwing an exception yet an assertion that expected the exception to
have been thrown already could execute after the latch was counted down
for the final time but before the exception was thrown and acted upon by
the scheduler engine. This commit addresses this by moving the counting
down of the latch to definitely be after the exception was acted upon by
the scheduler engine.
This commit removes the getMetadata() methods from the DateHistoGroupConfig
and HistoGroupConfig objects. This way the configuration objects do not rely
on RollupField.formatMetaField() anymore and do not expose a getMetadata()
method that is tightly coupled to the rollup indexer.
* es/master: (62 commits)
[DOCS] Add docs for Application Privileges (#32635)
Add versions 5.6.12 and 6.4.1
Do NOT allow termvectors on nested fields (#32728)
[Rollup] Return empty response when aggs are missing (#32796)
[TEST] Add some ACL yaml tests for Rollup (#33035)
Move non duplicated actions back into xpack core (#32952)
Test fix - GraphExploreResponseTests should not randomise array elements Closes #33086
Use `addIfAbsent` instead of checking if an element is contained
TESTS: Fix Random Fail in MockTcpTransportTests (#33061)
HLRC: Fix Compile Error From Missing Throws (#33083)
[DOCS] Remove reload password from docs cf. #32889
HLRC: Add ML Get Buckets API (#33056)
Watcher: Improve error messages for CronEvalTool (#32800)
Search: Support of wildcard on docvalue_fields (#32980)
Change query field expansion (#33020)
INGEST: Cleanup Redundant Put Method (#33034)
SQL: skip uppercasing/lowercasing function tests for AZ locales as well (#32910)
Fix the default pom file name (#33063)
Switch ml basic tests to new style Requests (#32483)
Switch some watcher tests to new style Requests (#33044)
...
If a search request doesn't contain aggs (or an empty agg object),
we should just return an empty response. This is how the normal search
API works if you specify zero hits and empty aggs.
The existing behavior throws an exception because it tries to send
an empty msearch.
Closes #32256
These two tests complement the existing unit tests which check Rollup's
ACL/security integration.
The first test creates two indices, puts a document in each one, and then
assigns a role to the test user that can only access one of the indices.
A rollup job is created with a pattern that would match both indices,
and we verify that only the allowed document was rolled up (e.g. verifying
that the unpermissioned index stays hidden).
The second test creates a single index with two documents tagged by
the keyword "public"/"private". An attribute-based role is created
that only allows viewing "public" documents. We then verify the rollup
job only rolled the "public" doc, and not the "private" one.
Most actions' requests and responses were moved from xpack core into
protocol. We have decided to instead duplicate the actions in the HLRC
instead of trying to reuse them. This commit moves the non duplicated
actions back into xpack core and severs the tie between xpack core and
protocol so no other actions can be moved and not duplicated.
CronEvalTool now prints an error only for cron expressions that result in
no upcoming time events.
If a cron expression results in fewer than the specified count
(default 10) of time events, all the upcoming times are printed
without displaying an error message.
Closes #32735
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/ml-basic-multi-node` project to use
the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/smoke-test-monitoring-with-watcher`,
`x-pack/qa/smoke-test-watcher`, and
`x-pack/qa/smoke-test-watcher-with-security` projects to use the new
versions.
In our Netty layer we have had to take extra precautions against Netty
catching throwables which prevents them from reaching the uncaught
exception handler. This code has taken on additional uses in the NIO layer
and now in the scheduler engine because there are other components in
stack traces that could catch throwables and suppress them from reaching
the uncaught exception handler. This commit is a simple cleanup of the
iterative evolution of this code to refactor all uses into a single
method in ExceptionsHelper.
Add documentation for #31238
- Add documentation for the req_authn_context_class_ref setting
- Add a section in SAML Guide regarding the use of SAML
Authentication Context.
This change splits the keytab file test cases into separate tests
instead of relying on randomized testing to cover different branches
of the code in the same test.
On the Windows platform, the keytab file permission test required
additional security permissions for the test framework. As this was
the only test that required those permissions, we skip that test on
Windows. The same scenario gets tested in *nix environments.
Closes #32768
* HLRC: Adding GET ML Job info API
* HLRC: Adding GET Job ML API
* Fixing QueryPage license header
* Adding serialization tests, addressing minor issues
* Renaming querypage, changing the dependency on it
* Making things immutable
* Fixing build failure due to method rename
This reworks how we configure the `shadow` plugin in the build. The major
change is that we no longer bundle dependencies in the `compile` configuration,
instead we bundle dependencies in the new `bundle` configuration. This feels
more right because it is a little more "opt in" rather than "opt out" and the
name of the `bundle` configuration is a little more obvious.
As a neat side effect of this, the `runtimeElements` configuration used when
one project depends on another now contains exactly the dependencies needed
to run the project so you no longer need to reference projects that use the
shadow plugin like this:
```
testCompile project(path: ':client:rest-high-level', configuration: 'shadow')
```
You can instead use the much more normal:
```
testCompile "org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}"
```
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/audit-tests`,
`x-pack/qa/ml-disabled`, and `x-pack/qa/multi-node` projects to use the
new versions.
Today we are by-hand maintaining a list of CCR QA sub-projects that the
check task depends on. This commit simplifies this by finding these
sub-projects automatically and adding their check task as dependencies
of the CCR check task.
This commit moves the ML QA tests to be a sub-project of ML. The purpose
of this refactoring is to enable ML developers to run
:x-pack:plugin:ml:check and run the vast majority of ML tests with a
single command (this still does not contain the ML REST tests, nor the
upgrade tests). This simplifies local development for faster iteration.
When discussing this test, it made little sense that testMonitoringService
would fail but not testMonitoringBulk, given their similarity. So we agreed to
enable it again.
Relates #29880
* Add relevant documentation for FIPS 140-2 compliance.
* Introduce `fips_mode` setting.
* Discuss necessary configuration for FIPS 140-2
* Discuss introduced limitations by FIPS 140-2
* [DOCS] Add configurable password hashing docs
Adds documentation about the newly introduced configuration option
for setting the password hashing algorithm to be used for the users
cache and for storing credentials for the native and file realm.
When the application privileges feature was backported to 6.x/6.4 the
BWC version checks on the backport were updated to 6.4.0, but master
was not updated.
This commit updates all relevant version checks, and adds tests.
This commit adds a troubleshooting section for Kerberos.
Most of the time the problems seen are caused by invalid
configurations, like a keytab missing principals or credentials
that are not up to date. Time synchronization is an important part of
Kerberos infrastructure and time skew can cause problems.
For further debugging, the documentation explains how to enable JAAS
Kerberos login module debugging and Kerberos/SPNEGO debugging
by setting JVM system properties.
This commit implements licensing for CCR. CCR will require a platinum
license, and administrative endpoints will be disabled when a license is
non-compliant.
There are two problems with the scheduler engine today. Both relate to
listeners that throw.
The first problem is that any triggered listener that throws a plain old
exception will cause no additional listeners to be triggered for the
event, and will also cause the scheduler to never be invoked again. This
leads to lost events and is bad.
The second problem is that any triggered listener that throws an error
of the fatal kind will not cause that error to reach the
uncaught exception handler. This is because the triggered listener is
executed as a future task under a scheduled thread pool executor. A
throwable there gets caught by the JDK framework and set as the outcome
of the future task. Since we never inspect these tasks for their
outcomes, nor is there a good place to do this, we have to handle these
errors ourselves. To do this, we catch them and dispatch them to the
uncaught exception handler via a forked thread. This is similar to our
handling in Netty.
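A minimal sketch of that dispatch (the real helper lives in ExceptionsHelper; this simplified version is for illustration):
```java
// Rethrow a fatal error on a freshly forked thread so it reaches the uncaught
// exception handler instead of being swallowed as a FutureTask outcome.
class FatalErrorDispatchSketch {
    static void maybeDieOnAnotherThread(Throwable throwable) {
        if (throwable instanceof Error) {
            final Error error = (Error) throwable;
            Thread thread = new Thread(() -> { throw error; });
            thread.start();
        }
    }
}
```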
* HLRC: Adding ML Close Job API
HLRC: Adding ML Close Job API
* reconciling request converters
* Adding serialization tests and addressing PR comments
* Changing constructor order
* master:
Generalize remote license checker (#32971)
Trim translog when safe commit advanced (#32967)
Fix an inaccuracy in the dynamic templates documentation. (#32890)
Logging: Use settings when building daemon threads (#32751)
All Translog inner closes should happen after tragedy exception is set (#32674)
HLREST: AwaitsFix ML Test
Pass DiscoveryNode to initiateChannel (#32958)
Add mzn and dz to unsupported locales (#32957)
Use settings from the context in BootstrapChecks (#32908)
Update docs for node specifications (#30468)
HLRC: Forbid all Elasticsearch logging infra (#32784)
Only configure publishing if it's applied externally (#32351)
Fixes libs:dissect when in eclipse
Protect ScriptedMetricIT test cases against failures on 0-doc shards (#32959) (#32968)
[Kerberos] Add documentation for Kerberos realm (#32662)
Watcher: Properly find next valid date in cron expressions (#32734)
Fix some small issues in the getting started docs (#30346)
Set forbidden APIs target compatibility to compiler java version (#32935)
Move connection listener to ConnectionManager (#32956)
Machine learning has a baked-in remote license checker for use in checking
license compatibility of a remote license. This remote license checker
has general usage for any feature that relies on a remote cluster. For
example, cross-cluster replication will pull changes from a remote
cluster and require that the local and remote clusters have platinum
licenses. This commit generalizes the remote cluster license check for
use in cross-cluster replication.
Subclasses of `EsIntegTestCase` run multiple Elasticsearch nodes in the
same JVM and when we log we look at the name of the thread to figure out
the node name. This makes sure that all calls to `daemonThreadFactory`
include the node name.
Closes #32574
I'd like to follow this up with more drastic changes that make it
impossible to do this incorrectly but that change is much larger than
this and I'd like to get these log lines fixed up sooner rather than
later.
This commit adds documentation for configuring the Kerberos realm.
The documentation highlights important terminology and requirements
before creating a Kerberos realm.
Most of the documentation is centered around configuration from
Elasticsearch rather than going deep into the Kerberos implementation.
Kerberos realm settings are mentioned in the security settings
for the Kerberos realm.
When a list/an array of cron expressions was provided, and one of those
expressions had already expired, the expired one was considered as an option
instead of the next valid one.
This commit also reduces the visibility of the CronnableSchedule and
refactors a comparator to Java 8 style.
This is a followup to #31886. After that commit the
TransportConnectionListener had to be propogated to both the
Transport and the ConnectionManager. This commit moves that listener
to completely live in the ConnectionManager. The request and response
related methods are moved to a TransportMessageListener. That listener
continues to live in the Transport class.
* elastic/master: (46 commits)
NETWORKING: Make RemoteClusterConn. Lazy Resolve DNS (#32764)
[DOCS] Splits the users API documentation into multiple pages (#32825)
[DOCS] Splits the token APIs into separate pages (#32865)
[DOCS] Creates redirects for role management APIs page
Bypassing failing test PainlessDomainSplitIT#testHRDSplit (#32966)
TEST: Mute testRetentionPolicyChangeDuringRecovery
[DOCS] Fixes more broken links to role management APIs
[Docs] Tweaks and fixes to rollup docs
[DOCS] Fixes links to role management APIs
[ML][TEST] Fix BasicRenormalizationIT after adding multibucket feature
[DOCS] Splits the roles API documentation into multiple pages (#32794)
[TEST] Run pre 6.4 nodes in non-FIPS JVMs (#32901)
Make Geo Context Mapping Parsing More Strict (#32821)
[ML] fix updating opened jobs scheduled events (#31651) (#32881)
Scripted metric aggregations: add deprecation warning and system property to control legacy params (#31597)
Tests: Fix timezone conversion in DateTimeUnitTests
Enable FIPS140LicenseBootstrapCheck (#32903)
Fix InternalAutoDateHistogram reproducible failure (#32723)
Remove assertion in testDocStats on deletedDocs counter (#32914)
HLRC: Move ML request converters into their own class (#32906)
...
* Lazy resolve DNS (i.e. `String` to `DiscoveryNode`) to not run into indefinitely caching lookup issues (provided the JVM dns cache is configured correctly as explained in https://www.elastic.co/guide/en/elasticsearch/reference/6.3/networkaddress-cache-ttl.html)
* Changed `InetAddress` type to `String` for that higher up the stack
* Passed down `Supplier<DiscoveryNode>` instead of outright `DiscoveryNode` from `RemoteClusterAware#buildRemoteClustersSeeds` on to lazy resolve DNS when the `DiscoveryNode` is actually used (could've also passed down the value of `clusterName = REMOTE_CLUSTERS_SEEDS.getNamespace(concreteSetting)` together with the `List<String>` of hosts, but this route seemed to introduce less duplication and resulted in a significantly smaller changeset).
* Closes #28858
- Missing links to new IndexCaps API
- Incorrect security permissions on IndexCaps API
- GetJobs API must supply a job (or `_all`); omitting it throws an error
- Link to search/agg limitations from RollupSearch API
- Tweak URLs in quick reference
- Formatting of overview page
As the multibucket feature was merged in, this test hit a side effect
which meant that buckets trailing an anomaly could become anomalous.
This commit fixes the problem by filtering low score records when
we request them.
Elasticsearch versions earlier than 6.4.0 cannot properly run in a
FIPS 140 JVM. This commit ensures that we use a non-FIPS JVM for
nodes that we spin up in BWC tests even when we're testing FIPS.
* ML: fix updating opened jobs scheduled events (#31651)
* Adding UpdateParamsTests license header
* Adding integration test and addressing PR comments
* addressing test and job names
This commit removes the put privilege API in favor of having a single API to
create and update privileges. If we see the need to have an API like this in
the future we can always add it back.
The Kibana settings docs that these watches rely on can sometimes
contain no xpack settings. When this is the case, we will end up with a
null pointer exception in the script. We need to guard against this in these
scripts, so this commit does that.
This change cleans up some methods in the CharArrays class from x-pack, which
includes the unification of char[] to utf8 and utf8 to char[] conversions that
intentionally do not use strings. There was previously an implementation in
x-pack and in the reloading of secure settings. The method from the reloading
of secure settings was adopted as it handled more scenarios related to the
backing byte and char buffers that were used to perform the conversions. The
cleaned up class is moved into libs/core to allow it to be used by requests
that will be migrated to the high level rest client.
Relates #32332
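For reference, a char[] to UTF-8 conversion that never materializes a String, so the backing buffers can be zeroed, looks roughly like this sketch (illustrative, not the exact CharArrays code):
```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Convert char[] to UTF-8 bytes without going through a String, then zero out
// the intermediate buffer so sensitive data does not linger on the heap.
class CharArraysSketch {
    static byte[] toUtf8Bytes(char[] chars) {
        ByteBuffer byteBuffer = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));
        byte[] bytes = Arrays.copyOfRange(byteBuffer.array(), byteBuffer.position(), byteBuffer.limit());
        Arrays.fill(byteBuffer.array(), (byte) 0); // clear the backing buffer
        return bytes;
    }
}
```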
* master:
Fix global checkpoint listeners test
HLRC: adding machine learning open job (#32860)
[ML] Add log structure finder functionality (#32788)
INGEST: Add Configuration Except. Data to Metadata (#32322)
This change adds a library to ML that can be used to deduce a log
file's structure given only a sample of the log file.
Eventually this will be used to add an endpoint to ML to make the
functionality available to end users, but this will follow in a
separate change.
The functionality is split into a library so that it can also be
used by a command line tool without requiring the command line
tool to include all server code.
This removes custom Response classes that extend `AcknowledgedResponse` and do nothing, these classes are not needed and we can directly use the non-abstract super-class instead.
While this appears to be a large PR, no code has actually changed, only class names have been changed and entire classes removed.
[ML] Removing old per-partition normalization code
Per-partition normalization is an old, undocumented feature that was
never used by clients. It has been superseded by per-partition maximum
scoring.
To maintain communication compatibility with nodes prior to 6.5, it is
necessary to maintain/cope with the old wire format.
This change removes the PasswordHashingBootstrapCheck and replaces it
with validation on the setting itself. This ensures we always get a
valid value from the setting when it is used.
This change moves the validation for values of usernames and passwords
from the request to the transport action. This is done to prevent
the need to move more classes into protocol once we add this API to the
high level rest client. Additionally, this resolves an issue where
validation depends on settings and we always pass empty settings
instead of the actual settings.
Relates #32332
The HipChatMessage#render method is no longer used; instead the
HipChatAccount#render method is used in the ExecutableHipChatAction. Only a
test that validated the HttpProxy still used this render method. This
commit cleans it up.
The auth.basic package was an example of a single implementation
interface that leaked into many different classes. In order to clean
this up, the HttpAuth interface, factories, and Registries all were
removed and the single implementation, BasicAuth, was substituted in all
cases. This removes some dependencies between Auth and the Templates,
which can now use static methods on BasicAuth. BasicAuth was also moved
into the http package and all of the other classes were removed.
The PagerDuty v1 API is EOL and will stop accepting new accounts
shortly. This commit swaps out the watcher use of the v1 API with the
new v2 API. It does not change anything about the existing watcher
API.
Closes #32243
All Unit tests in this module are muted in FIPS 140 JVMs and
as such the CI run fails. This commit disables the test task for the
module in a FIPS JVM and reverts adding a dummy test in
4cbcc1.
We only upgrade the ID when the state is saved in one of four scenarios:
- when we reach a checkpoint (every 50 pages)
- when we run out of data
- when explicitly stopped
- on failure
The test was relying on the pre-upgrade to finish, save state and then
the post-upgrade to start, hit the end of data and upgrade ID. THEN
get the new doc and apply the new ID.
But I think this is vulnerable to timing issues. If the pre-upgrade
portion shut down before it saved the state, when restarting we would run
through all the data from the beginning with the old ID, meaning both
docs would still have the old scheme.
This change makes the pre-upgrade wait for the job to go back to STARTED
so that we know it persisted the end point. Post-upgrade, it stops and
restarts the job to ensure the state was persisted and the ID upgraded.
That _should_ rule out the above timing issue.
Closes #32773
Added infrastructure to push through the 'person name field value' to
the normalizer process. This is required by the normalizer to retrieve
the maximum scores for individual partitions.
The request and response classes have been extracted from `IndexUpgradeInfoAction` into top-level classes, and moved to the protocol jar. The `UpgradeActionRequired` enum is also moved.
Relates to #29827
This commit adds missing debug log statements for exceptions
that occur during ticket validation. I thought these
get logged somewhere else in the authentication chain,
but even after enabling trace logs I could not see them
logged. As the Kerberos exception messages are cryptic,
adding the full stack trace will help debugging faster.
* Clear Job#finished_time when it is opened (#32605)
* not returning failure when Job#finished_time is not reset
* Changing error log string and source string
Our rest testing framework has support for sniffing the host metadata on
startup and, before this change, it'd sniff that metadata before running
the first test. This prevents running these tests against
elasticsearch installations that won't support sniffing like Elastic
Cloud. This change allows tests to only sniff for metadata when they
encounter a test with a `node_selector`. These selectors are the things
that need the metadata anyway and they are super rare. Tests that use
these won't be able to run against installations that don't support
sniffing but we can just skip them. In the case of Elastic Cloud, these
tests were never going to work against Elastic Cloud anyway.
The upcoming ML log structure finder functionality will use these
libraries, and it makes sense to use the same versions that are
being used elsewhere in Elasticsearch. This is especially true
with icu4j, which is pretty big.
This commit modifies the test to handle file permission
tests in windows/dos environments. The test requires access
to UserPrincipal, so we have modified the plugin-security policy
to access user information.
Closes #32637
The incorrect NodeInfo is created when the optional parameter is not used, leading to the incorrect constructor being used. Simplified LocateFunctionProcessorDefinition by using one constructor instead of two.
Fixes https://github.com/elastic/elasticsearch/issues/32554
Skip the comparative tests using lowercasing/uppercasing against H2 (which considers the Locale).
ES-SQL is, so far, ignoring the Locale.
Still, the same queries are executed against ES-SQL alone and results asserted to be correct.
We previously discussed moving the classes extending `AcknowledgedResponse` to
simply use `AcknowledgedResponse`, making the class non-abstract.
This moves the first class to do this, removing `WritePipelineResponse` in the
process.
If we like the way this looks, I will switch the remaining classes over to using
`AcknowledgedResponse`.
* master:
Cross-cluster search: preserve cluster alias in shard failures (#32608)
Handle AlreadyClosedException when bumping primary term
[TEST] Allow to run in FIPS JVM (#32607)
[Test] Add ckb to the list of unsupported languages (#32611)
SCRIPTING: Move Aggregation Scripts to their own context (#32068)
Painless: Use LocalMethod Map For Lookup at Runtime (#32599)
[TEST] Enhance failure message when bulk updates have failures
[ML] Add ML result classes to protocol library (#32587)
Suppress LicensingDocumentationIT.testPutLicense in release builds (#32613)
[Rollup] Update wire version check after backport
Suppress Wildfly test in FIPS JVMs (#32543)
[Rollup] Improve ID scheme for rollup documents (#32558)
ingest: doc: move Dot Expander Processor doc to correct position (#31743)
[ML] Add some ML config classes to protocol library (#32502)
[TEST]Split transport verification mode none tests (#32488)
Core: Move helper date formatters over to java time (#32504)
[Rollup] Remove builders from DateHistogramGroupConfig (#32555)
[TEST] unmutes SearchAsyncActionTests and adds debugging info
[ML] Add Detector config classes to protocol library (#32495)
[Rollup] Remove builders from MetricConfig (#32536)
Tests: Add rolling upgrade tests for watcher (#32428)
Fix race between replica reset and primary promotion (#32442)
Rest HL client: Add get license action
Continues to use String instead of a more complex License class to
hold the license text similarly to put license.
Relates #29827
The Apache HttpComponents support for the SPNEGO scheme
uses the canonical name by default.
Also, when resolving the host name on CentOS, there are by
default other aliases, so we add them to the
DelegationPermission.
Closes #32498
If a leader index is deleted while there is an active follower, the
follower will send shard changes requests bound for the leader
index. Today this will result in a null pointer exception because there
will not be an index routing table for the index. A null pointer
exception looks like a bug to a user so this commit addresses this by
throwing an index not found exception instead.
* Change SecurityNioHttpServerTransportTests to use PEM key and
certificate files instead of a JKS keystore so that these tests
can also run in a FIPS 140 JVM
* Do not attempt to run cases with ssl.verification_mode NONE in
SessionFactoryTests so that the tests can run in a FIPS 140 JVM
This commit addresses a race that can happen in the basic CCR stats REST
tests. Namely, peek reads can fire before the REST test client fires the
stats request. This means that we have to weaken our assertions about
the expected stats response.
This commit adds the ML results classes to the X-Pack protocol
library used by the high level REST client.
(Other commits will add the config classes and stats classes.)
These classes:
- Are publicly immutable
- Are privately mutable - this is perhaps not as nice as the
config classes, but to do otherwise would require adding
builders and the corresponding server-side classes that the
old transport client used don't have builders
- Have little/no validation of field values beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names and getter names as the corresponding
classes in X-Pack core to ease migration for transport client
users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
Bumping down the version to 6.4 since the backport is complete. Also
adds some missing version checks to the bwc tests to make sure it
only runs on the correct versions
Previously, we were using a simple CRC32 for the IDs of rollup documents.
This is a very poor choice however, since 32bit IDs leads to collisions
between documents very quickly.
This commit moves Rollups over to a 128bit ID. The ID is a concatenation
of all the keys in the document (similar to the rolling CRC before),
hashed with 128bit Murmur3, then base64 encoded. Finally, the job
ID and a delimiter (`$`) are prepended to the ID.
This guarantees that there are 128 bits per job. 128 bits should
essentially remove all chances of collisions, and the prepended
job ID means that _if_ there is a collision, it stays "within"
the job.
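A sketch of the ID construction (the seed, key ordering, and MurmurHash3 helper shown here are assumptions for illustration):
```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;

import org.elasticsearch.common.hash.MurmurHash3;

// Concatenate the document's keys (like the rolling CRC did), hash the result
// with 128-bit Murmur3, base64 the hash, then prepend the job ID and a `$`.
class RollupIdSketch {
    static String buildId(String jobId, List<Object> keys) {
        StringBuilder joined = new StringBuilder();
        keys.forEach(joined::append);
        byte[] bytes = joined.toString().getBytes(StandardCharsets.UTF_8);
        MurmurHash3.Hash128 hash =
            MurmurHash3.hash128(bytes, 0, bytes.length, 19, new MurmurHash3.Hash128());
        byte[] hashBytes = ByteBuffer.allocate(16).putLong(hash.h1).putLong(hash.h2).array();
        return jobId + "$" + Base64.getUrlEncoder().withoutPadding().encodeToString(hashBytes);
    }
}
```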
BWC notes:
We can only upgrade the ID scheme after we know there has been a good
checkpoint during indexing. We don't rely on a STARTED/STOPPED
status since we can't guarantee that resulted from a real checkpoint,
or other state. So we only upgrade the ID after we have reached
a checkpoint state during an active index run, and only after the
checkpoint has been confirmed.
Once a job has been upgraded and checkpointed, the version increments
and the new ID is used in the future. All new jobs use the
new ID from the start
This commit adds four ML config classes to the X-Pack protocol
library used by the high level REST client.
(Other commits will add the remaining config classes, plus results
and stats classes.)
These classes:
- Are immutable
- Have little/no validation of field values beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names, member names and getter/setter names
as the corresponding classes in X-Pack core to ease migration
for transport client users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
This commit splits SecurityNetty4TransportTests into two methods:
one handling verification modes certificate and full, and one
handling verification mode none. This is done so that the second
method can be muted in a FIPS 140 JVM, where verification mode none
cannot be used.
Same motivation as #32507 but for the DateHistogramGroupConfig
configuration object. This pull request also changes the format of the
time zone from a Joda's DateTimeZone to a simple String.
It should help to port the API to the high level rest client and allows
clients to not be forced to use the Joda Time library. Serialization is
impacted but does not need a backward compatibility layer as
DateTimeZone are serialized as String anyway. XContent also expects
a String for timezone, so I found it easier to move everything to String.
Related to #29827
This commit adds the Detector class and its dependencies to the
X-Pack protocol library used by the high level REST client.
(Future commits will add the remaining config classes, plus results
and stats classes.)
These classes:
- Are immutable, with builders, but the builders do no validation
beyond null checks
- Are convertible to and from X-Content, but NOT wire transportable
- Have lenient parsers to maximize compatibility across versions
- Have the same class names, member names and getter/setter names
as the corresponding classes in X-Pack core to ease migration
for transport client users
- Don't reproduce all the methods that do calculations or
transformations that the corresponding classes in X-Pack core
have
These tests ensure that the basic watch APIs are tested in the rolling
upgrade tests. After initially adding a watch, the tests try to get,
execute, deactivate and activate a watch. Watcher stats are tested as
well, and a separate Java-based test has been added for restarting, as that
requires waiting for a state change. Watcher history is also checked.
Closes #31216
* master:
HLRC: Move commercial clients from XPackClient (#32596)
Add cluster UUID to Cluster Stats API response (#32206)
Security: move User to protocol project (#32367)
[TEST] Test for shard failures, add debug to testProfileMatchesRegular
Minor fix for javadoc (applicable for java 11). (#32573)
Painless: Move Some Lookup Logic to PainlessLookup (#32565)
TEST: Avoid merges in testSeqNoAndCheckpoints
[Rollup] Remove builders from HistoGroupConfig (#32533)
Mutes failing SQL string function tests due to #32589
fixed elements in array of produced terms (#32519)
INGEST: Enable default pipelines (#32286)
Remove cluster state initial customs (#32501)
Mutes LicensingDocumentationIT due to #32580
[ML] Remove multiple_bucket_spans (#32496)
[ML] Rename JobProvider to JobResultsProvider (#32551)
Correct minor typo in explain.asciidoc for HLRC
Build: Add elastic maven to repos used by BuildPlugin (#32549)
Clarify the error message when a pipeline agg is used in the 'order' parameter. (#32522)
Revert "[test] turn on host io cache for opensuse (#32053)"
Enable packaging tests on suse boxes
[ML] Improve error when no available field exists for rule scope (#32550)
[ML] Improve error for functions with limited rule condition support (#32548)
Painless: Clean Up PainlessField (#32525)
Add @AwaitsFix for #32554
Remove broken @link in Javadoc
Scripting: Conditionally use java time api in scripting (#31441)
[ML] Fix thread leak when waiting for job flush (#32196) (#32541)
Add AwaitsFix to failing test - see #32546
Core: Minor size reduction for AbstractComponent (#32509)
SQL: Added support for string manipulating functions with more than one parameter (#32356)
[DOCS] Reloadable Secure Settings (#31713)
Watcher: Reenable HttpSecretsIntegrationTests#testWebhookAction test (#32456)
[Rollup] Remove builders from TermsGroupConfig (#32507)
Use hostname instead of IP with SPNEGO test (#32514)
Switch x-pack rolling restart to new style Requests (#32339)
NETWORKING: Fix Netty Leaks by upgrading to 4.1.28 (#32511)
[DOCS] Small fixes in rule configuration page (#32516)
Painless: Clean up PainlessMethod (#32476)
Build: Remove shadowing from benchmarks (#32475)
Docs: Add all JDKs to CONTRIBUTING.md
Add licensing enforcement for FIPS mode (#32437)
SQL: Add test for handling of partial results (#32474)
Mute testFilterCacheStats
[ML][DOCS] Fix typo applied_to => applies_to
Scripting: Fix painless compiler loader to know about context classes (#32385)
For a new feature like CCR we will go without this extra layer of
indirection. This commit replaces all /_xpack/ccr/_(\S+) endpoints by
/_ccr/$1 endpoints.
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
The User class has been moved to the protocol project for upcoming work
to add more security APIs to the high level rest client. As part of
this change, the toString method no longer uses a custom output method
from MetadataUtils and instead just relies on Java's toString
implementation.
This commit removes the never released multiple_bucket_spans
configuration parameter. This is now replaced with the new
multibucket feature that requires no configuration.
Added support for string manipulating functions with more than one parameter:
CONCAT, LEFT, RIGHT, REPEAT, POSITION, LOCATE, REPLACE, SUBSTRING, INSERT
The error message mentioned in #30094 does not point to a cause in the
test itself, as there are still in-flight requests according to the
circuit breaker.
I ran this test class 100k times on bare metal and could not reproduce
it. I will reenable the test for now.
Closes #30094
While working on adding the Create Rollup Job API to the
high level REST client (#29827), I noticed that the configuration
objects like TermsGroupConfig rely on the Builder pattern in
order to create or parse instances. These builders are doing
some validation but the same validation could be done within
the constructor itself or on the server side when appropriate.
This commit removes the builder for TermsGroupConfig and
removes some other methods that I consider not really useful
once the TermsGroupConfig object is exposed in the
high level REST client. It also simplifies the parsing logic.
Related to #29827
This change updates KerberosAuthenticationIT to resolve the host used
to connect to the test cluster. This is needed because the host could
be an IP address but SPNEGO requires a hostname to work properly. This
is done by adding a hook in ESRestTestCase for building the HttpHost
from the host and port.
Additionally, the project now specifies the IPv4 loopback address as
the http host. This is done because we need to be able to resolve the
address used for the HTTP transport before the node starts up, but the
http.ports file is not written until the node is started.
Closes #32498
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:rolling-upgrade*` projects to use
the new versions.
* Upgrade to `4.1.28` since the problem reported in #32487 is a bug in Netty itself (see https://github.com/netty/netty/issues/7337)
* Fixed other leaks in test code that now showed up due to improvements in leak reporting in the newer version
* Needed to extend permissions for netty common package because it now sets a classloader at runtime after changes in 63bae0956a
* Adjusted forbidden APIs check accordingly
* Closes #32487
This commit adds licensing enforcement for FIPS mode through the use of
a bootstrap check, a node join validator, and a check in the license
service. The work done here is based on the current implementation of
the TLS enforcement with a production license.
The bootstrap check is always enforced since we need to enforce the
licensing and this is the best option to do so at the present time.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM tests suss out the node name from the thread name which works
surprisingly well and easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get form the thread name in the place of the node
name. This allows us to keep the logger format consistent.
This commit adds an assumption to two test methods in
SSLTrustRestrictionsTests that we are not on JDK 11 as the tests
currently fail there.
Relates #29989
This commit removes the Kerberos bootstrap checks as they were more
validation checks and are better done in the Kerberos realm constructor
than as bootstrap checks. This also moves the check
for one Kerberos realm per node to where we initialize realms.
This commit adds a few validations which were missing earlier,
like throwing an exception with an error message when read
permissions are missing on the keytab file or when it is a directory.
Today ShardFollowNodeTask might fetch some operations more than once.
This happens because we ask the leader for up to max_batch_count
operations (instead of the left-over size) for the left-over request.
The leader then can freely respond up to the max_batch_count, and at
the same time, if one of the previous requests completed, we might issue
another read request whose range overlaps with the response of the
left-over request.
Closes #32453
The default behaviour for "GetPrivileges" is to get all application
privileges. This should only be allowed if the user has access to
the "*" application.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin/security` project to use the new
versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/qa/security-example-spi-extension`
project to use the new versions.
* master:
Tests: Fix convert error tests to use fixed value (#32415)
IndicesClusterStateService should replace an init. replica with an init. primary with the same aId (#32374)
REST high-level client: parse back _ignored meta field (#32362)
[CI] Mute DocumentSubsetReaderTests testSearch
Today we do not check if the `following_index` setting of the follower
is enabled or not when processing a follow-request. If that setting is
disabled, the follower will use the default engine, not the following
engine. This change checks and rejects such invalid follow requests.
Relates #30086
* master:
Remove reference to non-existent store type (#32418)
[TEST] Mute failing FlushIT test
Fix ordering of bootstrap checks in docs (#32417)
[TEST] Mute failing InternalEngineTests#testSeqNoAndCheckpoints
[TEST] Mute failing testConvertLongHexError
bump lucene version after backport
Upgrade to Lucene-7.5.0-snapshot-608f0277b0 (#32390)
[Kerberos] Avoid vagrant update on precommit (#32416)
TESTS: Move netty leak detection to paranoid level (#32354)
[DOCS] Fixes formatting of scope object in job resource
Copy missing segment attributes in getSegmentInfo (#32396)
AbstractQueryTestCase should run without type less often (#28936)
INGEST: Fix Deprecation Warning in Script Proc. (#32407)
Switch x-pack/plugin to new style Requests (#32327)
Docs: Correcting a typo in tophits (#32359)
Build: Stop double generating buildSrc pom (#32408)
TEST: Avoid triggering merges in FlushIT
Fix missing JavaDoc for @throws in several places in KerberosTicketValidator.
Switch x-pack full restart to new style Requests (#32294)
Release requests in cors handler (#32364)
Painless: Clean Up PainlessClass Variables (#32380)
Docs: Fix callouts in put license HL REST docs (#32363)
[ML] Consistent pattern for strict/lenient parser names (#32399)
Update update-settings.asciidoc (#31378)
Remove some dead code (#31993)
Introduce index store plugins (#32375)
Rank-Eval: Reduce scope of an unchecked supression
Make sure _forcemerge respects `max_num_segments`. (#32291)
TESTS: Fix Buf Leaks in HttpReadWriteHandlerTests (#32377)
Only enforce password hashing check if FIPS enabled (#32383)
Today it's possible to encounter an Index operation in Lucene whose
_source is disabled, and whose _recovery_source was pruned by the MergePolicy.
If that's the case, we create a Translog#Index without source and let the
caller validate it later. However, this approach is challenging for the
caller.
Deletes and No-Ops don't allow invoking the "source()" method. The caller
has to make sure to call "source()" only on index operations. The
current implementation in CCR does not follow this and fails to replicate
deletes or no-ops. Moreover, it's easier to reason about if a Translog#Index
always has the source.
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The ES setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and its value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow-up.
This commit avoids a compile-time dependency on the copied keytab
being present in the generated sources, so pre-commit does not
stall for updating the vagrant box.
Closes #32387
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack/plugin` project to use the new versions.
In #29623 we added `Request` object flavored requests to the low level
REST client and in #30315 we deprecated the old `performRequest`s. This
changes all calls in the `x-pack:qa:full-cluster-restart` project to use
the new versions.
Previously we had two patterns for naming of strict
and lenient parsers.
Some classes had CONFIG_PARSER and METADATA_PARSER,
and used an enum to pass the parser type to nested
parsers.
Other classes had STRICT_PARSER and LENIENT_PARSER
and used ternary operators to pass the parser type
to nested parsers.
This change makes all ML classes use the second of
the patterns described above.
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.