Currently, stored scripts use a namespace of (lang, id) for put, get, delete, and execute operations. This is unnecessary since the lang is stored with the script itself. A user should only have to specify an id to use a stored script. This change makes that possible while keeping backwards compatibility with the previous (lang, id) namespace. Any use of the previous namespace will log a deprecation warning.
The new behavior is the following:
When a user specifies a stored script, that script will be stored under both the new namespace and old namespace.
Take for example script 'A' with lang 'L0' and data 'D0'. If we add script 'A' to the empty set, the scripts map will be {"A" -- D0, "A#L0" -- D0}. If a script 'A' with lang 'L1' and data 'D1' is then added, the scripts map will be {"A" -- D1, "A#L1" -- D1, "A#L0" -- D0}.
When a user deletes a stored script, that script will be deleted from both the new namespace (if it exists) and the old namespace.
Take for example a scripts map of {"A" -- D1, "A#L1" -- D1, "A#L0" -- D0}. If the script specified by id 'A' and a null lang is removed, the scripts map will be {"A#L0" -- D0}. To remove the final script, the deprecated namespace must be used, so an id 'A' and lang 'L0' would need to be specified.
When a user gets/executes a stored script, if the new namespace is used then the script will be retrieved/executed using only 'id'; if the old namespace is used then the script will be retrieved/executed using both 'id' and 'lang'.
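To make the bookkeeping concrete, here is a minimal Java sketch of the scripts map semantics described above (all names are hypothetical, not the actual cluster-state implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the dual-namespace scripts map; not the real implementation.
class StoredScripts {
    record Script(String lang, String source) {}

    private final Map<String, Script> scripts = new HashMap<>();

    // Puts store the script under both the new (id) and deprecated (id#lang) keys.
    void put(String id, String lang, String source) {
        Script script = new Script(lang, source);
        scripts.put(id, script);              // new namespace
        scripts.put(id + "#" + lang, script); // deprecated namespace
    }

    // A null lang removes the new-namespace entry plus its matching deprecated entry;
    // a lang that doesn't match the new-namespace entry hits the deprecated namespace only.
    void delete(String id, String lang) {
        Script byId = scripts.get(id);
        if (byId != null && (lang == null || lang.equals(byId.lang()))) {
            scripts.remove(id);
            scripts.remove(id + "#" + byId.lang());
        } else if (lang != null) {
            scripts.remove(id + "#" + lang);
        }
    }
}
```

Running the delete example from above through this sketch reproduces the described results: delete('A', null) leaves {"A#L0" -- D0}, and delete('A', 'L0') then empties the map.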
The seq# based recovery logic relies on rolling back Lucene to remove any operations above the global checkpoint. That rollback is not implemented yet, but the recovery logic has to have these guarantees. Instead, the seq# logic should validate that the last commit point (and the only one we have) maintains the invariant, and if it does not, fall back to file-based recovery.
This commit adds a test that creates a situation where rollback is needed (primary failover with ops in flight) and fixes another issue that was surfaced by it: if a primary can't serve a seq# based recovery request and does a file copy, it still used the incoming `startSeqNo` as a filter.
Relates to #22484 & #10708
With the new secure settings, methods like getAsMap() no longer work
correctly as a means of checking for empty settings or the total size.
This change converts the existing uses of that method to use methods
directly on Settings. Note this does not update the implementations to
account for SecureSettings, as that will require a followup which
changes how secure settings work.
* Integrate UnifiedHighlighter
This change integrates the Lucene highlighter called "unified" into the list of supported highlighters for ES.
This highlighter can extract offsets from postings, from term vectors, or by re-analyzing the text.
The best strategy is picked automatically at query time and depends on the field and the query to highlight.
In #22762, settings preparation during bootstrap was changed slightly to
account for SecureSettings, by starting with a fresh settings builder
after reading the initial configuration. However, in doing so the defaults from
system properties were never re-read. This change fixes that bug (which
was never released).
closes #22861
Usually the order in which we serialize sets and maps of things doesn't matter,
but since InnerHitBuilder is part of SearchSourceBuilder, which is in turn used
as a cache key in its bytes serialization, we need to ensure the order of all
these fields when writing them to an output stream.
This adds tests and makes sure we iterate over the scriptField set and the
childInnerHits map in a fixed order.
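As a sketch of the general pattern (using plain java.io here rather than Elasticsearch's StreamOutput), imposing a deterministic iteration order before writing is enough to make the serialized bytes stable:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

final class FixedOrderWriter {
    // Two logically equal maps now always serialize to byte-identical output,
    // which matters when the bytes are used as a cache key.
    static void writeMap(Map<String, String> map, DataOutputStream out) throws IOException {
        Map<String, String> sorted = new TreeMap<>(map); // fixed, sorted iteration order
        out.writeInt(sorted.size());
        for (Map.Entry<String, String> entry : sorted.entrySet()) {
            out.writeUTF(entry.getKey());
            out.writeUTF(entry.getValue());
        }
    }
}
```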
Closes #22808
Also adds many `equals` and `hashCode` implementations and moves
the failure printing in `MatchAssertion` into a common spot and
exposes it over `assertEqualsWithErrorMessageFromXContent` which
does an object equality test but then uses `toXContent` to print
the differences.
Relates to #22278
This moves the building blocks for delete by query into core. This
should enable two things:
1. Plugins other than reindex to implement "bulk by scroll" style
operations.
2. Plugins to directly call delete by query. Those plugins should
be careful to make sure that task cancellation still works, but
this should be possible.
Notes:
1. I've mostly just moved classes and moved around test methods.
2. I haven't been super careful about cohesion between these core
classes and reindex. They are quite interconnected because I wanted
to make the change as mechanical as possible.
Closes #22616
* S3 repository: Add named configurations
This change implements named configurations for s3 repository as
proposed in #22520. The access/secret key secure settings which were
added in #22479 are reverted, and the only secure settings are those
with the new named configs. All other previously used settings for the
connection are deprecated.
closes #22520
This commit introduces sequence-number-based recovery. When a replica
has fallen out of sync, rather than performing a file-based recovery we
first attempt to replay operations since the last local checkpoint on
the replica. To do this, at the start of recovery the replica tells the
primary what its local checkpoint is. The primary will then wait for all
operations between that local checkpoint and the current maximum
sequence number to complete; this is to ensure that there are no gaps in
the operations that will be replayed from the primary to the
replica. This is a best-effort attempt as we currently have no
guarantees on the primary that these operations will be available; if we
are not able to replay all operations in the desired range, we just
fall back to file-based recovery. Later work will strengthen the
guarantees.
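Roughly, the decision on the primary looks like the following sketch (names hypothetical; the availability guarantee is exactly what later work will strengthen):

```java
// Hypothetical sketch of the recovery-mode decision described above.
final class RecoveryPlanner {
    /**
     * Operation-based recovery is only possible if every operation after the
     * replica's local checkpoint is still available on the primary; otherwise
     * we fall back to copying files.
     */
    static boolean useOperationBasedRecovery(long replicaLocalCheckpoint,
                                             long oldestAvailableSeqNo) {
        return replicaLocalCheckpoint + 1 >= oldestAvailableSeqNo;
    }
}
```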
Relates #22484
At this point AbstractSearchAsyncAction is just a base class for the first phase of a search, where we have multiple replicas
for each shard ID. If one of them is not available we move to the next one. Yet, once we have passed that first stage we have to work with
the shards we succeeded on in the initial phase.
Unfortunately, subsequent phases are not fully detached from the initial phase since they are all non-static inner classes.
In future changes the inner classes will be detached so they can be tested in isolation and are simpler to create.
AbstractSearchAsyncAction should be final and should just get a factory for the next phase instead of requiring subclasses, etc.
This commit adds an ElasticsearchException.failureFromXContent() that can be used to parse the result of ElasticsearchException.generateFailureXContent().
Today we cache query results even if the query timed out. This is obviously
problematic since the results are not complete. Yet, the decision whether a query timed
out happens too late to simply not cache the result: if we just threw
an exception, all currently waiting requests with the same request / cache key would
fail with the same exception without the option to access the result or to re-execute.
Instead, this change allows the request to enter the cache but invalidates it immediately.
Concurrent requests might not get executed and instead return the timed-out result, which is not absolutely
correct but very likely what would happen anyway, since identical requests will likely time out as well. As a side effect
we won't hammer the node with concurrent slow searches but rather execute only one of them
and return the briefly cached result.
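A toy sketch of the "admit, then immediately invalidate" idea (hypothetical names; the real request cache is considerably more involved):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration only: concurrent waiters on the same key may still
// be served the (timed-out) entry, but later requests will re-execute.
final class TimeoutAwareCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    void onQueryCompleted(K key, V result, boolean timedOut) {
        cache.put(key, result); // waiters keyed on this entry can complete
        if (timedOut) {
            cache.remove(key);  // but the partial result is not kept around
        }
    }

    V get(K key) {
        return cache.get(key);
    }
}
```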
Closes #22789
The output of the ElasticsearchException.generateThrowableXContent() method can be parsed back by the ElasticsearchException.fromXContent() method.
This commit adds unit tests in the style of the to-and-from-xcontent tests we already have for other parsing methods. It also relaxes the strict parsing of ElasticsearchException.fromXContent() so that it does not throw an exception when custom metadata and headers are parsed, as long as they are either strings or arrays of strings. Every other type is ignored at parsing time.
Some tests verify that all connections have been closed, but due to the
async / concurrent nature of `RemoteClusterConnection` there are situations
where we notify listeners that trigger tests to finish before we actually
closed all connections. The race is very small and has no impact on the
code's correctness. This commit documents and improves the way we close
connections to ensure tests won't fail with false positives.
Closes #22803
This commit removes the search type `dfs_query_and_fetch` without a
replacement. We haven't allowed using this type via REST since 2.x
but still kept it around for no particular reason, and there were no users
complaining about its unavailability. It should now be removed from the
codebase. `query_and_fetch` is still used internally to save a roundtrip
if there is only one shard, but it can't be used via the REST interface.
This is related to #22116. URLRepository requires SocketPermission
connect. This commit introduces a new module called "repository-url"
where URLRepository will reside. With the new module, permissions can
be removed from core.
Add unit tests for `TopHitsAggregator` and convert some snippets in
docs for `top_hits` aggregation to `// CONSOLE`.
Relates to #22278
Relates to #18160
Prior to this change, when the range query was rewritten to an unbounded range (`[* TO *]`) it maintained the timezone and format for the query. This means that queries with different timezones and formats which are rewritten to unbounded range queries actually end up as different entries in the search request cache.
This is inefficient and unnecessary, so this change nulls the timezone and format in the rewritten query so that regardless of the timezone or format the rewritten query will be the same.
Although this does not fix #22412 (since it deals with the WITHIN case rather than the INTERSECTS case) it is born from the same arguments.
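A small sketch of the rewrite (hypothetical, simplified types): once both bounds are open, timezone and format cannot affect the result, so nulling them lets equivalent queries share one cache entry.

```java
// Hypothetical simplified model of the rewrite described above.
record RangeQuery(String field, Object from, Object to, String timeZone, String format) {
    RangeQuery rewrite() {
        if (from == null && to == null) {
            // [* TO *]: timezone and format are irrelevant, so drop them
            return new RangeQuery(field, null, null, null, null);
        }
        return this;
    }
}
```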
5.3.0 is the first version that contains the split from headers to metadata, so the check is updated to reflect that. It was previously set to a later version in order to be able to commit to master first and only then backport the change; otherwise master tests would have failed until the change was backported.
Previously the behavior where the `OpType` byte was serialized was only in
master, but it was recently backported to 5.x, so the serialization version
checks need to be updated as well.
move "es." internal headers to separate metadata set in ElasticsearchException and stop returning them as response headers
Closes #17593
* [TEST] remove ESExceptionTests, move its methods to ElasticsearchExceptionTests or ExceptionSerializationTests
Docker cgroups are mounted in the wrong place (i.e., inconsistently with
/proc/self/cgroup). This commit adds an undocumented hack for working
around this, for now.
Relates #22757
As a follow-up to #22649, this changes the recent tests for parsing parts of search
responses to randomly set the humanReadable() flag of the XContentBuilder that
is used to render the responses. This should help to test that we can parse back
those classes if the user specifies `?human=true` in the request url.
There was a typo in the `ParseField` declaration. I know
we want to port these parsers to `ObjectParser` eventually
but I don't have the energy for that today and want to get
this fixed.
Closes #22722
* Add top hits collapsing to search request
The field collapsing is done with a custom top docs collector that collapses search hits with the same field value.
The distributed aspect is resolved using the two passes that the regular search uses. The first pass collapses the top hits, then the coordinating node merges/collapses the top hits from each shard.
```
GET _search
{
"collapse": {
"field": "category",
}
}
```
This change also adds an ExpandCollapseSearchResponseListener that intercepts the search response and expands collapsed hits using the CollapseBuilder#innerHit options.
The retrieval of each inner_hits is done by sending a query to all shards filtered by the collapse key.
```
GET _search
{
"collapse": {
"field": "category",
"inner_hits": {
"size": 2
}
}
}
```
To effectively allow a plugin to intercept a transport handler, it needs
to know whether the handler must be executed even if there is a rejection on the
thread pool, in case the wrapper forks a thread to execute the actual handler.
This commit adds a SpecialPermission constant and uses that constant
as opposed to introducing new instances everywhere.
Additionally, this commit introduces a single static method to check that
the current code has permission. This avoids all the duplicated access
blocks that exist currently.
Today `InternalEngine#innerIndex` is a pretty big method (> 150 SLoC). This
commit merges `#index` and `#innerIndex` and splits the result up into smaller
contained methods.
* Upgrade to Lucene 6.4.0
`ValueSource`s are now converted to `DoubleValuesSource`s using the Lucene adapter made for the migration to the new API in 6.4.0.
Introduce CheckedSupplier and CheckedRunnable functional interfaces
into core. These offer a checked version of the Supplier and Runnable
interfaces for use with lambda APIs.
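The shape of the two interfaces (simplified here) and the kind of call site they enable:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Simplified versions of the two interfaces: like Supplier/Runnable, but the
// functional method may throw a checked exception.
interface CheckedSupplier<R, E extends Exception> {
    R get() throws E;
}

interface CheckedRunnable<E extends Exception> {
    void run() throws E;
}

class CheckedDemo {
    // Accepts a lambda that may throw IOException, which java.util.function.Supplier forbids.
    static <R> R withIo(CheckedSupplier<R, IOException> supplier) {
        try {
            return supplier.get();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String value = withIo(() -> "pretend this came from a file read");
        System.out.println(value);
    }
}
```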
This commit ensures that the index.latest blob is first examined to
determine the latest index-N blob id, before attempting to list all
index-N blobs and picking the blob with the highest N.
It also fixes MockRepository#move so that tests are able to handle
non-atomic moves. This is done by adding a special setting to the
MockRepository that requires the test to specify if it can handle
non-atomic moves. If so, then the MockRepository#move operation will be
non-atomic to allow testing against such repositories.
The weight factor function does not check if the delegate score function needs to access the score of the query.
This results in a _score equal to 0 for all score functions that set a weight.
This change modifies WeightFactorFunction#needsScore to delegate the call to its underlying score function.
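In sketch form (simplified types, not the actual function_score classes), the fix is a one-line delegation:

```java
// Simplified sketch of the delegation described above.
interface ScoreFunction {
    boolean needsScore();
}

class WeightFactorFunction implements ScoreFunction {
    private final ScoreFunction scoreFunction; // the wrapped delegate

    WeightFactorFunction(ScoreFunction scoreFunction) {
        this.scoreFunction = scoreFunction;
    }

    @Override
    public boolean needsScore() {
        // delegate instead of answering for itself, so wrapped functions
        // that read the query score keep receiving it
        return scoreFunction.needsScore();
    }
}
```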
Fix #21483
There are presently 7 ctor args used across the rest handlers:
* `Settings`: Every handler uses it to initialize a logger and
some other strange things.
* `RestController`: Every handler registers itself with it.
* `ClusterSettings`: Used by `RestClusterGetSettingsAction` to
render the default values for cluster settings.
* `IndexScopedSettings`: Used by `RestGetSettingsAction` to get
the default values for index settings.
* `SettingsFilter`: Used by a few handlers to filter returned
settings so we don't expose stuff like passwords.
* `IndexNameExpressionResolver`: Used by `_cat/indices` to
filter the list of indices.
* `Supplier<DiscoveryNodes>`: Used to fill enrich the response
by handlers that list tasks.
We probably want to reduce these arguments over time but
switching construction away from guice gives us tighter
control over the list of available arguments.
These parameters are passed to plugins using
`ActionPlugin#initRestHandlers`, which is expected to build and
return the handlers immediately. This felt simpler than
returning a reference to the ctors given all the different
possible args.
Breaks java plugins by moving rest handlers off of guice.
Today we try to be smart and make a generic decision about whether an exception should
be treated as a document failure, but in some cases concurrency in the index writer
makes this decision very difficult, since we don't have a consistent state when
another thread is currently failing the IndexWriter/InternalEngine due to a tragic event.
This change simplifies the exception handling and makes specific decisions about document failures
rather than using a generic heuristic. This prevents exceptions from being treated as document failures
when they should have failed the engine but backed out of failing because some other thread had
already taken over the failure procedure but had not finished yet.
Similar to the Filters aggregation but only supports "keyed" filter buckets and automatically "ANDs" pairs of filters to produce a form of adjacency matrix.
The intersection of buckets "A" and "B" is named "A&B" (the choice of separator is configurable). Empty intersection buckets are removed from the final results.
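As a toy illustration of the bucketing rule (not the aggregator implementation), ANDing every pair of named predicates and dropping empty intersections looks like this:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

final class AdjacencyMatrixDemo {
    // Counts docs matching each pairwise intersection; empty buckets are dropped.
    static <T> Map<String, Long> intersections(Map<String, Predicate<T>> filters,
                                               List<T> docs, String separator) {
        Map<String, Long> buckets = new LinkedHashMap<>();
        List<String> names = List.copyOf(filters.keySet());
        for (int i = 0; i < names.size(); i++) {
            for (int j = i + 1; j < names.size(); j++) {
                Predicate<T> both = filters.get(names.get(i)).and(filters.get(names.get(j)));
                long count = docs.stream().filter(both).count();
                if (count > 0) { // empty intersection buckets are removed
                    buckets.put(names.get(i) + separator + names.get(j), count);
                }
            }
        }
        return buckets;
    }
}
```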
Closes #22169
This is related to #22116. Core no longer needs SocketPermission
accept. This permission is relegated to the transport-netty4 module
and (for tests) to the mocksocket jar.
Those services validate their settings before submitting an AckedClusterStateUpdateTask to the cluster state service. An acked cluster state may be completed by a networking thread when the last ack is received. As such it needs special care to make sure that thread context headers are handled correctly.
This commit fixes an issue with deprecation logging for lenient
booleans. The underlying issue is that adding deprecation logging for
lenient booleans added a static deprecation logger to the Settings
class. However, the Settings class is initialized very early and in CLI
tools can be initialized before logging is initialized. This leads to
status logger error messages. Additionally, the deprecation logging for
a lot of the settings does not provide useful context (for example, in
the token filter factories, the deprecation logging only produces the
name of the setting, but gives no context which token filter factory it
comes from). This commit addresses both of these issues by changing the
call sites to push a deprecation logger through to the lenient boolean
parsing.
Relates #22696
In preparation for being able to parse SearchResponse from its rest representation
for the java rest client, this adds fromXContent to SearchProfileShardResults and its
nested classes.
Adds unit tests for the `filters` aggregation.
This change also adds a helper to search and reduce any aggregator in a unit test.
This is done by dividing a single searcher into sub-searchers, one for each segment.
Relates #22278
* Fix NPE on FieldStats with mixed cluster on version pre/post 5.2
In 5.2 the FieldStats API can return null min/max values.
These values cannot be deserialized by a node with version pre 5.2, so if such a node
is picked to coordinate a FieldStats request in a mixed cluster an NPE can be thrown.
This change prevents the NPE by removing the non-serializable FieldStats objects directly in the field stats shard request.
The filtered fields will not be present in the response when a node pre 5.2 acts as a coordinating node.
This change is a simple adaptation of https://github.com/elastic/elasticsearch/pull/19587 for the current state of master.
It allows defining search response listeners in the form of `BiConsumer<SearchRequest, SearchResponse>`s in a search plugin.
This PR removes all leniency in the conversion of Strings to booleans: "true"
is converted to the boolean value `true`, "false" is converted to the boolean
value `false`. Everything else raises an error.
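The strict rule, as a standalone sketch (the actual parser in Elasticsearch lives elsewhere):

```java
final class StrictBooleans {
    // "true" and "false" are the only accepted values; everything else is an error.
    static boolean parseStrict(String value) {
        if ("true".equals(value)) {
            return true;
        }
        if ("false".equals(value)) {
            return false;
        }
        throw new IllegalArgumentException(
            "only [true] or [false] are allowed, found [" + value + "]");
    }
}
```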
Changes the error message when `action.auto_create_index` or
`index.mapper.dynamic` forbids automatic creation of an index
from `no such index` to one of:
* `no such index and [action.auto_create_index] is [false]`
* `no such index and [index.mapper.dynamic] is [false]`
* `no such index and [action.auto_create_index] contains [-<pattern>] which forbids automatic creation of the index`
* `no such index and [action.auto_create_index] ([all patterns]) doesn't match`
This should make it more clear *why* there is `no such index`.
Closes #22435
testAckedIndexing now waits for all nodes to stabilize in the cluster
state through an assertBusy before the final validation that all documents
are found in their respective shards in the cluster. Before, the ensureGreen
check could pass, and only after that would a ping failure from the network
disruption be processed by the master, thereby rendering the cluster RED
again. This assertBusy waits up to 30 seconds for all nodes to have
stabilized and all get document actions to succeed.
Today we do not preserve response headers if they are present on a transport protocol
response. While preserving these headers is not always desired, in most cases we
should pass on these headers to have consistent results for deprecation headers etc.
Yet, this hasn't been much of a problem since most of the deprecations are detected early,
i.e. on the coordinating node, such that this bug wasn't uncovered until #22647.
This commit allows optionally preserving headers when a context is restored and also streamlines
the context restore, since it frequently leaked into the caller's thread context when the caller's
context wasn't restored again.
Relates to #22024
On top of documentation, the PR adds deprecation loggers and deals with the resulting warning headers.
The yaml test is set to exclude versions up to 6.0. This is needed to make sure bwc tests pass until this is backported to 5.2.0. Once that's done, I will change the yaml test version limits.
This change makes it possible for custom routing values to go to a subset of shards rather than
just a single shard. This enables the ability to utilize the spatial locality that custom routing can
provide while mitigating the likelihood of ending up with an imbalanced cluster or suffering
from a hot shard.
This is ideal for large multi-tenant indices with custom routing that suffer from one or both of
the following:
- The big tenants cannot fit into a single shard or there are so many of them that they will likely
end up on the same shard
- Tenants often have a surge in write traffic and a single shard cannot process it fast enough
Beyond that, this should also be useful for use cases where most queries are done under the context
of a specific field (e.g. a category) since it gives a hint at how the data can be stored to minimize
the number of shards to check per query. While a similar solution can be achieved with multiple
concrete indices or aliases per value today, those approaches break down for high cardinality fields.
A partitioned index enforces that mappings have routing required, that the partition size does not
change when shrinking an index (the partitions will shrink proportionally), and rejects mappings
that have parent/child relationships.
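Conceptually (a rough sketch; the real implementation uses a murmur3-based routing hash), the routing value now picks a partition of shards and the document id picks the shard within it:

```java
final class PartitionedRouting {
    // Sketch of shard selection for a partitioned index: the routing value
    // selects a partition, the document id selects a shard within it.
    static int shardId(String routing, String id, int partitionSize, int numPrimaryShards) {
        int offset = Math.floorMod(hash(id), partitionSize);
        return Math.floorMod(hash(routing) + offset, numPrimaryShards);
    }

    static int hash(String value) {
        return value.hashCode(); // placeholder for the real routing hash function
    }
}
```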
Closes #21585
Instead of forcing each task to register all nodes where its children are running, this commit runs cancellation on all nodes. The task cancellation operation doesn't run too frequently, so this optimization doesn't seem to be worth the additional complexity of the interface.
Previously, certain settings that could take multiple comma delimited
values would pick up incorrect values for all entries but the first if
each comma separated value was followed by a whitespace character. For
example, the multi-value "A,B,C" would be correctly parsed as
["A", "B", "C"] but the multi-value "A, B, C" would be incorrectly parsed
as ["A", " B", " C"].
This commit allows a comma separated list to have whitespace characters
after each entry. The specific settings that were affected by this are:
cluster.routing.allocation.awareness.attributes
index.routing.allocation.require.*
index.routing.allocation.include.*
index.routing.allocation.exclude.*
cluster.routing.allocation.require.*
cluster.routing.allocation.include.*
cluster.routing.allocation.exclude.*
http.cors.allow-methods
http.cors.allow-headers
For the allocation filtering related settings, this commit also provides
validation of each specified entry if the filtering is done by _ip,
_host_ip, or _publish_ip, to ensure that each entry is a valid IP
address.
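The essential fix, sketched: trim whitespace after splitting, so "A, B, C" parses the same as "A,B,C".

```java
import java.util.Arrays;
import java.util.List;

final class SettingsSplitter {
    // "A, B, C" and "A,B,C" both yield [A, B, C].
    static List<String> splitValues(String value) {
        return Arrays.stream(value.split(","))
                     .map(String::trim)
                     .toList();
    }
}
```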
Closes #22297
This commit tries to simplify the way ElasticsearchExceptions are rendered to xcontent. It adds some documentation and renames and merges some methods. Current behavior is preserved; the goal is to be more readable and centralize everything in the ElasticsearchException class.
`EngineClosedException` is an ES-level exception that is used to indicate that the engine is closed when an operation starts. It doesn't really add much value and we can use `AlreadyClosedException` from Lucene (which may already bubble up if things go wrong during operations). Having two exceptions can just add confusion and lead to bugs, like wrong handling of `EngineClosedException` when dealing with document-level failures. The latter was exposed by `IndexWithShadowReplicasIT`.
This PR also removes the AwaitFix from the `IndexWithShadowReplicasIT` tests (which was what caused this to be discovered). While debugging the source of the issue I found some mismatches in document uid management in the tests. The term that was passed to the engine didn't correspond to the uid in the parsed doc - those are fixed as well.
Today we have quite a few abstractions that essentially provide a simple
dispatch method to the plugins defining a `HttpServerTransport`. This commit
removes `HttpServer` and `HttpServerAdaptor` and introduces a simple `Dispatcher` functional
interface that delegates to `RestController` by default.
Relates to #18482
#22025 deprecated this setting (pending its removal) but its frequent usage will spam the deprecation logs and also fails tests. As a temporary workaround we should not use the setting object directly.
Currently both ProfileResult and CollectorResult print the time field in a human readable string format
(e.g. "time": "55.20315000ms"). When trying to parse this back to a long value, for example to use in
the planned high level java rest client, we can lose precision because of conversion and rounding issues.
This change adds an additional field (`time_in_nanos`) to the profile response to be able to get the
original time value in nanoseconds back.
The old `time` field is only printed when the `?human=true` flag in the url is set. This follows the behaviour of
all other stats-related APIs. The format of the `time` field is also slightly changed. Instead of always formatting
the output as a 10-digit ms value, by using the `XContentBuilder#timeValueField()` method we now print
the value using the largest time unit present (e.g. "s", "ms", "micros").
An operation that completed successfully on a primary can result in a
version conflict on a replica due to the asynchronous nature of
operations. When a replica operation results in a version conflict, the
operation is not added to the translog. This leads to gaps in the
translog which is problematic as it can lead to situations where a
replica shard can never advance its local checkpoint. As such operations
are just the normal course of business for a replica shard, they
should be treated as if they completed successfully. This commit adds
these operations to the translog.
Relates #22626
For certain situations, end-users need the base path for Elasticsearch
logs. Exposing this as a property is better than hard-coding the path
into the logging configuration file as otherwise the logging
configuration file could easily diverge from the Elasticsearch
configuration file. Additionally, Elasticsearch will only have
permissions to write to the log directory configured in the
Elasticsearch configuration file. This commit adds a property that
exposes this base path.
One use-case for this is configuring a rollover strategy to retain logs
for a certain period of time. As such, we add an example of this to the
documentation.
Additionally, we expose the property es.logs.cluster_name as this is
used as the name of the log files in the default configuration.
Finally, we expose es.logs.node_name in cases where node.name is
explicitly set in case users want to include the node name as part of
the name of the log files.
Relates #22625
When logger.level is set, we end up configuring a logger named "level"
because we look for all settings of the form "logger\..+" as configuring
a logger. Yet, logger.level is special and is meant to only configure
the default logging level. This commit changes this so that we avoid
configuring a logger named level.
Relates #22624
The IndexingOperationListener interface did not provide any
information about the shard id when a document was indexed.
This commit adds the shard id as the first parameter to all methods
in the IndexingOperationListener.
This commit is a simple cleanup of the code related to cgroup stats:
- reduce visibility of a method
- remove an unneeded logger guard
- cleanup the formatting of comments
TransportInterceptors are commonly used to enrich requests with headers etc.,
which requires access to the thread context. This is not always easily possible
since threadpools are hard to access, for instance if the interceptor is used on a transport client.
This commit passes on the thread context to all the interceptors for further consumption.
Closes #22585
Deleting indices is an important event in a cluster and as such should
be logged at the info level. This commit changes the logging level on
index deletion to the info level.
Relates #22627
We have made the security manager non-optional, but the Javadocs for
Security.java imply that it still is. This commit fixes this issue.
Relates #16176
ClusterService and TransportService expect the local discovery node to be set
before they are started, but this requires manual interaction and is error prone, since
to work absolutely correctly they should share the same instance (same ephemeral ID).
TransportService also has 2 modes of operation, mainly related to transport client vs. internal
to a node. This change removes the mode where we don't maintain a local node and uses a dummy local
node in the transport client since we don't bind to any port in such a case.
Local discovery node instances are now managed by the node itself, and only suppliers and factories
that allow one-time creation are passed to TransportService and ClusterService.
There was still a small race in MockTcpTransport where channels that are concurrently
closing are not yet removed from the reference tracking, causing tests to fail. Compared to
the other races before, this is a rather small window and requires very short test durations.
```sh
$ bin/elasticsearch-keystore create
Created elasticsearch keystore in /Users/dpilato/Documents/Elasticsearch/apps/elasticsearch/elasticsearch-6.0.0-alpha1/config
$ bin/elasticsearch-keystore add
Enter value for null: xyz
Exception in thread "main" java.lang.NullPointerException: invalid null input
at java.security.KeyStore.setEntry(KeyStore.java:1552)
at org.elasticsearch.common.settings.KeyStoreWrapper.setString(KeyStoreWrapper.java:264)
at org.elasticsearch.common.settings.AddStringKeyStoreCommand.execute(AddStringKeyStoreCommand.java:83)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:58)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:69)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.Command.main(Command.java:88)
at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:39)
```
Today there are several races / holes in TcpTransport and MockTcpTransport
that can allow connections to be opened and remain unclosed while the actual
transport implementation is closed. Recently added assertions in #22554 expose
these problems. This commit fixes several issues related to missed locks and channel
creations outside of a lock that didn't check if the resource is still open.
This change disables the _all meta field by default.
Now that we have the "all-fields" method of query execution, we can save both
indexing time and disk space by disabling it.
_all can no longer be configured for indices created after 6.0.
Relates to #20925 and #21341
Resolves #19784
TcpTransport has an actual mechanism to stop resources in subclasses.
Instead of overriding `doStop` subclasses should override `stopInternal`
that is executed under the connection lock guaranteeing that there is no
concurrency etc.
Relates to #22554
* Settings: Make s3 repository sensitive settings use secure settings
This change converts repository-s3 to use the new secure settings. In
order to support the multiple ways we allow aws creds to be configured,
it also moves the main methods for the keystore wrapper into a
SecureSettings interface, in order to allow settings prefixing to work.
* Fix Translog.Delete serialization for sequence numbers
Translog.Delete used `.writeVLong` instead of `.writeLong` for the sequence
number and primary term (and their respective "read" variants). This could lead
to issues where a 5.x node sent a translog operation with a negative sequence
number (-2 for unassigned seq no) that tripped an assertion serializing a
negative number, causing ES to exit.
Adds a unit test for serialization and a mixed-cluster REST test, since that was
how this was originally caught.
* Use more realistic values for random seqNum and primary term
* Add comment with TODO for removal in 7.0
* Change comment into an assert
The low level TCP handshake can cause channel / connection leaks if it's interrupted
since the caller doesn't close the channel / connection if the handshake was not successful.
This commit fixes the channel leak and adds general test infrastructure to detect channel leaks
in the future.
Instead of `search.remote.seeds.${clustername}` we now specify the seeds as:
`search.remote.${clustername}.seeds` which is a real list setting compared to an unvalidated
group setting before.
Today affix settings are not dynamic since it is required to know
their namespace in order to pull a concrete setting from them. This is not possible
in practice since the namespaces are dynamic by design. This change allows registering
a specialized settings consumer that consumes the namespace and the actual value if
a setting gets updated.
Moves fetching the local node id into `NodeClient` which is a
fairly useful place to put it so you can generate task ids from
`NodeClient#executeLocally`.
This commit adds the parsing fromXContent() methods to the IndexResponse class. The method is based on an ObjectParser because it is easier to use when parsing parent abstract classes like DocWriteResponse.
It also changes the ReplicationResponse.ShardInfo so that it now implements ToXContentObject. This way, the ShardInfo.fromXContent() method can be used by the IndexResponse's ObjectParser.
Previously, we removed all unneeded backward compatibility logic
from the BlobStoreRepository because 6.0 does not need to support
2.x snapshot formats. During the process of removing this backward
compatibility logic, some code was leftover that is no longer
necessary. This commit removes all the remaining unnecessary
backwards compatibility code in BlobStoreRepository.
It is no longer needed. It used to contain a lot of strings
used by serialization but those have since been removed. Now
it is just another thing to pass around that we don't really
need.
Affix settings are useful to namespace a certain setting. Yet, affix settings
must be specialized for their concrete type, which causes a lot of code duplication.
This commit allows reusing an existing setting with an affix setting as soon as
a concrete key is available.
One needs to close the higher level objects (like UnicastZenPing) before closing the transport service. The latter can trip assertions w.r.t. open connections.
This adds methods to parse InternalSearchHit and InternalSearchHits from their
xContent representation. Most of the information in the original object is
preserved when rendering the object to xContent and then parsing it back.
However, some pieces of information are lost which we currently cannot parse
back from the rest response, most notably:
* the "match" property of the lucene explanation is not rendered in the
"_explain" section and cannot be reconstructed on the client side
* the original "shard" information (SearchShardTarget) is only rendered if the
"explanation" is also set, also we loose the indexUUID of the contained
ShardId because we don't write it out. As a replacement we can use
ClusterState.UNKNOWN_UUID on the receiving side
The NodeConnectionsService currently determines which nodes to connect to / disconnect from by inspecting cluster state changes and connecting to added nodes / disconnecting from removed nodes. However, when a master steps down (for example due to another master-eligible node shutting down, which brings the number of master-eligible nodes below minimum_master_nodes) and the connection to other existing nodes was dropped while pinging, the connection to these nodes is not re-established while publishing the first cluster state that establishes the node as master.
This commit changes the NodeConnectionsService connect / disconnect logic to always rely on the state that is to be / was published, looking not only at the added / removed nodes but also validating that all nodes currently registered in NodeConnectionsService are connected (a NOOP if the node is already connected).
The document in the randomized GetResult can exist with no source (for example if the _source was disabled in the mappings), which is why the test should not always expect a non-null source when the doc exists.
* Promote longs to doubles when a terms agg mixes decimal and non-decimal numbers
This change makes the terms aggregation work when the buckets coming from different indices are a mix of decimal and non-decimal numbers. In this case, non-decimal numbers (longs) are promoted to decimal (doubles), which can result in a loss of precision for big numbers.
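The precision caveat is easy to demonstrate: two distinct long keys can collide once promoted to double.

```java
public class PromotionDemo {
    public static void main(String[] args) {
        long a = Long.MAX_VALUE;
        long b = Long.MAX_VALUE - 100; // a different bucket key
        System.out.println(a == b);                   // false
        System.out.println((double) a == (double) b); // true: the keys collide after promotion
    }
}
```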
Fixes #22232
There is a bug in the error message that is thrown if the number of docs differs between the source and target shards when recovering a shard with a syncId. The source and target doc counts are swapped around.
Closes #21893