The 'Setting' constructor had some outdated Javadoc suggesting that it would automatically apply 'Property.NodeScope' if no scope is supplied, but no scope is added in that case.
We still maintain BWC for the translog operations back to 1.1, which is not
supported in the current version anyway. This commit drops that BWC and moves
the operations to the Writeable interface, enforcing immutability.
Closed indices are already displayed when no indices are explicitly selected. This commit ensures that closed indices are also shown when wildcard filtering is used.

It also addresses another issue caused by the fact that the cat action is internally based on three different cluster states (one when we query the cluster state to get all indices, one when we query cluster health, and one when we query indices stats). We currently fail the cat request when the user specifies a concrete index as a parameter that does not exist. The implementation works as intended in that regard. It checks this not only for the first cluster state request, but also for the subsequent indices stats one. This means that if the index is deleted before the cat action has queried the indices stats, it rightfully fails. If the user provides wildcards (or no parameter at all), however, we fail the indices stats because we pass the resolved concrete indices to the indices stats request and cannot distinguish whether these indices were resolved by wildcards or explicitly requested by the user. This means that if an index has been deleted before the indices stats request executes, we fail the overall cat request.

The fix is to let the indices stats request do the resolving again and not pass the concrete indices.
Closes #16419
Closes #17395
SnapshotInfo had a toXContent and an externalToXContent, the former for
writing snapshot info to the snapshot blob and the latter for writing the
snapshot info to the APIs. This commit unifies writing x-content to one
method, toXContent, which distinguishes which format to write the
snapshot info in based on the Params parameter. In addition, it makes
use of the already existing snapshot specific params found in the
BlobStoreFormat.
Closes #18494
Significant changes:
* AbstractQueryTestCase has moved to the test framework module, in order for query builder tests to be written in modules and plugins
* Added support to AbstractQueryTestCase to register plugins
* Lift the restriction that only one percolator could be added per index. This validation existed in MapperService, but because the percolator moved to a module it could no longer live there, and instead of bringing it back it was removed. The validation existed because the percolator cache only supported one percolator query per document; since the percolator cache has been removed, this restriction could be removed as well.
* While moving percolator tests to the new module, also removed a couple of tests for the deprecated percolate and mpercolate APIs. These APIs are now sugar APIs for bwc and redirect to the search and msearch APIs. Some tests were still testing as if the percolate and mpercolate APIs did the percolation, but this is no longer the case, so these tests could be removed.
Named queries have a performance bug when they are used with expensive queries
that need to perform a lot of work up-front like fuzzy or range queries
(including with points). The reason is that they currently re-create the weight
and scorer for every hit. Instead we should create weights exactly once and
use a single Scorer for all documents that are on the same segment.
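To illustrate the intended pattern, here is a minimal, hedged sketch assuming recent Lucene APIs; the class and its use are illustrative, not the actual implementation:

```java
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;

/** Illustrative matcher: one Weight for the query, one Scorer per segment. */
final class NamedQueryMatcher {
    private final Weight weight;            // created exactly once
    private LeafReaderContext currentContext;
    private DocIdSetIterator iterator;      // reused for all hits in a segment

    NamedQueryMatcher(IndexSearcher searcher, Query namedQuery) throws IOException {
        this.weight = searcher.createWeight(
            searcher.rewrite(namedQuery), ScoreMode.COMPLETE_NO_SCORES, 1f);
    }

    /** Hits must be fed in segment and doc id order so that advance() is legal. */
    boolean matches(LeafReaderContext context, int doc) throws IOException {
        if (context != currentContext) {    // new segment: one scorer, not one per hit
            currentContext = context;
            Scorer scorer = weight.scorer(context);
            iterator = scorer == null ? null : scorer.iterator();
        }
        if (iterator == null) {
            return false;                   // query matches nothing in this segment
        }
        if (iterator.docID() < doc) {
            iterator.advance(doc);
        }
        return iterator.docID() == doc;
    }
}
```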
Before the query extraction would have been aborted and the percolator query would be marked as unknown.
This resulted in a situation that these queries always need to be evaluated by the memory index at search time.
By adding support for this query, many more percolator query candidate hits can skip the expensive memory index verification step. For example, the `match` query parser returns a MatchNoDocsQuery if the query terms are removed by text analysis (e.g. if the query text only contained stop words).
Remove the arbitrary limit on epoch_millis and epoch_seconds of 13 and 10
characters, respectively. Instead allow any character combination that can
be converted to a Java Long.
Update the docs to reflect this change.
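A hedged sketch of the relaxed parsing (the method name is hypothetical; the point is simply that anything Long.parseLong accepts is now valid):

```java
// Hypothetical sketch: no more 10/13 character cap, just Java long semantics.
static long parseEpoch(String text) {
    try {
        return Long.parseLong(text);  // accepts any value that fits in a Java long
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("invalid epoch value [" + text + "]", e);
    }
}
```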
This commit is a slight refactoring of the use of environment variables
in replacing property placeholders. In commit
115f983827 the constructor for
Settings.Builder was made package visible to provide a hook for tests to
mock obtaining environment variables. But we do not need to go that far
and can instead provide a small hook for this for tests without opening
up the constructor. Thus, in this commit we refactor
Settings.Builder#replacePropertyPlaceholders to a package-visible method
that accepts a function providing environment variables by names. The
public-visible method just delegates to this method passing in
System::getenv and tests can use the package-visible method to mock the
behavior they need without relying on external environment variables.
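A minimal sketch of the shape of this hook, assuming a heavily simplified placeholder syntax (the real Settings.Builder does considerably more):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class SettingsBuilderSketch {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");
    private final Map<String, String> settings = new HashMap<>();

    /** Public API: resolves ${VAR} placeholders against the real environment. */
    public SettingsBuilderSketch replacePropertyPlaceholders() {
        return replacePropertyPlaceholders(System::getenv);
    }

    /** Package-visible hook: tests pass a fake environment function here. */
    SettingsBuilderSketch replacePropertyPlaceholders(Function<String, String> env) {
        settings.replaceAll((key, value) -> {
            Matcher m = PLACEHOLDER.matcher(value);
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                String replacement = env.apply(m.group(1));
                // leave the placeholder untouched if the variable is unset
                m.appendReplacement(sb, Matcher.quoteReplacement(
                    replacement != null ? replacement : m.group()));
            }
            m.appendTail(sb);
            return sb.toString();
        });
        return this;
    }
}
```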
This change makes ES compile with java9 again, build 118.
* There are a handful of changes due to the compiler failing to determine types during compilation.
* The attachment plugins which use tika needed to have tika upgraded in order to pick up fixes there for java 9.
* azure discovery and s3 repository indirectly depend on jaxb, which is no longer in the default modules. They now add a jaxb dependency externally, and make JarHell allow for this package.
This commit modifies the settings test for environment variables
placeholders so that it is reproducible. The underlying issue is that
the set of environment variables from system to system can vary, and
this means that we can never be sure that a failing test will be
reproducible. This commit simplifies this test to not rely on external
forces that could influence reproducibility.
Relates #18501
This removes the ScriptMode class entirely, which was an enum with two
options (ON and OFF) which essentially boiled down to true and false.
Now the boolean values are used instead.
We write to Netty channels in an async fashion, but notify listeners via
a transport service adapter before we are certain that the channel write
succeeded. In particular, the tracer logs are implemented via a
transport service adapter and this means that we can write tracer logs
before a write was successful and in some cases the write might fail
leading to misleading logs. This commit attaches the transport service
adapters to channel writes as a listener so that the notification occurs
only after a successful write.
Relates #18500
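A hedged sketch of the pattern, assuming Netty 4 (the adapter interface is hypothetical):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;

/** Hypothetical stand-in for the transport service adapter. */
interface SentListenerSketch {
    void onRequestSent(long requestId);
    void onSendFailure(long requestId, Throwable cause);
}

final class TracedChannelWriter {
    /** Notify the adapter only once the async write has actually completed. */
    static void write(Channel channel, Object message, long requestId,
                      SentListenerSketch adapter) {
        channel.writeAndFlush(message).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                adapter.onRequestSent(requestId);   // tracer log is now truthful
            } else {
                adapter.onSendFailure(requestId, future.cause());
            }
        });
    }
}
```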
Today if a shard fails during the initialization phase due to misconfiguration, broken disks,
missing analyzers, not installed plugins etc., Elasticsearch keeps trying to initialize
or rather allocate that shard. In the worst case, this ends in an endless
allocation loop. To prevent this loop and all its side effects, like spamming the log files over
and over again, this commit adds an allocation decider that stops allocating a shard that
failed to allocate more than N times in a row. The number of retries can be configured via
`index.allocation.max_retry`, and its default is set to `5`. Once the setting is updated,
shards with fewer failures than the configured number will be allowed to allocate again.
Internally, we maintain a counter on the UnassignedInfo that is reset to `0` once the shard
has been started.
Relates to #18417
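A hedged sketch of the decider's core check (names are hypothetical; the real decider plugs into the allocation framework and reads the setting per index):

```java
/** Hypothetical sketch of the max-retry allocation check. */
final class MaxRetryDeciderSketch {
    enum Decision { YES, NO }

    /** Allow allocation only while the consecutive failure counter,
     *  kept on the shard's UnassignedInfo, stays below the limit
     *  configured via `index.allocation.max_retry` (default 5). */
    static Decision canAllocate(int numFailedAllocations, int maxRetries) {
        return numFailedAllocations >= maxRetries ? Decision.NO : Decision.YES;
    }
}
```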
Today when sending a REST error to a client, we send the decoded
path. But decoding that path can already be the cause of the error in
which case decoding it again will just throw an exception leading to us
never sending an error back to the client. It would be better to send
the entire raw path to the client and that is what we do in this commit.
Relates #18477
When a snapshot initialization fails, the create snapshot method may return before the snapshot metadata in the cluster state is removed. This can cause follow up snapshot-API related calls to fail due to a snapshot still running. This is causing CI failures when we try to delete indices that were participating in failed snapshot to a read-only repository.
Closes #18121
This test fails spuriously in CI and is not reproducible locally.
With this commit we temporarily increase the log level in a few
packages that are suspected to reveal the cause.
This commit fixes a test bug in the scaling thread pool configuration
test. In particular, the test randomization could select min and max for
a thread pool configuration where both are equal to zero. This is a
violation of the requirements of the ThreadPoolExecutor. With this
commit, we now ensure that the max is bounded below by one.
Before 5.0 it was required that the percolator queries were cached in jvm heap as Lucene queries for two reasons:
1) Performance. The percolator evaluated all percolator queries all the time. There was no pre-selection of queries that are likely to match like we have today.
2) Updates made to percolator queries were visible in realtime. Today these changes are visible in near realtime, so updating no longer requires the percolator to have the queries in jvm heap.
So keeping the percolator queries in jvm heap via the percolator cache is now less attractive, especially since many percolator queries can consume many GBs of jvm heap.
Removing the percolator cache does make the percolate query slower compared to its execution time in 5.0.0-alpha1 and alpha2, but it is still faster compared to 2.x and before.
Currently the query builders expose the clauses of the span
query as a modifiable list. Instead we should make that
getter return an unmodifiable list. This also renames the method
used to add a clause from `clause(spanQuery)` to
`addClause(spanQuery)`.
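A minimal sketch of the resulting shape (class and clause types are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class SpanQueryBuilderSketch {
    private final List<Object> clauses = new ArrayList<>();

    /** Renamed from clause(spanQuery): the only way to mutate the clause list. */
    SpanQueryBuilderSketch addClause(Object spanQuery) {
        clauses.add(spanQuery);
        return this;
    }

    /** Callers can read the clauses but can no longer modify them. */
    List<Object> clauses() {
        return Collections.unmodifiableList(clauses);
    }
}
```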
#18360 introduced an extra lock in order to allow writes while syncing the translog. This caused a potential deadlock with snapshotting code where we first acquire the instance lock, followed by a sync (which acquires the syncLock). However, the sync logic acquires the syncLock first, followed by the instance lock.
I considered solving this by not syncing the translog on snapshot - I think we can get away with just flushing it. That however would create subtleties around snapshotting and whether the operations in a snapshot are persisted. I opted instead for slightly uglier code with nested synchronized blocks, where the scope of the change is contained to the TranslogWriter class alone.
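A hedged illustration of the resulting lock ordering (names hypothetical): every path that needs both locks now takes the syncLock before the instance monitor, which removes the cycle:

```java
/** Illustrative sketch of the nested locking in a TranslogWriter-like class. */
final class WriterLockOrderSketch {
    private final Object syncLock = new Object();

    /** sync(): syncLock first, then the instance monitor. */
    void sync() {
        synchronized (syncLock) {
            synchronized (this) {
                // capture the state needed for the checkpoint
            }
            // fsync under syncLock only, so writers are not blocked
        }
    }

    /** The snapshot path now takes the locks in the SAME order;
     *  previously it held the instance monitor and then called sync(). */
    void snapshot() {
        synchronized (syncLock) {
            synchronized (this) {
                // capture a consistent view of the translog
            }
        }
    }
}
```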
Today when parsing settings during bootstrap, we add a system property
for every Elasticsearch setting. Additionally, settings can be set via
system properties. This commit simplifies this situation.
- settings are no longer propagated to system properties
- system properties cannot be used to set settings
- the "es." prefix on settings is no longer required (nor permitted)
- test logging has a dedicated system property (tests.logger.level)
Relates #18198
The preserve_original option to the ASCIIFoldingFilter doesn't
play well with the FingerprintFilter, as it ends up producing
fingerprints like:
"and consistent godel gödel is said sentence this yes"
The goal of the OpenRefine algorithm is to produce a small normalized
ASCII fingerprint. There's no need to expose preserve_original.
We often require a random joda DateTimeZone in our tests. Currently
there are a few options for generating such a random DateTimeZone
from the set of available ids. Currently most random picks are not
really reproducible across different jvms because they rely on the
iteration order of the ids set implementation. The helper in
DateProcessorFactoryTests thus performs a sort on the set of ids before
randomly picking from the result, so I moved this to ESTestCase to make
it publicly available and changed all other tests to use that method.
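A sketch of such a helper, assuming joda-time (the body mirrors the sort-then-pick approach described above):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import org.joda.time.DateTimeZone;

final class RandomTimeZoneSketch {
    /** Sorting first makes the pick depend only on the random seed,
     *  not on the iteration order of the underlying id set. */
    static DateTimeZone randomDateTimeZone(Random random) {
        List<String> ids = new ArrayList<>(DateTimeZone.getAvailableIDs());
        Collections.sort(ids);
        return DateTimeZone.forID(ids.get(random.nextInt(ids.size())));
    }
}
```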
From 2.0 on, adding child types to existing types was forbidden because the `_parent` field stores the join between parent and child at index time.
This protects against types that weren't a parent before becoming a parent while previously indexed documents lack the join field,
which would break the parent/child queries.
The restriction was a bit too strict in the sense that it also forbade adding child types that point to a type which is already a parent type (so child types already point to it).
This change makes sure that the restriction only applies if the type isn't already a parent type.
Closes #17956
* Register `indices.query.bool.max_clause_count` setting
This commit registers `indices.query.bool.max_clause_count` as a node
level setting and removes support for its synonym setting
`index.query.bool.max_clause_count`.
Closes #18336
FSync translog outside of the writers global lock
Today we acquire a global write lock that blocks all modifications to the
translog file while we fsync / checkpoint the file. Yet, we don't necessarily
need to block concurrent operations here. This can lead to a lot of blocked
threads if the machine has high concurrency (lots of CPUs) but uses slow disks
(spinning disks), which is absolutely unnecessary. We just need to protect from
fsyncing / checkpointing concurrently, but we can fill buffers and write to the
underlying file in a concurrent fashion.
This change introduces an additional lock that we hold while fsyncing but moves
the checkpointing code outside of the writer's global lock.
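A hedged sketch of the two-lock scheme (hypothetical names): writers keep appending under the instance monitor while the fsync runs under a dedicated sync lock:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;

/** Illustrative only: fill buffers under the instance monitor,
 *  fsync/checkpoint under a separate syncLock. */
final class TranslogSyncSketch {
    private final Object syncLock = new Object();
    private final FileChannel channel;
    private volatile long writtenOffset;  // bytes handed to the file
    private volatile long syncedOffset;   // bytes known to be fsynced

    TranslogSyncSketch(FileChannel channel) {
        this.channel = channel;
    }

    /** Concurrent writers only contend on the instance monitor. */
    synchronized void add(byte[] operation) throws IOException {
        // ... write the operation to the channel/buffer ...
        writtenOffset += operation.length;
    }

    /** One fsync at a time, but add() stays unblocked while it runs. */
    void sync() throws IOException {
        synchronized (syncLock) {
            long offsetToSync;
            synchronized (this) {
                offsetToSync = writtenOffset;  // capture a consistent offset
            }
            if (offsetToSync > syncedOffset) {
                channel.force(false);          // the slow part, outside the monitor
                syncedOffset = offsetToSync;
            }
        }
    }
}
```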
Currently rounding intervals obtained by nextRoundingValue() for hour, minute and
second units can include an extra hour when they happen at DST transitions that add
an extra hour (eg CEST -> CET). This changes the rounding logic for time units
smaller than or equal to an hour to fix this.
Closes #18326
This commit adds simple GC overhead logging. This logging captures
intervals where the JVM is spending a lot of time performing GC but it
is not necessarily the case that each GC is large. For a start, this
logging is simple and does not attempt to incorporate whether or not the
collections were efficient (that is, we are only capturing that a lot of
GC is happening, not that a lot of useless GC is happening).
Relates #18419
We are currently only parsing the array syntax for the rescore part
in SearchSourceBuilder ("rescore" : [ {...}, {...} ]). We also need
to support "rescore" : {...}.
Closes #18439
- Moves recovery logic into IndexShard
- Simplifies logic to cancel peer recovery of shard where recovery source node changed
- Ensures routing entry is set on initialization of IndexShard
With this commit we clear all caches after testing the parent circuit breaker.
This is necessary as caches hold on to circuit breakers internally. Additionally,
due to usage of CircuitBreaker#addWithoutBreaking() in caches, it's even possible
to go above the limit. As a consequence, all subsequent requests fall victim to
the limit.
Hence, right after the parent circuit breaker tripped, we clear all caches to
reduce these circuit breakers to 0 again. We also exclude the clear caches
transport request from limit check in order to ensure it will succeed. As this is
typically a very small and low-volume request, it is deemed ok to exclude it.
Closes #18325
This commit adds a variety of real disk metrics for the block devices
that back Elasticsearch data paths. A collection of statistics is read
from /proc/diskstats and used to report the raw metrics for
operations and read/write bytes.
Relates #15915
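For reference, a standalone sketch of reading such stats, assuming the documented /proc/diskstats column layout (this is not the actual Elasticsearch probe):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

final class DiskStatsSketch {
    public static void main(String[] args) throws IOException {
        // /proc/diskstats columns: major minor name, then reads completed,
        // reads merged, sectors read, ms reading, writes completed,
        // writes merged, sectors written, ms writing, ...
        for (String line : Files.readAllLines(Paths.get("/proc/diskstats"))) {
            String[] f = line.trim().split("\\s+");
            String device = f[2];
            long readsCompleted  = Long.parseLong(f[3]);
            long sectorsRead     = Long.parseLong(f[5]);
            long writesCompleted = Long.parseLong(f[7]);
            long sectorsWritten  = Long.parseLong(f[9]);
            // sectors in diskstats are always 512 bytes
            System.out.printf("%s: %d reads (%d bytes), %d writes (%d bytes)%n",
                device, readsCompleted, sectorsRead * 512,
                writesCompleted, sectorsWritten * 512);
        }
    }
}
```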
This commit refactors the JvmGcMonitorService so that it can be
tested. In particular, hooks are added to verify that the
JvmMonitorService correctly observes slow GC events, and that the
JvmGcMonitorService logs the correct messages.
Relates #18378
Instead of re-exposing index metadata and blocks in RoutingNodes (which is part of the cluster state before rerouting), expose it as part of the RoutingAllocation which is known to be only temporarily used during reroute.
Sorts an array of values in ascending or descending order. If all elements are numeric, they will be sorted numerically. If the values are strings, or a mixture of strings and numbers, the elements will be sorted lexicographically.
The change also renames fields and methods in the Profilers class.
Note that I had to make ProfileResult a public class (it was package private before) because now classes that call it are in a different package.
This change does the following:
- Queries that are currently unsupported such as prefix queries on numeric
fields or term queries on geo fields now throw an error rather than returning
a query that does not match anything.
- Fuzzy queries on numeric, date and ip fields are now unsupported: they used
to create range queries, we now expect users to use range queries directly.
Fuzzy, regexp and prefix queries are now only supported on text/keyword
fields (including `_all`).
- The `_uid` and `_id` fields no longer support prefix or range queries, as
this would prevent us from storing them more efficiently in the future, eg. by
using a binary encoding.
Note that it is still possible to ignore these errors by using the `lenient`
option of the `match` or `query_string` queries.
With this commit we add a precondition check to BulkRequest so
we fail early if users pass `null` for the request object.
For a more detailed discussion, see #12038.
This supersedes #12038.
Relates #12038.
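A hedged sketch of the fail-early check (class and method are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

final class BulkRequestSketch {
    private final List<Object> requests = new ArrayList<>();

    /** Fails immediately with a clear message instead of an NPE deep in the pipeline. */
    BulkRequestSketch add(Object request) {
        requests.add(Objects.requireNonNull(request, "request must be non-null"));
        return this;
    }
}
```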
Today, a race condition exists when retiring executors. Namely, if an
executor is retired and then the thread pool is terminated, the retiring
of the executor and the termination of the thread pool can race to
remove the retired executor from the queue of retired executors. More
precisely, when the executor is initially retired, it is placed on a
queue of retired executors, and then removed when it is successfully
shutdown. When the pool is terminated, it will also drain the queue of
retired executors. This leads to a time-of-check-time-of-use race where
the draining can see a retired executor on the queue but that retired
executor can be removed upon successful shutdown of that executor. This
leads to the draining attempting to remove an element from the queue
when there is none. This commit addresses this race condition by instead
safely polling the queue.
Relates #18333
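A minimal sketch of the safe-polling idea, assuming a concurrent queue of retired executors (illustrative, not the actual thread pool code):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;

final class RetiredExecutorDrainSketch {
    private final ConcurrentLinkedQueue<ExecutorService> retired =
        new ConcurrentLinkedQueue<>();

    /** Draining via poll() tolerates entries removed concurrently by the
     *  per-executor shutdown callbacks: poll() just returns null once the
     *  queue is empty, regardless of who removed what. */
    void drainRetired() {
        ExecutorService executor;
        while ((executor = retired.poll()) != null) {
            executor.shutdownNow();
        }
    }
}
```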
Previously multiple extensions could be provided, however, this can lead
to confusion with on-disk scripts (ie, "foo.js" and "foo.javascript")
having different content. Only a single extension is now supported.
The only language currently supporting multiple extensions was the
Javascript engine ("js" and "javascript"). It now only supports the
`.js` extension.
Relates to #10598
Currently `fuzziness` is not supported for the `cross_fields` type
of the `multi_match` query since it complicates the logic that
blends the term queries that cross_fields uses internally. At the
moment, using this combination is silently ignored, which can lead to
confusion. Instead, we should throw an exception in this case.
The same is true for phrase and phrase_prefix type.
Closes #7764
This commit adds a test that a fixed executors rejected count behaves as
expected. In particular, we test that if we consume the executor, then
stuff the executor queue, further tasks will be rejected and the
rejected stats are updated appropriately. This test also asserts that if
we resize the queue the rejected count is reset to zero.
Relates #18301
This removes all the mentions of the sandbox from the script engine
services and permissions model. This means that the following settings
are no longer supported:
```yaml
script.inline: sandbox
script.stored: sandbox
```
Instead, only a `true` or `false` value can be specified.
Since this would otherwise break the default-allow parameter for
languages like expressions, painless, and mustache, all script engines
have been updated to have individual settings, for instance:
```yaml
script.engine.groovy.inline: true
```
This would enable all inline scripts for groovy (they can still be
overridden on a per-operation basis).
Expressions, Painless, and Mustache all default to `true` for inline,
file, and stored scripts to preserve the old scripting behavior.
Resolves #17114
Currently terms on an ip address try to put their binary representation in the
json response. With this commit, they would return a formatted ip address:
```
"buckets": [
{
"key": "192.168.1.7",
"doc_count": 1
}
]
```
We add support to explicitly exclude specific transport actions
from the request size limit check.
We also exclude the following request types currently:
* MasterPingRequest
* PingRequest
As most of our log messages are not sentences and do not end with
periods, this commit removes a period from the end of the min master
node bootstrap check log message.
Rearranges the FingerprintAnalyzer so that AsciiFolding comes earlier in the chain (after lowercasing, before stop removal, for maximum deduping power)
Closes #18266
This commit ensures that if CORS is enabled, then Origin headers are
checked regardless of whether the request came from a browser or not.
In the past, we only proceeded with CORS checks if the User-Agent was a
browser.
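A hedged sketch of the new behavior, assuming Netty's HTTP types (the helper shown is hypothetical): the presence of an Origin header alone now triggers validation, with no User-Agent sniffing:

```java
import java.util.Set;
import io.netty.handler.codec.http.HttpRequest;

final class CorsCheckSketch {
    /** Returns true if the request passes the origin check. */
    static boolean isOriginAllowed(HttpRequest request, Set<String> allowedOrigins) {
        String origin = request.headers().get("Origin");
        if (origin == null) {
            return true;  // same-origin or non-CORS client: nothing to validate
        }
        // Previously this check was skipped unless the User-Agent looked like a browser.
        return allowedOrigins.contains(origin);
    }
}
```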
This commit ensures that an index is not imported if a tombstone for an index
(same name and UUID) exists in the cluster state. This resolves a
situation where if an index data folder was copied into a node's data
directory while the node is running and that index had a tombstone in
the cluster state, the index would still get imported.
Closes #18250
Closes #18249
It keeps using the caching terms enum for keyword/text fields and falls back
to IndexSearcher.count for fields that do not use the inverted index for
searching (such as numbers and ip addresses). Note that this probably means that
significant terms aggregations on these fields will be less efficient than they
used to be. It should be ok under a sampler aggregation though.
This moves tests back to the state they were in before numbers started using
points, and also adds a new test that significant terms aggs fail if a field is
not indexed.
In the long term, we might want to follow the approach that Robert initially
proposed that consists in collecting all documents from the background filter in
order to compute frequencies using doc values. This would also mean that
significant terms aggregations do not require fields to be indexed anymore.
This commit modifies two logging statements in the
IndexingMemoryController to log the key for the setting
indices.memory.index_buffer_size instead of the object.
Relates #18191
An additional sanity check introduced by #17821 makes some tests fail. This check verifies that
only one shard with the same shard id is allocated to a node. This commit fixes a bug in
ClusterStateCreationUtils which would construct a cluster state that allocated two shards with the same
id to the same node.
Today when join validation fails, we log a warning but do not log the
exception that led to the join validation failing. This commit modifies
this so that we do log this exception.
All implementations of SearchParseElement have been removed since they are no longer used now that parsing is done on the coordinating node. The SearchParseElement and FetchSubPhaseParseElement classes are not removed as currently they are needed for plugins that add a custom fetch sub phase. These will be removed in a follow up PR that will allow fetch sub phase plugins to register a parser in a different way.
The current code tries to handle the case that document versions are either
missing or stored in payloads rather than doc values. Since none of the 2.x
releases allowed this, we can remove this logic.
This removes dead/duplicate code and makes the `_index` field not configurable.
(Configuration used to just be ignored; now we throw an exception if any
is provided.)
With this commit we eagerly evaluate content length in HttpServer
and also pass the same value to ResourceHandlingHttpChannel. With
this change it is easier to reason about the content length that is
freed leaving no doubt that it must be identical to the reserved
amount.