Currently both `PUT` and `POST` can be used to create indices. This commit
removes support for `POST index_name` so that we can use it to index documents
with auto-generated ids once types are removed.
Relates #15613
In addition to being an allocation decider, DiskThresholdDecider also
monitors the used disk in order to trigger a reroute when the thresholds
are crossed. This change splits out the settings for disk thresholds
into DiskThresholdSettings, and moves the monitoring to a new
DiskThresholdMonitor. DiskThresholdDecider is then in line with other
allocation deciders, needing only Settings and ClusterSettings for
construction, which will allow deguicing allocation deciders.
`LockObtainFailedException` should be reserved for on-disk locks that
Lucene attempts (like `write.lock`). This switches our in-memory
semaphore locks for shards to use a different exception. Additionally,
ShardLockObtainFailedException no longer subclasses IOException, since
no IO is being done in this case.
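For illustration, a minimal sketch of what the distinction means for callers; the `acquireShardLock` helper and its throws clause are assumptions made for this example:

```java
import java.io.IOException;

import org.elasticsearch.env.ShardLockObtainFailedException;

class ShardOpener {
    // Hypothetical helper that can fail either on the in-memory semaphore lock
    // or on real disk I/O; its signature is an assumption for this sketch.
    void acquireShardLock(String shardId) throws ShardLockObtainFailedException, IOException {
    }

    void openShard(String shardId) {
        try {
            acquireShardLock(shardId);
        } catch (ShardLockObtainFailedException e) {
            // the in-memory shard lock is held by someone else; no I/O happened,
            // so this no longer falls into the IOException branch below
        } catch (IOException e) {
            // a genuine on-disk problem, e.g. Lucene's write.lock could not be obtained
        }
    }
}
```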
Resolves#19978
Currently plugins can not inspect or upgrade custom
meta data on startup. This commit allows plugins
to check and/or upgrade global custom meta data on startup.
Plugins can stop a node if any custom meta data is not supported.
The big change here is cleaning up the `TaskListResponse` so it doesn't
have a breaky `toString` implementation. That was causing the reindex
tests to break.
Also removed `NetworkModule#registerTaskStatus` which is part of the
Plugin API. Use `Plugin#getNamedWriteables` instead.
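A rough sketch of the replacement registration path; `MyTaskStatus` is a hypothetical custom task status, and the generics are approximated rather than taken from the source:

```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.tasks.Task;

public class MyPlugin extends Plugin {
    @Override
    public List<NamedWriteableRegistry.Entry> getNamedWriteables() {
        // register the custom task status under its writeable name
        return Collections.singletonList(
                new NamedWriteableRegistry.Entry(Task.Status.class, MyTaskStatus.NAME, MyTaskStatus::new));
    }
}
```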
Primary shard allocation observes limits in forcing allocation
Previously, during primary shard allocation of shards
with prior allocation IDs, if all nodes returned a
NO decision for allocation (e.g. the settings blocked
allocation on that node), we would choose one of those
nodes and force the primary shard to be allocated to it.
However, this meant that primary shard allocation
would not adhere to the decision of the MaxRetryAllocationDecider,
which would lead to attempting to allocate a shard
which has already failed N times (presumably
due to some configuration issue).
This commit solves this issue by introducing the
notion of force allocating a primary shard to a node
and each decider implementation must implement whether
this is allowed or not. In the case of MaxRetryAllocationDecider,
it just forwards the request to canAllocate.
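A sketch of what that looks like for MaxRetryAllocationDecider; the exact signature of the new hook is approximated from the description above:

```java
// In MaxRetryAllocationDecider: forced primary allocation is only allowed if a
// regular allocation would be allowed, so a shard that has exhausted its retries
// is not force-assigned to a node it already failed on.
@Override
public Decision canForceAllocatePrimary(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {
    return canAllocate(shardRouting, node, allocation);
}
```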
Closes#19446
Parsing a search request is currently split up among a number of
classes, using multiple public static methods, which take multiple
registries of elements that may appear in the search request like query
parsers and aggregations. This change begins consolidating all this code
by collapsing the registries normally used for parsing search requests
into a single SearchRequestParsers class. It is also made available to
plugin services to enable templating of search requests. Eventually all
of the actual parsing logic should move to the class, and the registries
should be hidden, but for now they are at least co-located to reduce the
number of objects that must be passed around.
Squashes all the subpackages of `org.elasticsearch.rest.action` down to
the following:
* `o.e.rest.action.admin` - Administrative actions
* `o.e.rest.action.cat` - Actions that make tables for `grep`ing
* `o.e.rest.action.document` - Actions that act on documents
* `o.e.rest.action.ingest` - Actions that act on ingest pipelines
* `o.e.rest.action.search` - Actions that search
I'm tempted to merge `search` into `document` but the `document`
package feels fairly complete as is and `Suggest` isn't actually always
about documents either....
I'm also tempted to merge `ingest` into `admin.cluster` because the
latter contains the actions for dealing with stored scripts.
I've moved the `o.e.rest.action.support` into `o.e.rest.action`.
I've also added `package-info.java`s to all packages in `o.e.rest`. I
figure if the package is too small to deserve a `package-info.java` file
then it is too small to deserve to be a package....
Also fixes checkstyle in all moved classes.
* Enable BoostingQuery with FVH highlighter
* apply boost with negativeBoost
* flatten boosting query with its own boost and update boost query to a single layer
* Assign scroll keepAlive when deserializing
The scroll time value was never assigned when deserializing from the transport layer, meaning that it would always be null when received from another node, although the originating search request might have it set to some value.
* add tests for SearchRequest serialization and fail fast with illegal arguments
To ease testing, also introduced equals, hashcode and toString methods in SearchRequest and Scroll.
The serialization test brought up a few wrong assumptions about non null instance members, for which some null checks were needed to avoid NPEs when serializing.
* make Scroll implement Writeable rather than Streamable
* [TEST] add serialization test for ShardSearchTransportRequest
This also covers ShardSearchLocalRequest implicitly as most of the serialization code is in it.
that have analyzer aliases in their analysis settings will still work, but
any attempts to create an alias for analyzers in newly created indices
will result in an IllegalArgumentException.
As a result, the setting `index.analysis.analyzer.{analyzerName}.alias` is
no longer supported.
Closes#18244
The term persisted task was used to indicate that a task should store its results upon its completion. We would like to use this term to indicate that a task can survive restart of nodes instead. This commit removes usages of the term "persist" when it means store results.
This commit fixes the number of max local storage nodes setting used in
the discovery disruption tests. In some cases (randomly but rarely), the
acked indexing test can run with five nodes instead of three, breaching
the max local storage nodes configuration.
As the most complicated `FetchSubPhase`, highlighting gets its own package
(`o.e.search.fetch.subphase.highlight`). No other `FetchSubPhase`s get their
own package. Instead they all reside together in `o.e.search.fetch.subphase`.
Add package descriptions to `o.e.search.fetch` and subpackages.
This commit adds a function to shard-level query result to determine whether
there are any hits that need fetching. Currently, a shard-level query result
can have hits when there are search hits and/or completion suggestion hits.
The newly added function encapsulates the checks to determine if a shard-level
query result has any fetchable hits, which is used in optimizing for sorting
documents and releasing search request contexts.
If a primary fails, an active replica is promoted to primary. Once we do the promotion, however, we are sure that the active replica is not relocating anymore. The reason is that when the primary fails, we first remove/cancel all initializing replicas (also if they are relocation targets). This is the only safe thing to do anyhow, because promoting relocating replica to primary would also mean that the replica recovery of the replica relocation target is suddenly promoted to primary relocation, which the recovery code treats in a different way.
This commit defaults the max local storage nodes to one. The motivation
for this change is that a default value greater than one is dangerous
as users sometimes end up unknowingly starting a second node and then
think that they have encountered data loss.
Relates #19964
ContextIndexSearcher#explain ignores the dfs data to create the normalized weight.
This change fixes this discrepancy by using the dfs data to create the normalized weight when needed.
This commit separates the description of the links in the network that are to be disrupted from the failure that is to be applied to the links (disconnect/unresponsive/delay). Previously we had subclasses for the various kind of network disruption schemes combining on one hand failure mode (disconnect/unresponsive/delay) as well as the network links to cut (two partitions / bridge partitioning) into a single class.
Reducing the ping timeouts on a test that does not simulate network failures can cause node disconnects within the test on a slow CI machine.
The test testSearchWithRelocationAndSlowClusterStateProcessing does not expect such disconnects, leading to shard relocation in the test to abort prematurely.
Today in the uncaught exception handler, we attempt to halt the virtual
machine on fatal errors. Yet, halting the virtual machine requires
privileges which might not be granted to the caller when the exception
is thrown for example from a scripting engine. This means that if an
OutOfMemoryError or another fatal error is hit inside a script, the
virtual machine will not exit because the halt call will be denied for
security privileges. In this commit, we mark this halt call as trusted
so that the virtual machine can be halted if a fatal error is
encountered in a script.
Relates #19923
I also reduced the visibility of a couple classes and renamed/consolidated some
test classes for consistency, eg. removing the `Simple` prefix or using the
`<Type>FieldMapperTests` convention for testing field mappers.
testUnknownObjectException used to generate malformed json objects in some cases, due to the existence of arrays, as it was not closing the injected object correctly. That is why the test was catching JsonParseException among the exceptions that are expected to be thrown. That is fixed by tracking where the new object is placed and placing its end object marker at the right level rather than always at the end.
Also introduced a mechanism to explicitly declare objects that won't cause any exception when they get additional objects injected, so that there is no need to override the method anymore, which caused copy-pasting of the whole test method. This also makes sure that changes are reflected in tests, as those inner objects are not skipped but we actually check that what is declared is true (no exceptions get thrown when an additional object is added within them).
This change adds support for treating dots in field names found in
mappings as path separators, as was previously done for dynamic
mappings and document parsing.
closes#19443
Currently, when attempting to delete a snapshot, we check
if a snapshot is in progress before proceeding with the
delete. However, we do not check if a restore is taking
place before deleting. This can lead to concurrency issues
where a restore is in progress but the snapshotted files
for the restore are being deleted underneath.
This commit first checks if a restore is in progress and
if so, it prevents the deletion of a snapshot with an
exception.
Note that this is not a complete solution because it is
still possible that a restore of the same snapshot is
started after the deletion commenced but before the
deletion finished. But there is a much smaller window
for this to occur and this commit is a quick way to
check for the common case.
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This commit enforces a limit on the number of scripts that
can be compiled within a minute. A new dynamic setting is added -
`script.max_compilations_per_minute`, which defaults to 15.
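Because the setting is dynamic, the limit can also be raised at runtime. A minimal sketch using the Java client, assuming `client` is a connected `Client` and 30 is an arbitrary value:

```java
// Raise the script compilation limit cluster-wide without a restart.
client.admin().cluster().prepareUpdateSettings()
        .setTransientSettings(Settings.builder()
                .put("script.max_compilations_per_minute", 30))
        .get();
```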
If more dynamic scripts are sent, a user will get the following
exception:
```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "i",
        "node" : "a5V1eXcZRYiIk8lecjZ4Jw",
        "reason" : {
          "type" : "general_script_exception",
          "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
          "caused_by" : {
            "type" : "circuit_breaking_exception",
            "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
            "bytes_wanted" : 0,
            "bytes_limit" : 0
          }
        }
      }
    ],
    "caused_by" : {
      "type" : "general_script_exception",
      "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    }
  },
  "status" : 500
}
```
This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.
Relates to #19396
Slims the public interface of RoutingNodes down to 4 methods to update routing entries:
- initializeShard() -> initializes an unassigned shard
- startShard() -> starts an initializing shard / completes relocation of a shard
- relocateShard() -> starts relocation of a started shard
- failShard() -> fails/cancels an assigned shard
In the spirit of PR #19743, where deassociateDeadNodes was moved to its own public method to be only called when nodes have actually left the cluster and not on every reroute step, this commit also removes electPrimariesAndUnassignedDanglingReplicas from AllocationService and folds it into the shard failure logic. This means that an active replica is promoted to primary in the same method where the primary was failed. Previously we would scan in each reroute iteration for active replicas to be promoted to primary.
If a `keyword` field is both indexed and doc-valued, then we will convert the
input string to utf8 bytes twice: once for indexing/storing, and once for doc
values. This commit changes `keyword` fields to compute the utf8 representation
up-front and then feed both the inverted index and doc values with it.
Rather than adding version-based bw compat logic, I broke the `keyword` field
(they are now indexed/stored as a binary field rather than string), which is
fine since we are still on alpha releases for 5.0.
Previously, the engine would catch an out of memory error and would try
to handle the error (it would try to fail the engine, and then it would
swallow the out of memory error). Catching the out of memory errors was
removed in 3343ceeae4 so this code path is
effectively dead. This commit removes this dead code from the
engine.
Relates #19881
The payload option was introduced with the new completion
suggester implementation in v5, as a stopgap solution
to return additional metadata with suggestions.
Now we can return associated documents with suggestions
(#19536) through the fetch phase using the stored field (`_source`).
The additional fetch phase ensures that we only fetch
the _source for the global top-N suggestions instead of
fetching _source of top results for each shard.
The recent changes to the Histogram Aggregator introduced a bug where
an exception would not be thrown if the maxBound of the extended bounds
is less than the minBound. This change fixes that bug.
Closes#19833
PR #19715 made AllocationService less lenient, requiring ShardRouting instances that are passed to its applyStartedShards and
applyFailedShards methods to exist in the routing table. As primary shard failures also fail initializing replica shards,
concurrent replica shard failures that are treated in the same cluster state update might not reference existing replica entries
in the routing table anymore. To solve this, PR #19715 ordered the failures by first handling replica before
primary failures. There are other failures that influence more than one routing entry, however. When we have a failed shard entry
for both a relocation source and target, then, depending on the order, either one or the other might point to an out-dated shard
entry. As finding a good order is more difficult than applying the failures, this commit re-adds parts of the ShardRouting
re-resolve logic so that the applyFailedShards method can properly treat shard failure batches.
GeoDistance is implemented using a crazy enum that causes issues with the scripting modules. This commit moves all distance calculations to arcDistance and planeDistance static methods in GeoUtils. It also removes unnecessary distance helper methods from ScriptDocValues.GeoPoints.
This commit enables completion suggester to return documents
associated with suggestions. Now the document source is returned
with every suggestion, which respects source filtering options.
In case of suggest queries spanning more than one shard, the
suggest is executed in two phases, where the last phase fetches
the relevant documents from shards, implying executing suggest
requests against a single shard is more performant due to the
document fetch overhead when the suggest spans multiple shards.
The method requires pairs of field names and property arguments and will fail if
the varargs input has an odd number of elements. We should check this and fail with an
appropriate IllegalArgumentException instead.
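A minimal sketch of the kind of check described, with a made-up helper name:

```java
static void requireFieldPropertyPairs(Object... fieldsAndProperties) {
    if (fieldsAndProperties.length % 2 != 0) {
        throw new IllegalArgumentException("expected an even number of varargs arguments "
                + "(field name/property pairs) but got [" + fieldsAndProperties.length + "]");
    }
}
```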
```
Elasticsearch doesn't have any automatic mechanism to share these
components between indexes. If any component is heavy enough to
warrant such sharing then it is the Plugin's responsibility to do
it in their {@link AnalysisProvider} implementation. We recommend
against doing this unless absolutely necessary because it can be
difficult to get the caching right given things like behavior
changes across versions.
```
Closes#19814
This commit cleans up indices in a snapshot repository when all
snapshots containing the index have been deleted. Previously, empty
indices folders would linger after all snapshots containing
them were deleted.
Plugins provide NamedWriteables that are added to the
NamedWriteableRegistry. Those are already added on nodes; this commit adds the
same mechanism to the setup for TransportClient.
This commit updates Jackson to the 2.8.1 version, which is more strict when it comes to building objects. It also adds the snakeyaml dependency that was previously shaded in jackson libs.
It also closes#18076
Instead of being lenient in QueryParseContext#parseInnerQueryBuilder we check that the token where the parser stopped reading was END_OBJECT, and throw an error otherwise. This is a best effort to verify that the parsers read a whole object rather than stepping out in the middle of it due to malformed queries.
Fuzzy Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Span term Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also modified the parsing code to consume the whole query object.
Common Terms Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Match Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Match phrase prefix Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Geo distance Query, like many other queries, used to parse even when the query referred to multiple fields and the last one would win. We now throw an exception instead.
Match phrase Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Wildcard Query, like many other queries, used to parse even when the query referred to multiple fields and the first one would win. We now throw an exception instead.
Also added test for short prefix query variant and modified the parsing code to consume the whole query object.
Regexp Query, like many other queries, used to parse even when the query referred to multiple fields and the last one would win. We now throw an exception instead.
Also added test for short prefix query variant.
Prefix Query, like many other queries, used to parse even when the query referred to multiple fields and the last one would win. We now throw an exception instead.
Also added tests for short prefix query variant.
Range Query, like many other queries, used to parse even when the query referred to multiple fields and the last one would win. We now throw an exception instead.
Closes#19547
When we introduced [persistent node ids](https://github.com/elastic/elasticsearch/pull/19140) we were concerned that people may copy data folders from one node to another, resulting in two nodes competing for the same id in the cluster. To solve this we elected to not allow an incoming join if a different node with the same id already exists in the cluster, or if some other node already has the same transport address as the incoming join. The rationale there was that it is better to prefer existing nodes and that we can rely on node fault detection to remove any node from the cluster that isn't correct any more, making room for the node that wants to join (and will keep trying).
Sadly there were two problems with this:
1) One minor and easy to fix - we didn't allow for the case where the existing node can have the same network address as the incoming one, but have a different ephemeral id (after node restart). This confused the logic in `AllocationService` in these rare cases. The cluster is good enough to detect this and recover later on, but it's not clean.
2) The assumption that Node Fault Detection will clean up is *wrong* when the node just won an election (it wasn't master before) and needs to process the incoming joins in order to commit the cluster state and assume its mastership. In those cases, the Node Fault Detection isn't active.
This PR fixes these two and prefers incoming nodes to existing nodes when finishing an election.
On top of that, on request by @ywelsch, the `AllocationService` synchronization between the nodes of the cluster and its routing table is now explicit rather than something we do all the time. The same goes for promotion of replicas to primaries.
AbstractQueryTestCase parses the main version of the query in strict mode, meaning that it will fail if any deprecated syntax is used. It should do the same for alternate versions (e.g. short versions). This is the way it is because the two alternate versions for ids query are both deprecated. Moved testing for those to a specific test method that isolates the deprecations and actually tests that the two are deprecated.
Factor rounding and Interval rounding (the non-date based rounding)
was no longer used so it has been removed. Offset rounding has been
retained for now since both date based rounding classes rely on it.
This change does three things:
1. Makes PreBuiltTransportClientTests run since it was silently
failing on a missing dependency
2. Makes PreBuiltTransportClientTests pass
3. Removes the http.type and transport.type from being set in the
transport clients additional settings since these are set to `netty4` by
default anyway.
Fixes an issue where a node that receives a cluster state
update with a brand new cluster UUID but without an
initial persistence block could cause indices to be wiped out,
preventing them from being reimported as dangling indices.
With this commit, only the in-memory data structures are removed, so that
the indices are subsequently reimported as dangling indices.
A primary shard currently instructs the master to fail a replica shard that it fails to replicate writes to before acknowledging the writes to the client. To ensure that the primary instructing the master to fail the replica is still the current primary in the cluster state on the master, it submits not only the identity of the replica shard to fail to the master but also its own shard identity. This can be problematic however when the primary is relocating. After primary relocation handoff but before the primary relocation target is activated, the primary relocation target is replicating writes through the authority of the primary relocation source. This means that the primary relocation target should probably send the identity of the primary relocation source as authority. However, this is not good enough either, as primary shard activation and shard failure instructions can arrive out-of-order. This means that the relocation target would have to send both relocation source and target identity as authority. Fortunately, there is another concept in the cluster state that represents this joint authority, namely primary terms. The primary term is only increased on initial assignment or when a replica is promoted. It stays the same however when a primary relocates.
This commit changes ShardStateAction to rely on primary terms for shard authority. It also changes the wire format to only transmit ShardId and allocation id of the shard to fail (instead of the full ShardRouting), so that the same action can be used in a subsequent PR to remove allocation ids from the active allocation set for which there exist no ShardRouting in the cluster anymore. Last but not least, this commit also makes AllocationService less lenient, requiring ShardRouting instances that are passed to its applyStartedShards and applyFailedShards methods to exist in the routing table. ShardStateAction, which is calling these methods, now has the responsibility to resolve the ShardRouting objects that are to be started / failed, and remove duplicates.
Today, when listing thread pools via the cat thread pool API, thread
pools are listed in a column-delimited format. This is unfriendly to
command-line tools, and inconsistent with other cat APIs. Instead,
thread pools should be listed in a row-delimited format.
Additionally, the cat thread pool API is limited to a fixed list of
thread pools that excludes certain built-in thread pools as well as all
custom thread pools. These thread pools should be available via the cat
thread pool API.
This commit improves the cat thread pool API by listing all thread pools
(built-in or custom), and by listing them in a row-delimited
format. Finally, for each node, the output thread pools are sorted by
thread pool name.
Relates #19721
Our parsing code accepted up until now queries in the following form (note that the query starts with `[`):
```
{
  "bool" : [
    {
      "must" : []
    }
  ]
}
```
This would lead to a null pointer exception as most parsers assume that the field name ("must" in this example) is the first thing that can be found in a query if its json is valid, hence always non null while parsing. Truth is that the additional array layer doesn't make the json invalid, hence the following code fragment would cause NPE within ParseField, because null gets passed to `parseContext.isDeprecatedSetting`:
```
if (token == XContentParser.Token.FIELD_NAME) {
    currentFieldName = parser.currentName();
} else if (parseContext.isDeprecatedSetting(currentFieldName)) {
    // skip
} else if (token == XContentParser.Token.START_OBJECT) {
```
We could add null checks in each of our parsers in lots of places, but we rely on `currentFieldName` being non null in all of our parsers, and we should consider it a bug when these unexpected situations are not caught explicitly. It would be best to find a way to prevent such queries altogether without changing all of our parsers.
The reason why such a query goes through is that we've been allowing a query to start with either `[` or `{`. The only reason I found is that we accept `match_all : []`. This seems like an undocumented corner case that we could drop support for. Then we can be stricter and accept only `{` as the start token of a query. That way the only next token that the parser can encounter if the json is valid (otherwise the json parser would barf earlier) is actually a field_name, hence the assumption that all our parsers make holds.
The downside of this is simply dropping support for `match_all : []`
Relates to #12887
We just override the `toString()` method so it calls toXContent
with `group_by` = "whatever" so we don't try to group by nodes
which does not make sense in a toString() method.
We keep the old behavior for `toXContent()` method which
means that there is no impact in the REST layer but
only in logs and tests (where we call `toString()`).
Closes#19772.
Currently both aggregations really share the same implementation. This commit
splits the implementations so that regular histograms can support decimal
intervals/offsets and compute correct buckets for negative decimal values.
However the response API is still the same. So for instance both regular
histograms and date histograms will produce an
`org.elasticsearch.search.aggregations.bucket.histogram.Histogram`
aggregation.
The optimization to compute an identifier of the rounded value and the
rounded value itself has been removed since it was only used by regular
histograms, which now do the rounding themselves instead of relying on the
Rounding abstraction.
Closes#8082
Closes#4847
In several places in our code we need to get a consistent list of files + metadata of the current index. We currently have a couple of ways to do this in the `Store` class, which also does the right things and tries to verify the integrity of the smaller files. Sadly, those methods can run into trouble if anyone writes into the folder while they are busy. Most notably, the index shard's engine may decide to commit halfway through and remove a `segment_N` file before the store got to checksum it (but did already list it). This race condition typically doesn't happen as almost all of the places where we list files also happen to be places where the relevant shard doesn't yet have an engine. There is however an exception (of course :)) which is the API to list shard stores, used by the master when it is looking for shard copies to assign to.
I already took one shot at fixing this in #19416 , but it turns out not to be enough - see for example https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=sles/822.
The first inclination to fix this was to add more locking to the different Store methods and acquire the `IndexWriter` lock, thus preventing any engine from accessing it if the shard is offline, and use the current index commit snapshotting logic already existing in `IndexShard` for when the engine is started. That turned out to be a bad idea as we create more subtleties where, for example, a store listing can prevent a shard from starting up (the writer lock doesn't wait if it can't get access, but fails immediately, which is good). Another example is running on a shared directory where some other engine may actually hold the lock.
Instead I decided to take another approach:
1) Remove all the various methods on store and keep one, which accepts an index commit (which can be null) and also clearly communicates that the *caller* is responsible for concurrent access. This also tightens up the API which is a plus.
2) Add a `snapshotStore` method to IndexShard that takes care of all the concurrency aspects with the engine, which is now possible because it's all in the same place. It's still a bit ugly but at least it's all in one place and we can evaluate how to improve on this later on. I also renamed the `snapshotIndex` method to `acquireIndexCommit` to avoid confusion and I think it communicates better what it does.
In 2.0, the ability to specify metadata fields like _routing and _ttl
inside a document was removed. However, the ability to break through
this restriction has lingered, and the check that enforced it is
completely broken.
This change fixes the check, and adds a parsing test.
Currently any code that wants to add NamedWriteables to the
NamedWriteableRegistry can do so via guice injection of the registry,
and registering at construction time. However, this makes the registry
complex: it has both get and register methods synchronized, and there is
likely contention on the read side from multiple threads. The
registration has mostly already been contained to guice modules at node
construction time.
This change makes the registry immutable, taking all of the
NamedWriteable readers at construction time. It also allows plugins to
add arbitrary named writeables that they may use in their own transport
actions.
ActiveShardCount.ALL by checking for active shards,
not just started shards, as a shard could be active
but in the relocating state (i.e. not in the started
state).
conform with the requirements of the writeBlob method by
throwing a FileAlreadyExistsException if attempting to write
to a blob that already exists. This change means implementations
of BlobContainer should never overwrite blobs - to overwrite a
blob, it must first be deleted and then can be written again.
Closes#15579
With this commit we add documentation and additional checks to
enforce the cancellation policy of CancellableThreads (which is
disallow `Thread#interrupt()` on any of the threads managed by
it).
If the uuidBytes and ref are converted to utf8, it's possible they can
trip an assertion related to valid UTF-8/UTF-16 ranges, so display them
as hex, not as strings.
method to determine if a write consistency check should be performed
before proceeding with the action. This commit removes this method from
the transport replication actions in favor of setting the ActiveShardCount
on the request, with setting the value to ActiveShardCount.NONE if the
transport action's checkWriteConsistency() method returned false.
After #13834 many tests that used Groovy scripts (for good or bad reason) in their tests have been moved to the lang-groovy module and the issue #13837 has been created to track these messy tests in order to clean them up.
The work started with #19280, #19302 and #19336 and this PR moves the remaining messy tests back to core, removes the dependency on Groovy, changes the scripts in order to use the mocked script engine, and changes the tests to integration tests.
It also moves the IndexLookupIT test back (even though it has a good chance of being removed soon) and fixes its tests.
It also changes AbstractQueryTestCase to use custom script plugins in tests.
closes#13837
* Rename operation to result and reworking responses
* Rename DocWriteResponse.Operation enum to DocWriteResponse.Result
These are just easier to interpret names.
Closes#19664
During our master elections, nodes "vote" for a master by issuing a join request to it. Since this is done in an async fashion, joins may arrive before the master itself has realized it had won the election. Therefore we start accumulating node joins on every node at election start (we don't know the result yet). When the election finishes, nodes that did not become the master (i.e., joined another node which won the election) need to potentially process and fail any incoming join request they may have received during the election. This is currently achieved by always issuing a cluster state update task that is doomed to fail, even if no pending joins are actually there. That aspect results in confusing (debug) log messages, making it seem like something is wrong. For example (note the `NotMasterException`):
```
[2016-07-30 22:25:53,040][DEBUG][cluster.service ] [node_t1] processing [zen-disco-process-pending-joins [{node_t0}{4SqBTyYNQ82J9c75Cs7jtg}{kutaNSYbTZCSybvqczgWCA}{127.0.0.1}{127.0.0.1:9400} elected]]: execute
[2016-07-30 22:25:53,041][DEBUG][transport ] [node_t1] connected to node [{node_t0}{4SqBTyYNQ82J9c75Cs7jtg}{kutaNSYbTZCSybvqczgWCA}{127.0.0.1}{127.0.0.1:9400}]
[2016-07-30 22:25:53,045][DEBUG][cluster.service ] [node_t1] cluster state update task [zen-disco-process-pending-joins [{node_t0}{4SqBTyYNQ82J9c75Cs7jtg}{kutaNSYbTZCSybvqczgWCA}{127.0.0.1}{127.0.0.1:9400} elected]] failed
NotMasterException[Node [{node_t1}{eAQts270TiGFpoCDE-0PQQ}{or5bsv2ET220su78DLJk5g}{127.0.0.1}{127.0.0.1:9401}] not master for join request]
[2016-07-30 22:25:53,048][DEBUG][cluster.service ] [node_t1] processing [zen-disco-process-pending-joins [{node_t0}{4SqBTyYNQ82J9c75Cs7jtg}{kutaNSYbTZCSybvqczgWCA}{127.0.0.1}{127.0.0.1:9400} elected]]: took [7ms] no change in cluster_state
```
This commit cleans up the logic a bit to only issue a failing cluster state update task when there are actual joins to fail. The result is cleaner logs as well:
```
[2016-07-30 22:23:12,880][DEBUG][cluster.service ] [node_t1] processing [zen-disco-election-stop [{node_t0}{jMR5HCpOQnOM4pGeFkUjng}{B5WIZQAdQk2cWbjGZ21mvQ}{127.0.0.1}{127.0.0.1:9400} elected]]: execute
[2016-07-30 22:23:12,881][DEBUG][cluster.service ] [node_t1] processing [zen-disco-election-stop [{node_t0}{jMR5HCpOQnOM4pGeFkUjng}{B5WIZQAdQk2cWbjGZ21mvQ}{127.0.0.1}{127.0.0.1:9400} elected]]: took [0s] no change in cluster_state
[2016-07-30 22:23:12,881][DEBUG][transport ] [node_t1] connected to node [{node_t0}{jMR5HCpOQnOM4pGeFkUjng}{B5WIZQAdQk2cWbjGZ21mvQ}{127.0.0.1}{127.0.0.1:9400}]
```
Before this commit, when an index pattern is used to filter the cluster state, only the indices metadata are populated and the routing table is just empty. This commit aligns the behavior of the filtering of the cluster state's routing table with the filtering of the cluster state's metadata so that coherent data are returned for both routing table & metadata when an index pattern is requested.
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.cluster` into
`org.elasticsearch.rest.action.admin.cluster`.
Also fixes line length in these packages.
This is cleanup work from #19566, where @nik9000 suggested trying to nuke the isCreated and isFound methods. I've combined nuking the two methods with removing UpdateHelper.Operation in favor of DocWriteResponse.Operation here.
Closes#19631.
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.indices` into
`org.elasticsearch.rest.action.admin.indices`.
It also adds a `package-info.java` file explaining what the files in
the package *do*.
Also fixes line length in these packages. It makes a single non-checkstyle
change: implementing `ToXContent` on `GetIndexTemplatesResponse`. I did
this because it was the right thing to do and it fixed a line length
violation.
The plain highlighter fails when it tries to select the fragments based on a query containing either a `has_child` or `has_parent` query.
The plain highlighter should just ignore parent/child queries as it makes no sense to highlight a parent match with a has_child as the child documents are not available at highlight time. Instead, if child documents should be highlighted, inner hits should be used.
Parent/child queries already have no effect when the `fvh` or `postings` highlighter is used. The test added in this commit verifies that.
Closes#14999
Currently when the `fields` parameter used in a `multi_match` query contains a
wildcard expression that doesn't resolve to any field name in the target index,
MultiMatchQueryBuilder produces a `null` query. This change makes it produce a
MatchNoDocs query instead, since returning no documents for this case is already
the current behaviour. Also adds missing field names (with and without wildcards)
to the unit and integration tests.
This change adds a second ParseField for the `aggs` field in the search
request so both `aggregations` and `aggs` are allowed, non-deprecated
fields in the search request.
Closes#19504
The current heuristic to compute a default shard size is pretty aggressive,
it returns `max(10, number_of_shards * size)` as a value for the shard size.
I think making it less aggressive has the benefit that it would reduce the
likelihood of running into OOME when there are many shards (yearly
aggregations with time-based indices can make numbers of shards in the
thousands) and make the use of breadth-first more likely/efficient.
This commit replaces the heuristic with `size * 1.5 + 10`, which is enough
to have good accuracy on zipfian distributions.
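To make the difference concrete, a worked comparison for a terms aggregation with `size: 10` on an index with 5 shards (numbers are illustrative only):

```java
int size = 10;
int shards = 5;
int oldShardSize = Math.max(10, shards * size); // 50 terms requested from every shard
int newShardSize = (int) (size * 1.5) + 10;     // 25 terms per shard, independent of the shard count
```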
upgrades if it determines the read data is in the legacy
format. It writes the upgraded version if it is not a
read-only repository and caches the repository data if
it is a read-only repository.
Add an assertion to the most popular way of turning the response object
into the actual http response. As it stands all places we return
`201 CREATED` we return the `Location` header. This will help to keep it
that way, though it won't catch all uses.
Followup to #19509
When you simulate a pipeline without specifying an id against a node where the request is redirected to a master node,
the request and the response throw an NPE:
```
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([3B9536AC6AA23C06:DD62280CF765DA1F]:0)
at org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:300)
at org.elasticsearch.action.ingest.SimulatePipelineRequest.writeTo(SimulatePipelineRequest.java:92)
at org.elasticsearch.transport.local.LocalTransport.sendRequest(LocalTransport.java:222)
at org.elasticsearch.test.transport.AssertingLocalTransport.sendRequest(AssertingLocalTransport.java:95)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:470)
at org.elasticsearch.action.TransportActionNodeProxy.execute(TransportActionNodeProxy.java:51)
at org.elasticsearch.client.transport.support.TransportProxyClient.lambda$execute$441(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:233)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:309)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:67)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:710)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:62)
at org.elasticsearch.ingest.bano.BanoProcessorIntegrationTest.testSimulateProcessorConfigTarget(BanoProcessorIntegrationTest.java:139)
```
This patch fixes this and adds some random tests.
* fix explain in function_score if no function filter matches
When each function in function_score has a filter but none of them matches
we always assume 1 for the combined functions and then combine that with the
sub query score.
But the explanation did not reflect that because in case no function matched
we did not even use the actual score that was computed in the explanation.
The reason for this change is that currently if a user specifies e.g.`2M`
meaning 2 months as a time value instead of throwing an exception
explaining that time units in months are not supported (due to months
having variable time spans) we instead will parse this to 2 minutes.
This could be surprising to a user and could mean putting a lot of load on
the cluster performing a task that was never intended and whose results
will be useless anyway.
It is generally accepted that `m` indicates minutes and `M` indicates
months with time values so this is consistent with the expectations a
user might have around specifying time units.
A concrete example of where this causes issues is in the decay score
function which uses TimeValue to parse the scale and offset parameters
of the decay into millisecond values to use in the calculation.
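A hedged sketch of the behaviour change, using `TimeValue.parseTimeValue` which I believe has this shape (the exact exception type thrown for the rejected value is not shown):

```java
import org.elasticsearch.common.unit.TimeValue;

// "2m" still parses as two minutes...
TimeValue scale = TimeValue.parseTimeValue("2m", null, "decay.scale");

// ...while "2M" now fails with an exception explaining that month-based
// units are not supported, instead of silently meaning two minutes.
TimeValue offset = TimeValue.parseTimeValue("2M", null, "decay.offset");
```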
Relates to #19619
Currently to use the ResourceWatcherService to watch files, you
implement a FileChangesListener. However, this is a class, not an
interface, even though it has no base state or anything like that, just
defining a few methods. This change converts FileChangesListener to an
interface.
The tests for authentication extend ESIntegTestCase and use a mock
authentication plugin. This way the clients don't have to worry about
running it. Sadly, that means we don't really have good coverage on the
REST portion of the authentication.
This also adds ElasticsearchStatusException, an exception on which you
can set an explicit status. The nice thing about it is that you can
set the RestStatus that it returns to whatever arbitrary status you like
based on the status that comes back from the remote system.
reindex-from-remote then uses it to wrap all remote failures, preserving
the status from the remote Elasticsearch or whatever proxy is between us
and the remote Elasticsearch.
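A hedged sketch of how such a remote failure might be wrapped while preserving the remote status; the helper and the constructor shape are assumptions:

```java
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.rest.RestStatus;

class RemoteFailures {
    // Keep whatever status the remote Elasticsearch (or proxy) reported
    // instead of collapsing everything to a generic 500.
    static ElasticsearchStatusException wrapRemoteFailure(Throwable cause, int remoteStatusCode) {
        RestStatus status = RestStatus.fromCode(remoteStatusCode);
        return new ElasticsearchStatusException("remote request failed", status, cause);
    }
}
```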
It's possible that the shard has been closed but the resources
associated with it have not yet been released. This waits until the
index lock can be obtained before running the tool.
This test fails because of an unknown exception in the FsService.stats() method, which causes no stats to be returned. With this change the exception that is causing this issue is going to be logged.
Related to #19591 and #17964
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
This adds an extra REST handler for "_ingest/pipeline" so that users do not need to supply "_ingest/pipeline/*" to get all of them.
- Also adds a teardown section to related REST-tests for ingest.
Performing the bulk request shown in #19267 now results in the following:
```
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"create","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":201}
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"noop","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":200}
```
This adds the `bin/elasticsearch-translog` bin file that will be used
for CLI tasks pertaining to the translog. Currently it implements only
a single sub-command, `truncate`, that creates a truncated
translog for a given folder.
Here's what running the tool looks like:
```
λ bin/elasticsearch-translog truncate -d data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Checking existing translog files
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! WARNING: Elasticsearch MUST be stopped before running this tool !
! !
! WARNING: Documents inside of translog files will be lost !
! !
! WARNING: The following files will be DELETED! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-10.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-18.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-21.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-12.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-25.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-29.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-2.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-5.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-41.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-6.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-37.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-24.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-11.ckp
Continue and DELETE files? [y/N] y
Reading translog UUID information from Lucene commit from shard at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index]
Translog Generation: 3
Translog UUID : AxqC4rocTC6e0fwsljAh-Q
Removing existing translog files
Creating new empty checkpoint at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog.ckp]
Creating new empty translog at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-3.tlog]
Done.
```
It also includes a `-b` batch operation that can be used to skip the
confirmation dialog.
Resolves#19123
This change adds a new special path to the buckets_path syntax
`_bucket_count`. This new option will return the number of buckets for a
multi-bucket aggregation, which can then be used in pipeline
aggregations.
Closes#19553
When the request body is missing, all documents in the target index are counted.
As mentioned in #19422, the same should happen when the request body is an empty
json object. This is also the behaviour for the `_search` endpoint and the two
APIs should behave in the same way.
Before, the aggregation tree was traversed to figure out what the parent level is; this commit
changes that by using `NestedScope` to figure out the nested depth level. The big upsides
are that this cleans up `NestedAggregator` (it used a hack to lazily figure out the nested parent filter)
and that this is also what the `nested` query uses, and therefore the `nested` query can be included inside a `nested`
aggregation and work correctly.
Closes#11749
Closes#12410
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
Closes#19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
This adds new circuit breaking with the "request" breaker, which adds
circuit breaks based on the number of buckets created during
aggregations. It consists of incrementing the breaker during AggregatorBase creation.
This also bumps the REQUEST breaker to 60% of the JVM heap now.
The output when circuit breaking an aggregation looks like:
```json
{
  "shard" : 0,
  "index" : "i",
  "node" : "a5AvjUn_TKeTNYl0FyBW2g",
  "reason" : {
    "type" : "exception",
    "reason" : "java.util.concurrent.ExecutionException: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]];",
    "caused_by" : {
      "type" : "execution_exception",
      "reason" : "QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [myagg]>] would be larger than limit of [104857600/100mb]];",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]",
        "bytes_wanted" : 104860781,
        "bytes_limit" : 104857600
      }
    }
  }
}
```
Relates to #14046
Messy tests with mustache were either moved to core, moved to a rest test or remained untouched if they actually tested mustache.
Also removed tests that were redundant.
After #13834 many tests that used Groovy scripts (for good or bad reason) in their tests have been moved to the lang-groovy module and the issue #13837 has been created to track these messy tests in order to clean them up.
This commit moves more tests back to core, removes the dependency on Groovy, changes the scripts in order to use the mocked script engine, and changes the tests to integration tests.
This change renames the package org.elasticsearch.search.fetch.fielddata to org.elasticsearch.search.fetch.docvalues and renames the
FieldData* classes to DocValue*. This is a follow up of the renaming that happened in #18943
Azure and Google cloud blob containers, as the APIs for both return
a 404 in the case of a missing object, which we already handle through
a NoSuchFileException.
With #19140 we started persisting the node ID across node restarts. Now that we have a "stable" anchor, we can use it to generate a stable default node name and make it easier to track nodes over a restarts. Sadly, this means we will not have those random fun Marvel characters but we feel this is the right tradeoff.
On the implementation side, this requires a bit of juggling because we now need to read the node id from disk before we can log, as the node name is part of each log message. The PR moves the initialization of NodeEnvironment as high up in the starting sequence as possible, with only one logging message before it to indicate we are initializing. Things look now like this:
```
[2016-07-15 19:38:39,742][INFO ][node ] [_unset_] initializing ...
[2016-07-15 19:38:39,826][INFO ][node ] [aAmiW40] node name set to [aAmiW40] by default. set the [node.name] settings to change it
[2016-07-15 19:38:39,829][INFO ][env ] [aAmiW40] using [1] data paths, mounts [[ /(/dev/disk1)]], net usable_space [5.5gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2016-07-15 19:38:39,830][INFO ][env ] [aAmiW40] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 19:38:39,837][INFO ][node ] [aAmiW40] version[5.0.0-alpha5-SNAPSHOT], pid[46048], build[473d3c0/2016-07-15T17:38:06.771Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2016-07-15 19:38:40,980][INFO ][plugins ] [aAmiW40] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy, transport-netty], plugins []
[2016-07-15 19:38:43,218][INFO ][node ] [aAmiW40] initialized
```
Needless to say, setting `node.name` explicitly still works as before.
The commit also contains some clean ups to the relationship between Environment, Settings and Plugins. The previous code suggested the path related settings could be changed after the initial Environment was created. This did not have any effect as the security manager already locked things down.
Makes deleting snapshots more robust by first deleting the
snapshot from the index generational file, then handling
individual deletion file errors with log messages instead of
failing the entire operation.
Added BlobContainer tests for HDFS storage
and caught a bug at the same time in which
deleteBlob was not raising an IOException
when the blobName did not exist.
Previously when trying to listen on virtual interfaces during
bootstrap the application would stop working - the interface
couldn't be found by the NetworkUtils class.
The NetworkUtils utilize the underlying JDK NetworkInterface
class which, when asked to lookup by name only takes physical
interfaces into account, failing at virtual (or subinterfaces)
ones (returning null).
Note that when iterating over all interfaces, both physical and
virtual ones are taken into account.
This changeset asks for all known interfaces, iterates over them
and matches on the given name as part of the loop, allowing it
to catch both physical and virtual interfaces.
As a result, elasticsearch can now also serve on virtual
interfaces.
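Roughly the lookup strategy described above, expressed as plain JDK code (this is not the actual NetworkUtils implementation):

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

class InterfaceLookup {
    // Iterate every known interface (and its sub-interfaces) and match on name,
    // so virtual interfaces are found as well.
    static NetworkInterface findByName(String name) throws SocketException {
        for (NetworkInterface iface : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (iface.getName().equals(name)) {
                return iface;
            }
            for (NetworkInterface sub : Collections.list(iface.getSubInterfaces())) {
                if (sub.getName().equals(name)) {
                    return sub;
                }
            }
        }
        return null;
    }
}
```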
A test case has been added which at least makes sure that all
iterable interfaces can be found by their respective name. (It's
not easily possible in a unit test to "fake" virtual interfaces).
Relates #19537
Fixes CORS handling so that it uses the defaults for http.cors.allow-methods
and http.cors.allow-headers if none are specified in the config.
Closes#19520
#19096 introduced a generic TCPTransport base class so we can have multiple TCP based transport implementations. These implementations can vary in how they respond internally to situations where we concurrently send, receive and handle disconnects, and they can throw different exceptions. However, disconnects are important events for the rest of the code base and should be distinguished from other errors (for example, a disconnect signals TransportMasterAction that it needs to retry and wait for a (new) master to come back). Therefore, we should make sure that all the implementations do the proper translation from their internal exceptions into ConnectTransportException, which is used externally.
Similarly we should make sure that the transport implementations properly recognize errors that were caused by a disconnect and deal with them correctly. This was, for example, the source of a build failure at https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-intake/1080 , where a concurrency issue caused a SocketException to bubble out of MockTcpTransport.
This PR adds a test which concurrently simulates connects, disconnects, sending and receiving, and makes sure the above holds. It also fixes anything (not much!) that was found by it.
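As a sketch of the translation contract (the internal send call and the exception types caught here are illustrative of what an implementation might see; ConnectTransportException is the externally visible type):
```java
// Sketch only: whatever low-level error an implementation sees when the other side
// goes away, it must surface it to the rest of the code base as a ConnectTransportException.
try {
    channel.send(request);   // illustrative internal send
} catch (SocketException | ClosedChannelException e) {
    throw new ConnectTransportException(node, "connection to node closed", e);
}
```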
creation timeout so they process the index creation cluster state update
before the test finishes and attempts to cleanup. Otherwise, the index
creation cluster state update could be processed after the test finishes
and cleans up, thereby leaking an index in the cluster state that could
cause issues for other tests that wouldn't expect the index to exist.
Closes#19530
Due to the lack of tests, the analyzer.alias feature was pretty much not working
at all on current master. Issues like #19163 showed some serious problems for users
relying on this feature when upgrading to an alpha version.
This change fixes the processing order and allows aliases to be set for
existing analyzers like `default`. This change also ensures that if `default`
is aliased the correct analyzer is used for `default_search` etc.
Closes#19163
Add parser for anonymous char_filters/tokenizer/token_filters
Use Settings in AnalyzeRequest for anonymous definitions
Add breaking changes document
Closes#8878
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
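A rough sketch of the wrapping described above (the factory call is illustrative, not the actual readProcessor body):
```java
try {
    return factory.create(config);          // may throw e.g. a script compilation exception
} catch (Exception e) {
    throw new ElasticsearchException(e);    // rethrown so callers see a consistent exception type
}
```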
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
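A hedged sketch of what the new registration looks like for a plugin that adds a query (MyQueryBuilder and the exact QuerySpec shape are illustrative; consult the SearchPlugin interface for the precise signatures):
```java
public class MyQueryPlugin extends Plugin implements SearchPlugin {
    @Override
    public List<QuerySpec<?>> getQueries() {
        // register the query by name, together with its stream reader and XContent parser
        return Collections.singletonList(
                new QuerySpec<>("my_query", MyQueryBuilder::new, MyQueryBuilder::fromXContent));
    }
}
```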
When index creation is not acknowledged (due to a very low request timeout) it is possible that the index is still created.
If a subsequent index-exists request completes before the cluster state of the index creation has been fully applied, it
might miss the newly created index.
* Remove outdated aggregation registration method
* Remove AggregationStreams
* Adds StreamInput#readNamedWriteableList and
StreamOutput#writeNamedWriteableList convenience methods (see the sketch
after this list). We strive to make reading from and writing to the streams
terse so they are easier to scan visually.
* Remove PipelineAggregatorStreams
* Remove stream info from InternalAggregation.Type
* Remove InternalAggregation#type
* Remove Streamable from PipelineAggregator
* Remove Streamable from MultiBucketsAggregation.Bucket
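A sketch of how the new convenience methods tighten up serialization code (the field being serialized and the category-class style argument to readNamedWriteableList are assumptions about the exact shape):
```java
// Writing: a single call replaces the size prefix plus manual loop.
out.writeNamedWriteableList(pipelineAggregators);

// Reading: ask for the NamedWriteable category you expect back.
List<PipelineAggregator> pipelineAggregators = in.readNamedWriteableList(PipelineAggregator.class);
```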
During query refactoring the query string query parameter
'auto_generate_phrase_queries' was accidentally renamed
to 'auto_generated_phrase_queries'.
With this commit we restore the old name.
Closes#19512
Currently if a string field is not_analyzed, but a
position_increment_gap is set, it will look up the default analyzer and
set it, along with the position_increment_gap, before the code which
handles setting the keyword analyzer for not_analyzed fields has a
chance to run. This change adds a parsing check and test for that case.
ThreadPool#schedule can throw a rejected execution exception. Yet, the
rejected execution exception that it throws comes from the EsAbortPolicy
which throws an EsRejectedExecutionException. This exception does not
inherit from RejectedExecutionException, so we must catch the
former rather than the latter.
A self-rescheduling runnable can hit a rejected execution exception but
this exception goes uncaught. Instead, this exception should be caught
and passed to the onRejected handler. Not catching this
rejected execution exception can lead to test failures. Namely, a race
condition can arise between the shutting down of the thread pool and
cancelling of the rescheduling of the task. If another reschedule fires
right as the thread pool is being terminated, the rescheduled task will
be rejected leading to an uncaught exception which will cause a test
failure. This commit addresses these issues.
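A minimal sketch of the fix (the schedule argument order and the handler name follow the description above and are not quoted from the code):
```java
// Inside the self-rescheduling runnable: rescheduling can race with thread pool
// shutdown, so catch the ES-specific rejection and hand it to the onRejected
// handler instead of letting it escape as an uncaught exception.
try {
    threadPool.schedule(interval, executorName, this);
} catch (EsRejectedExecutionException e) {
    onRejected(e);
}
```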
Relates #19505
The ThreadPool#scheduleWithFixedDelay method does not make it clear that all scheduled runnable instances
will be run on the scheduler thread. This becomes problematic if the actions being performed include
blocking operations since there is a single thread and tasks may not get executed due to a blocking task.
This change includes a few different aspects around trying to prevent this situation. The first is that
the scheduleWithFixedDelay method now requires the name of the executor that should be used to execute
the runnable. All existing calls were updated to use Names.SAME to preserve the existing behavior.
The second aspect is the removal of using ScheduledThreadPoolExecutor#scheduleWithFixedDelay in favor of
a custom runnable, ReschedulingRunnable. This runnable encapsulates the logic to deal with rescheduling a
runnable with a fixed delay and mimics the behavior of executing using a ScheduledThreadPoolExecutor, and
provides a ScheduledFuture implementation that also mimics the one returned by a
ScheduledThreadPoolExecutor.
Finally, an assertion was added to BaseFuture to detect blocking calls that are being made on the scheduler
thread.
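Roughly, a caller now looks like this (the argument order is assumed and refreshStats is a placeholder; Names.GENERIC is shown to make the point that blocking work should name a real executor rather than Names.SAME):
```java
// Run a potentially blocking task every 30 seconds on the generic executor
// instead of on the single scheduler thread.
threadPool.scheduleWithFixedDelay(
        () -> refreshStats(),                 // illustrative task
        TimeValue.timeValueSeconds(30),
        ThreadPool.Names.GENERIC);
```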
When looking at the logstash template, I noticed that it has definitions for
dynamic templates with `match_mapping_type` equal to `byte`, for instance.
However elasticsearch never tries to find templates that match the byte type
(only long or double as far as numbers are concerned). This commit changes
template parsing in order to ignore bad values of `match_mapping_type` (given
how popular the logstash template is, this would break many upgrades
otherwise). Then I hope to fail the parsing on bad values in 6.0.
We used to mutate it as part of building the aggregation. That
caused assertVersionSerializable to fail because it assumes that
requests aren't mutated after they are sent.
Closes#19481
Primary relocation violates two invariants that ensure proper interaction between document replication and peer recoveries, ultimately leading to documents not being properly replicated.
Invariant 1: Document writes must be replicated based on the routing table of a cluster state that includes all shards which have ongoing or finished recoveries. This is ensured by the fact that we do not start a recovery that is not reflected by the cluster state available on the primary node, and we always sample a fresh cluster state before starting to replicate write operations.
Invariant 2: Every operation that is not part of the snapshot taken for phase 2 must be successfully indexed on the target replica (pending shard level errors which will cause the target shard to be failed). To ensure this, we start replicating to the target shard as soon as the recovery starts and open its engine before we take the snapshot. All operations that are indexed after the snapshot was taken are guaranteed to arrive at the shard when it is ready to index them. Note that this also means that the replication doesn't fail a shard if it's not yet ready to receive operations - it's a normal part of a recovering shard.
With primary relocations, the two invariants can be possibly violated. Let's consider a primary relocating while there is another replica shard recovering from the primary shard.
Invariant 1 can be violated if the target of the primary relocation is so lagging on cluster state processing that it doesn't even know about the new initializing replica. This is very rare in practice as replica recoveries take time to copy all the index files but it is a theoretical gap that surfaces in testing scenarios.
Invariant 2 can be violated even if the target primary knows about the initializing replica. This can happen if the target primary replicates an operation to the initializing shard and that operation arrives at the initializing shard before it opens its engine but arrives at the primary source after it has taken the snapshot of the translog. Those operations will currently be missed on the new initializing replica.
The fix to reestablish invariant 1 is to ensure that the primary relocation target has a cluster state with all replica recoveries that were successfully started on primary relocation source. The fix to reestablish invariant 2 is to check after opening engine on the replica if the primary has been relocated in the meanwhile and fail the recovery.
Closes#19248
RecoveryTarget increments a reference on the store once it's
created. If we fail to return the instance from the reset method
we leak a reference causing shard locks to not be released. This
change creates the reference in the return statement to ensure no
references are leaked.