When a client sends a request to fail a shard to the master, the current
behavior is that the master will submit the cluster state update task
and then immediately send a successful response back to the client;
additionally, if there are any failures while processing the cluster
state update task to fail the shard, then the client will never be
notified of these failures.
This commit modifies the master behavior when handling requests to fail
a shard. In particular, the master will now wait until successful
publication of the cluster state update before notifying the request
client that the shard is marked as failed; additionally, the client is
now notified of any failures during the execution of the cluster state
update task.
Relates #14252
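A minimal sketch of the new flow, using hypothetical Listener and ClusterStatePublisher types rather than the real ShardStateAction internals: the client is acknowledged only once publication has completed, and failures are propagated instead of being dropped.

```java
// Hypothetical types, only to illustrate the ack-after-publication pattern.
interface Listener {
    void onSuccess();
    void onFailure(Throwable t);
}

interface ClusterStatePublisher {
    // calls onPublished once the updated state has been committed, or onError on any failure
    void publish(Runnable onPublished, java.util.function.Consumer<Throwable> onError);
}

final class ShardFailedHandler {
    private final ClusterStatePublisher publisher;

    ShardFailedHandler(ClusterStatePublisher publisher) {
        this.publisher = publisher;
    }

    void handleShardFailed(Listener clientListener) {
        // Old behavior: respond to the client right after submitting the task.
        // New behavior: respond only once publication succeeded, and surface failures.
        publisher.publish(
            clientListener::onSuccess,   // the shard is really marked as failed at this point
            clientListener::onFailure);  // task or publication failures now reach the client
    }
}
```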
Today we only test this when writing sequentially. Yet, in practice we mainly
write concurrently. This commit adds a test verifying that concurrent writes with
a sudden fatal failure will not corrupt our translog.
Relates to #15420
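Roughly the shape such a test can take, sketched against a hypothetical WriterUnderTest interface rather than the real translog API: several threads append concurrently, the writer is failed while writes are still in flight, and the log must read back cleanly afterwards.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Stand-in for the component under test; not the real translog API.
interface WriterUnderTest {
    void append(byte[] op) throws Exception;    // may start throwing once the writer has failed
    void failAbruptly();                        // simulates a sudden fatal failure (disk full, crash)
    void verifyNotCorrupted() throws Exception; // re-open the log and read every entry back
}

final class ConcurrentFatalFailureSketch {
    static void run(WriterUnderTest writer, int threads, int opsPerThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int thread = t;
            pool.execute(() -> {
                for (int i = 0; i < opsPerThread; i++) {
                    try {
                        writer.append(("op-" + thread + "-" + i).getBytes());
                    } catch (Exception e) {
                        return; // the writer has already failed, stop writing
                    }
                }
            });
        }
        Thread.sleep(10);            // let the writes overlap with the failure
        writer.failAbruptly();       // fail while operations are still in flight
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        writer.verifyNotCorrupted(); // whatever made it to disk must still be readable
    }
}
```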
Before, we only evaluated segments that yielded matches in parent aggs, which caused us to miss evaluating child docs in segments for which we had no parent matches.
The fix is to stop remembering which segments we have matches in and simply evaluate all segments. This makes the code simpler, and we can still quickly see if a segment doesn't hold child docs, like we did before.
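A sketch of the new strategy with made-up Segment types (the real code works on Lucene leaf readers): iterate every segment and skip the ones without child docs via a cheap check, instead of tracking which segments produced parent matches.

```java
import java.util.List;

// Hypothetical segment abstraction; in the real code these are Lucene leaf readers.
interface Segment {
    boolean hasChildDocs();   // cheap check, e.g. does the segment contain docs of the child type
    void evaluateChildren();  // collect child docs belonging to matched parents
}

final class ChildEvaluationSketch {
    // Old approach: only visit segments previously recorded as having parent matches,
    // which skips child docs that live in segments without any parent match.
    // New approach: visit every segment, but bail out early when it has no child docs.
    static void evaluate(List<Segment> segments) {
        for (Segment segment : segments) {
            if (segment.hasChildDocs() == false) {
                continue; // still cheap to skip segments that hold no child docs
            }
            segment.evaluateChildren();
        }
    }
}
```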
This commit is a trivial reorganization of
o/e/c/a/s/ShardStateAction.java. The primary motive is to have all of the
shard failure handling grouped together, and all of the shard started
handling grouped together.
The `path` option made it possible to index/store a field `a.b.c` under just `c` when
set to `just_name`. This "feature" has been removed in 2.0 in favor of `copy_to`
so we can remove the back compat in 3.x.
There are two ways that a field can be defined twice:
- by reusing the name of a meta mapper in the root object (`_id`, `_routing`,
etc.)
- by defining a sub-field both explicitly in the mapping and through the code
in a field mapper (like ExternalMapper does)
This commit adds new checks in order to make sure this never happens.
Close #15057
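A simplified illustration of such a check, with hypothetical names and error messages: collect every full field name and fail on a clash with a meta field or on a repeated definition.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustration only: reject a mapping in which the same full field name shows up twice,
// whether it clashes with a meta field or is registered twice programmatically.
final class DuplicateFieldCheckSketch {
    private static final Set<String> META_FIELDS =
        new HashSet<>(Arrays.asList("_id", "_routing", "_index", "_type"));

    static void checkFieldNames(List<String> fullFieldNames) {
        Set<String> seen = new HashSet<>();
        for (String name : fullFieldNames) {
            if (META_FIELDS.contains(name)) {
                throw new IllegalArgumentException("Field [" + name + "] conflicts with a metadata field");
            }
            if (seen.add(name) == false) {
                throw new IllegalArgumentException("Field [" + name + "] is defined twice");
            }
        }
    }
}
```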
Today mappings are mutable because of two APIs:
- Mapper.merge, which expects changes to be performed in-place
- IncludeInAll, which allows changing in place whether values should be put in the
  `_all` field.
This commit changes both APIs to return a modified copy instead of modifying in
place so that mappings can be immutable. For now, only the type-level object is
immutable, but in the future we can imagine making them immutable at the
index-level so that mapping updates could be completely atomic at the index
level.
Close #9365
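A tiny sketch of the copy-on-merge style, using a made-up FieldMapping class rather than the real Mapper API: merge and the include-in-all setter both return a new instance and leave the receiver untouched.

```java
// Sketch of the merge-returns-a-copy style (illustrative, not the real Mapper API):
// instead of mutating the receiver, merge() builds and returns a new immutable instance.
final class FieldMapping {
    final String name;
    final boolean includeInAll;

    FieldMapping(String name, boolean includeInAll) {
        this.name = name;
        this.includeInAll = includeInAll;
    }

    // Old style: void merge(FieldMapping other) { this.includeInAll = other.includeInAll; }
    // New style: return a modified copy and leave 'this' untouched.
    FieldMapping merge(FieldMapping other) {
        if (name.equals(other.name) == false) {
            throw new IllegalArgumentException("different fields: " + name + " vs " + other.name);
        }
        return new FieldMapping(name, other.includeInAll);
    }

    FieldMapping includeInAll(boolean newIncludeInAll) {
        return newIncludeInAll == includeInAll ? this : new FieldMapping(name, newIncludeInAll);
    }
}
```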
This commit addresses two type inference issues that the IntelliJ source
editor struggles with when registering query builder prototypes in
o/e/i/q/IndicesQueriesRegistry.java and
o/e/i/q/f/ScoreFunctionParserMapper.java.
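A generic illustration of the kind of change involved, not the actual registry code: adding an explicit type witness at call sites whose inferred type arguments javac accepts but some IDE type checkers reject.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Generic illustration of the workaround, not the actual registry classes:
// spell out a type argument that javac infers fine but some IDE type checkers do not.
final class PrototypeRegistry {
    private final Map<String, Supplier<?>> prototypes = new HashMap<>();

    <T> void register(String name, Supplier<T> prototype) {
        prototypes.put(name, prototype);
    }

    void registerAll() {
        // register("example", ExampleBuilder::new);     // inferred; flagged by the IDE
        this.<String>register("example", () -> "proto"); // explicit type witness; accepted everywhere
    }
}
```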
This commit adds explicit logging at the DEBUG level for cluster state
update failures. Currently this responsibility is left to the cluster
state task listener, but we should explicitly log these with a generic
message to address cases where the listener might not.
Relates #14899, relates #15016, relates #15023
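The idea, sketched with a hypothetical listener type and java.util.logging (FINE standing in for DEBUG) instead of the ES logging infrastructure: wrap or precede the listener so the failure is always logged with a generic message.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical listener type; the point is that the framework logs the failure
// itself rather than relying on each listener to do so.
interface StateUpdateListener {
    void onFailure(String source, Throwable t);
}

final class LoggingStateUpdateListener implements StateUpdateListener {
    private static final Logger LOGGER = Logger.getLogger("cluster.service");
    private final StateUpdateListener delegate;

    LoggingStateUpdateListener(StateUpdateListener delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onFailure(String source, Throwable t) {
        // generic DEBUG-level (FINE here) message, emitted whether or not the delegate logs anything
        LOGGER.log(Level.FINE, "cluster state update task [" + source + "] failed", t);
        delegate.onFailure(source, t);
    }
}
```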
This commit changes the behavior of the logging in
TransportBroadcastByNodeAction#onNodeFailure to only trace log
exceptions that are considered shard-not-available exceptions. This
makes the logging consistent with how these exceptions are handled in
the response.
Relates #14927
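A sketch of the distinction using java.util.logging and a placeholder exception type (the real code checks ES's shard-not-available exception family): expected shard-not-available failures are only traced, anything else is logged at a higher level.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Placeholder marker for the shard-not-available family of exceptions.
class ShardNotAvailableException extends RuntimeException {}

final class NodeFailureLoggingSketch {
    private static final Logger LOGGER = Logger.getLogger("broadcast.by.node");

    static void onNodeFailure(String nodeId, Throwable t) {
        if (t instanceof ShardNotAvailableException) {
            // expected and already reflected in the response, so only trace (FINEST) it
            LOGGER.log(Level.FINEST, "shard not available on node [" + nodeId + "]", t);
        } else {
            LOGGER.log(Level.FINE, "unexpected failure on node [" + nodeId + "]", t);
        }
    }
}
```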
Migrated from ES-Hadoop. Contains several improvements regarding:
* Security
Takes advantage of the pluggable security in ES 2.2 and uses that in order
to grant the necessary permissions to the Hadoop libs. It relies on a
dedicated DomainCombiner to grant permissions only when needed, and only to the
libraries installed in the plugin folder.
Adds security checks for SpecialPermission/scripting and provides out-of-the-box
permissions for the latest Hadoop 1.x (1.2.1) and 2.x (2.7.1)
* Testing
Uses a customized Local FS to perform actual integration testing of the
Hadoop stack (and thus to make sure the proper permissions and ACC blocks
are in place), yet without requiring extra permissions for testing.
If needed, a MiniDFS cluster is provided (though it requires extra
permissions to bind ports)
Provides a RestIT test
* Build system
Picks up the build system used in ES (still Gradle)
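For reference, the general shape of the privileged-block pattern such a plugin relies on, using only the standard java.security API; the dedicated DomainCombiner and the SpecialPermission check mentioned above are elided and the class name is made up.

```java
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Rough shape of the privileged wrapper (standard java.security API); the real plugin
// additionally installs a dedicated DomainCombiner and checks ES's SpecialPermission,
// both of which are elided here.
final class HdfsPrivilegedSketch {
    static <T> T runWithPluginPermissions(PrivilegedExceptionAction<T> hadoopCall) throws Exception {
        try {
            // the Hadoop call runs with the permissions granted to the plugin's own jars,
            // not with whatever reduced permissions the caller (e.g. a script) has
            return AccessController.doPrivileged(hadoopCall);
        } catch (PrivilegedActionException e) {
            throw e.getException();
        }
    }
}
```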
Now that HighlightBuilder implements Writable, we can remove
the temporary solution for transporting the highlight section in
SearchSourceBuilder from the coordinating node to the shard as a
BytesReference and use HighlightBuilder instead.
The top-level highlighter has many options that can be overwritten per
field. Currently there is very similar code for this in two places.
This PR pulls out the parsing of the common parameters into
AbstractHighlighterBuilder for better reuse and to keep parsing of
common parameters more consistent.
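A generic sketch of the hoisting, with made-up class names (fragment_size and number_of_fragments stand in as examples of shared options): the base class parses the common parameters once and delegates the level-specific ones to a hook.

```java
import java.util.Map;

// Made-up names, only to show the shape of hoisting shared option parsing into a base class.
abstract class AbstractOptionsBuilder<B extends AbstractOptionsBuilder<B>> {
    Integer fragmentSize;
    Integer numberOfFragments;

    // options shared by the top-level section and the per-field sections are parsed once, here
    @SuppressWarnings("unchecked")
    final B parseCommon(Map<String, Object> options) {
        if (options.containsKey("fragment_size")) {
            fragmentSize = (Integer) options.get("fragment_size");
        }
        if (options.containsKey("number_of_fragments")) {
            numberOfFragments = (Integer) options.get("number_of_fragments");
        }
        parseSpecific(options);  // hook for options that exist only at one of the two levels
        return (B) this;
    }

    protected abstract void parseSpecific(Map<String, Object> options);
}
```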
Today we are super lenient (how could I have missed that, for f**k's sake) with failing
/ closing the translog writer when we hit an exception. It's actually worse: we allow
further writes to it and don't care what has already been written to disk and what hasn't.
We keep the buffer in memory and try to write it again on the next operation.
When we hit a disk-full exception, for instance due to a big merge, we are likely adding documents to the
translog but failing to write them to disk. Once the merge has failed and freed up its disk space (note this is
a small window when concurrently indexing and failing the shard due to out-of-space exceptions) we will
allow in-flight operations to add to the translog and then, once we fail the shard, fsync it. These operations
are written to disk and fsynced, which is fine, but the previous buffer flush might have written some bytes
to disk which are now corrupting the translog. That wouldn't be an issue if we prevented the fsync.
Closes #15333
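The intended behavior, sketched with illustrative names (the real writer lives in the translog code): the first exception marks the writer as failed, and every later add or sync, including the fsync described above, is rejected instead of flushing stale buffered bytes.

```java
import java.io.IOException;

// Sketch of the "fail fast and stay failed" writer behavior: once any write hits an
// exception, the writer records it and rejects every later operation, so no stale
// buffered bytes can be flushed or fsynced after the failure. Names are illustrative.
final class FailFastWriterSketch {
    private volatile IOException tragedy;

    synchronized void add(byte[] operation) throws IOException {
        ensureOpen();
        try {
            writeToChannel(operation);
        } catch (IOException e) {
            closeWithTragicEvent(e); // never keep the buffer around for a retry
            throw e;
        }
    }

    synchronized void sync() throws IOException {
        ensureOpen(); // an earlier failure must also prevent the fsync
        // fsync the underlying channel here
    }

    private void ensureOpen() throws IOException {
        if (tragedy != null) {
            throw new IOException("writer already failed", tragedy);
        }
    }

    private void closeWithTragicEvent(IOException e) {
        tragedy = e;
        // close the underlying channel and release the buffer
    }

    private void writeToChannel(byte[] operation) throws IOException {
        // buffered write to the underlying file channel
    }
}
```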