Migrated from ES-Hadoop. Contains several improvements regarding:
* Security
Takes advantage of the pluggable security in ES 2.2 to grant the necessary
permissions to the Hadoop libs. It relies on a dedicated DomainCombiner so
that permissions are granted only when needed and only to the libraries
installed in the plugin folder (a sketch of the underlying doPrivileged /
SpecialPermission idiom follows this list).
Adds security checks for SpecialPermission/scripting and provides out-of-the-box
permissions for the latest Hadoop 1.x (1.2.1) and 2.x (2.7.1)
* Testing
Uses a customized Local FS to perform actual integration testing of the
Hadoop stack (and thus to make sure the proper permissions and ACC blocks
are in place), without requiring extra permissions for testing.
If needed, a MiniDFS cluster is provided (though it requires extra
permissions to bind ports)
Provides a RestIT test
* Build system
Adopts the build system used in ES (Gradle)
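A minimal sketch of the doPrivileged/SpecialPermission pattern the security part relies on (the wrapper and its name are hypothetical; only the general idiom is shown):
```
import java.security.AccessController;
import java.security.PrivilegedAction;

import org.elasticsearch.SpecialPermission;

class HdfsPrivilegedCall {
    // Hypothetical helper: runs a call into the Hadoop client libraries inside
    // doPrivileged so that only the permissions granted to the plugin's own
    // code source (the libraries in the plugin folder) apply.
    static <T> T run(PrivilegedAction<T> hadoopCall) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // prevent arbitrary callers from reaching the privileged block
            sm.checkPermission(new SpecialPermission());
        }
        return AccessController.doPrivileged(hadoopCall);
    }
}
```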
Now that HighlightBuilder implements Writeable, we can remove the temporary
solution for transporting the highlight section in SearchSourceBuilder from
the coordinating node to the shard as a BytesReference and use
HighlightBuilder instead.
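A rough stand-in for the idea (simplified to plain DataOutput/DataInput; the real code uses the Writeable interface and the actual highlight fields, not this hypothetical HighlightSection):
```
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical, simplified stand-in: an object that can stream itself,
// so it no longer needs to be shipped as an opaque BytesReference.
class HighlightSection {
    String[] fields = new String[0];

    void writeTo(DataOutput out) throws IOException {
        out.writeInt(fields.length);
        for (String field : fields) {
            out.writeUTF(field);
        }
    }

    static HighlightSection readFrom(DataInput in) throws IOException {
        HighlightSection section = new HighlightSection();
        section.fields = new String[in.readInt()];
        for (int i = 0; i < section.fields.length; i++) {
            section.fields[i] = in.readUTF();
        }
        return section;
    }
}
```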
The top-level highlighter has many options that can be overwritten per
field. Currently there is very similar code for this in two places.
This PR pulls out the parsing of the common parameters into
AbstractHighlighterBuilder for better reuse and to keep parsing of
common parameters more consistent.
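A sketch of the reuse pattern (the option names are common highlighter options used for illustration, and the field-level builder name is a stand-in, not the actual class):
```
// Common options and their setters live in the abstract base, so the top-level
// and per-field builders only add what is specific to them.
abstract class AbstractHighlighterBuilder<HB extends AbstractHighlighterBuilder<HB>> {
    protected String[] preTags;
    protected String[] postTags;
    protected Integer fragmentSize;

    @SuppressWarnings("unchecked")
    public HB preTags(String... preTags) {
        this.preTags = preTags;
        return (HB) this;
    }

    @SuppressWarnings("unchecked")
    public HB fragmentSize(Integer fragmentSize) {
        this.fragmentSize = fragmentSize;
        return (HB) this;
    }
}

class HighlightBuilder extends AbstractHighlighterBuilder<HighlightBuilder> { }
class FieldHighlightBuilder extends AbstractHighlighterBuilder<FieldHighlightBuilder> { }
```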
Today we are far too lenient with failing / closing the translog writer when we hit
an exception. It's actually worse: we allow further writes to it and don't care what
has already been written to disk and what hasn't. We keep the buffer in memory and
try to write it again on the next operation.
When we hit a disk-full exception, for instance due to a big merge, we are likely
adding documents to the translog but failing to write them to disk. Once the merge
has failed and freed up its disk space (note this is a small window when concurrently
indexing and failing the shard due to out-of-space exceptions) we will allow in-flight
operations to add to the translog and then, once we fail the shard, fsync it. These
operations are written to disk and fsynced, which is fine, but the previous buffer
flush might have written some bytes to disk which are now corrupting the translog.
That wouldn't be an issue if we prevented the fsync.
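As a rough illustration of the stricter behavior (a hypothetical writer, not the actual TranslogWriter): once a write fails, the failure is remembered and all further writes and fsyncs are rejected instead of retrying the in-memory buffer.
```
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;

final class FailFastWriter implements Closeable {
    private final OutputStream out;
    private volatile IOException tragedy; // first failure, if any

    FailFastWriter(OutputStream out) {
        this.out = out;
    }

    void add(byte[] operation) throws IOException {
        ensureHealthy();
        try {
            out.write(operation);
        } catch (IOException e) {
            tragedy = e; // never keep the buffer around and retry on the next op
            throw e;
        }
    }

    void sync() throws IOException {
        ensureHealthy(); // refuse to fsync anything written after a failed flush
        out.flush();
    }

    private void ensureHealthy() throws IOException {
        if (tragedy != null) {
            throw new IOException("writer has already failed", tragedy);
        }
    }

    @Override
    public void close() throws IOException {
        out.close();
    }
}
```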
Closes #15333
This change removes hardcoded ports from cluster formation. It passes
port 0 for http and transport, and then uses a special property to have
the node log the ports used for http and transport (just for tests).
This does not yet work for multi node tests. This brings us one step
closer to working with --parallel.
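The underlying mechanism is plain ephemeral-port binding: binding to port 0 lets the OS pick a free port, which is only known after the bind and therefore has to be logged for the test harness to discover. A generic Java illustration, not the actual cluster formation code:
```
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class EphemeralPortExample {
    public static void main(String[] args) throws IOException {
        // Port 0: the OS assigns an arbitrary free port.
        try (ServerSocket socket = new ServerSocket(0, 50, InetAddress.getLoopbackAddress())) {
            // The actual port is only known after binding, so it must be
            // reported (here: printed) for tests to find it.
            System.out.println("bound to port " + socket.getLocalPort());
        }
    }
}
```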
This commit improves the handling of ThreadLocal Random instance
allocation in o.e.c.Randomness.
- the seed per instance is no longer fixed
- a non-dangerous race to create the ThreadLocal instance has been
removed
- encapsulated all state into a static nested class for safe and lazy
instantiation
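A minimal sketch of the holder-class idiom described above (not the actual o.e.c.Randomness source; the nanoTime-based seed merely stands in for a non-fixed per-instance seed):
```
import java.util.Random;

public final class Randomness {
    private Randomness() {}

    // The nested holder class is initialized lazily on first access, and class
    // initialization is thread-safe, so there is no race creating the ThreadLocal.
    private static final class RandomnessHolder {
        private static final ThreadLocal<Random> LOCAL =
                ThreadLocal.withInitial(() -> new Random(System.nanoTime()));
    }

    public static Random get() {
        return RandomnessHolder.LOCAL.get();
    }
}
```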
This commit adds the following:
* SpatialStrategy documentation to the geo-shape reference docs.
* Updates relation documentation in the geo-shape-query reference docs.
* Updates GeoShapeFieldMapper to set points_only to true if the TERM strategy is used (to be consistent with the documentation)
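A tiny sketch of the consistency rule from the last bullet (hypothetical mapper class; strategy values as in the docs):
```
enum SpatialStrategy { RECURSIVE, TERM }

class GeoShapeMapperSketch {
    boolean pointsOnly;

    void setStrategy(SpatialStrategy strategy) {
        // The TERM strategy only supports points, so force points_only to true
        // to stay consistent with the documentation.
        if (strategy == SpatialStrategy.TERM) {
            pointsOnly = true;
        }
    }
}
```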
This option allows forcing the xcontent type used to store the `_source`
document. The default is to use the same format as the input.
This commit makes this option ignored for 2.x indices and rejected for 3.0
indices.
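A hedged sketch of the version-dependent handling (hypothetical helper, not the actual mapper code):
```
// Illustration only: accept-and-ignore on 2.x indices, reject on 3.0 indices.
class SourceFormatOption {
    static void check(String formatOption, int indexCreatedMajorVersion) {
        if (formatOption == null) {
            return; // default: store _source in the same format as the input
        }
        if (indexCreatedMajorVersion >= 3) {
            throw new IllegalArgumentException("the _source format option is no longer supported");
        }
        // 2.x index: the option is ignored
    }
}
```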
This commit removes and now forbids all uses of
Collections#shuffle(List) and Random#<init>() across the codebase. The
rationale for removing and forbidding these methods is to increase test
reproducibility. As these methods use non-reproducible seeds, production
code and tests that rely on these methods contribute to
non-reproducibility of tests.
Instead of Collections#shuffle(List) the method
Collections#shuffle(List, Random) can be used. All that is required then
is a reproducible source of randomness. Consequently, the utility class
Randomness has been added to assist in creating reproducible sources of
randomness.
Instead of Random#<init>(), Random#<init>(long) with a reproducible seed
or the aforementioned Randomness class can be used.
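For example (the fixed seed 42 is only an illustration; in practice the seed would come from the test framework or the Randomness utility):
```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReproducibleShuffle {
    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        // Collections.shuffle(items) would use a non-reproducible seed; passing
        // an explicitly seeded Random keeps the permutation reproducible.
        Collections.shuffle(items, new Random(42L));
        System.out.println(items);
    }
}
```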
Closes #15287
In commit fafeb3a, we've refactored REST response handling logic
and returned HTTP status names instead of HTTP status codes for
bulk item responses. With this commit we restore the original
behavior.
Checked with @bleskes.
This method currently allows writing arbitrary bytes into an xcontent stream.
I changed it so that it can only write data to the same stream as the xcontent
(the bos parameter is removed) and that it yells at you if you try to write
raw bytes that can't be recognized as xcontent. Also, the logic to copy the
structure instead of appending the bytes directly when the source and target
are of different xcontent types has been moved to the low-level
XContentGenerator.
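A hedged sketch of the resulting decision (hypothetical types and method names; the real logic operates on the actual xcontent types inside XContentGenerator):
```
// Illustration only, not the actual XContentGenerator code.
enum Format { JSON, SMILE, YAML, CBOR }

interface RawSink {
    Format format();
    void appendRawBytes(byte[] bytes);              // cheap pass-through
    void copyStructure(byte[] bytes, Format from);  // parse and re-serialize
}

final class RawFieldWriter {
    static void writeRaw(RawSink sink, byte[] source, Format sourceFormat) {
        if (sourceFormat == null) {
            // raw bytes that cannot be recognized as xcontent are rejected
            throw new IllegalArgumentException("bytes are not a supported xcontent format");
        }
        if (sourceFormat == sink.format()) {
            sink.appendRawBytes(source);
        } else {
            // different source/target formats: copy the structure instead of the bytes
            sink.copyStructure(source, sourceFormat);
        }
    }
}
```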
Tons of ancient "benchmarks" exist in elasticsearch. These are main
methods that do some kind of construction of ES classes and time various
things. The problem with these is they are not maintained, and not run.
Refactorings that touch anything common to these classes are very
painful. Going through these, almost all would simply not work in 2.x
without modifications (because they do not set path.home).
This change removes the entire benchmark package. If someone needs to
run a benchmark like this, they can look at history for examples if
necessary (although these examples are often not realistic and should
just start real elasticsearch processes in a shell script). Longer term,
we should make this easier to do by having the build support adding real
benchmarks which can be run in jenkins (so we know they actually run,
instead of doing refactorings with pure guesswork as to whether the
benchmark would run correctly).
we are not ready for this yet:
```
if (shardRouting.primary() && shardRouting.isRelocationTarget() == false) {
    throw new IllegalIndexShardStateException(shardId, state, "shard is not a replica");
}
```
IndexResponse, DeleteResponse and UpdateResponse share some logic. This can be unified into a single DocWriteResponse base class. Additionally, some replication actions are no longer about write operations, so this commit renames ActionWriteResponse to ReplicationResponse.
Lastly, some toXContent logic is moved from the REST layer to the actual response classes, for more code sharing.
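A rough sketch of the resulting hierarchy (field names are illustrative, not the exact class contents):
```
// Formerly ActionWriteResponse; also used by replication actions that are not writes.
class ReplicationResponse { /* shard info etc. */ }

// Shared state and shared toXContent logic for document write responses.
abstract class DocWriteResponse extends ReplicationResponse {
    protected String index;
    protected String id;
    protected long version;
}

class IndexResponse extends DocWriteResponse { /* e.g. created flag */ }
class DeleteResponse extends DocWriteResponse { /* e.g. found flag */ }
class UpdateResponse extends DocWriteResponse { /* update-specific fields */ }
```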
Closes #15334
The test configuration with seed A23029712A7EFB34 overwhelmed the pool used
by TransportService#sendLocalRequest().
With this commit we reduce the maximum number of concurrent requests from 10 to 7 and
add the failure message to the test output on the failing assertion for easier analysis.