This change allows Elasticsearch users to store scripts and templates in an index for use at search time.
Scripts/Templates are stored in the `.scripts` index. The document type is set to the script language.
Templates use the mustache language, so their type is "mustache".
Adds the concept of a script type to calls to the ScriptService; the types are INDEXED, INLINE, and FILE.
If a script type of INDEXED is supplied, an attempt is made to load the script from the index; FILE will
look in the file cache, and INLINE will treat the supplied script argument as the literal script.
REST endpoints are provided for CRUD operations, as is a Java client library.
All query DSL entry points have been upgraded to allow passing in explicit script ids and script file names.
Backwards-compatible behavior has been preserved, so this shouldn't break any existing queries that
pass a filename as the script/template name. The ScriptService will check the disk cache before parsing the
script.
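As a rough illustration of the three lookup modes described above (simplified stand-ins only, not the actual ScriptService API; the helper names are made up for the example):

```java
// Illustrative sketch only -- simplified stand-ins, not the real ScriptService implementation.
enum ScriptType { INDEXED, INLINE, FILE }

class ScriptLookupSketch {
    String resolve(ScriptType type, String script) {
        switch (type) {
            case INDEXED:
                // load the script source from the .scripts index by id
                return loadFromScriptsIndex(script);
            case FILE:
                // look the script up in the on-disk file cache by name
                return loadFromFileCache(script);
            case INLINE:
            default:
                // the supplied argument is the literal script source
                return script;
        }
    }

    private String loadFromScriptsIndex(String id) { /* hypothetical */ return null; }
    private String loadFromFileCache(String name) { /* hypothetical */ return null; }
}
```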
Closes#5921 #5637 #5484
For unicast ping, the DiscoveryNode identity is based on its id, which at that stage is a dummy value; this breaks any rule in the mock transport.
However, the TransportAddress is a valid value in unicast ping and everywhere else, so it is a better alternative.
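A minimal sketch of the idea (a hypothetical helper, not the actual mock transport code): match nodes by their published transport address rather than by the not-yet-known node id.

```java
import java.util.Objects;

// Hypothetical illustration: identify a node by its transport address
// instead of by its (still dummy) node id during unicast pinging.
class NodeByAddress {
    final String host;
    final int port;

    NodeByAddress(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof NodeByAddress)) return false;
        NodeByAddress other = (NodeByAddress) o;
        return port == other.port && host.equals(other.host);
    }

    @Override
    public int hashCode() {
        return Objects.hash(host, port);
    }
}
```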
Closes#6836
The Hunspell service would throw a confusing error message if more than
one affix file was present. This commit distinguishes between the two
error cases: when there are no affix files and when there are too many
affix files.
Also implements lazy dictionary loading, which was used in the tests
but not implemented.
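Roughly, the affix-file check now behaves like this (illustrative only; the names are made up and this is not the actual HunspellService code):

```java
import java.io.File;
import java.io.FilenameFilter;

// Illustrative sketch: distinguish "no affix file" from "too many affix files"
// instead of failing with a single confusing message.
class AffixFileCheckSketch {
    static File findAffixFile(File dictionaryDir) {
        File[] affixFiles = dictionaryDir.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String name) {
                return name.endsWith(".aff");
            }
        });
        if (affixFiles == null || affixFiles.length == 0) {
            throw new IllegalStateException("Missing affix file for dictionary in " + dictionaryDir);
        }
        if (affixFiles.length > 1) {
            throw new IllegalStateException("Too many affix files (" + affixFiles.length
                    + ") for dictionary in " + dictionaryDir);
        }
        return affixFiles[0];
    }
}
```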
Closes#6850
This commit adds the infrastructure to allow plugging in different
measures for computing the significance of a term.
Significance measures can be provided externally (see the sketch after the list below) by overriding
- SignificanceHeuristic
- SignificanceHeuristicBuilder
- SignificanceHeuristicParser
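For example, a custom measure might look something like this (a hypothetical plain-ratio heuristic; the class and method names are assumptions for illustration, not the exact plugin contract):

```java
// Hypothetical example of a pluggable significance measure: score a term by how much
// more frequent it is in the subset (e.g. the bucket) than in the superset (the background).
class SimpleRatioHeuristicSketch {
    /**
     * @param subsetFreq    occurrences of the term in the subset
     * @param subsetSize    number of documents in the subset
     * @param supersetFreq  occurrences of the term in the superset
     * @param supersetSize  number of documents in the superset
     */
    double getScore(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {
        if (subsetSize == 0 || supersetFreq == 0 || supersetSize == 0) {
            return 0d;
        }
        double subsetRate = (double) subsetFreq / subsetSize;
        double supersetRate = (double) supersetFreq / supersetSize;
        return subsetRate / supersetRate;
    }
}
```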
closes#6561
We have to mark a shard as corrupted, if necessary, before the
shard-failed event is fired, i.e. before we call the corresponding
listener in the engine. Otherwise the shard might be re-allocated
on the same node and simply started up without being marked as corrupted.
Relates to #5924
Now both TransportRequest and TransportResponse inherit from a base TransportMessage that holds the message headers and now also the remote transport address (where the message came from).
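Conceptually, the shared base looks roughly like this (a simplified sketch, not the exact class definitions):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the shared base: message headers plus the remote transport address.
abstract class TransportMessageSketch {
    private final Map<String, Object> headers = new HashMap<>();
    private String remoteAddress; // where this message came from

    public void putHeader(String key, Object value) { headers.put(key, value); }
    public Object getHeader(String key) { return headers.get(key); }

    public void remoteAddress(String address) { this.remoteAddress = address; }
    public String remoteAddress() { return remoteAddress; }
}

class TransportRequestSketch extends TransportMessageSketch {}
class TransportResponseSketch extends TransportMessageSketch {}
```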
This started out as a simple correction to a missing-setting problem, but grew into more general work on the ZenUnicastDiscoveryTests suite. It now works with both network and local mode. I also merged the different ZenUnicast test suites into a single place.
Closes#6835
The `recover_after_time` setting tells the gateway to wait before starting recovery from disk. The goal here is to allow more nodes to join the cluster and thus avoid starting potentially unneeded replications. The `expectedNodes` setting (and friends) tells the gateway that it may start recovering even if `recover_after_time` has not yet elapsed. However, `expectedNodes` is useless if one doesn't also set `recover_after_time`. This commit changes that by setting a sensible default of 5m for `recover_after_time` *if* an `expectedNodes` setting is present.
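In other words, the defaulting behaves roughly like this (illustrative logic only, not the actual gateway code; the setting names follow the ones used above):

```java
// Illustrative sketch of the new defaulting rule:
// if expectedNodes (or a related expected_* setting) is configured but
// recover_after_time is not, fall back to a 5 minute default.
class RecoverAfterTimeDefaultSketch {
    static String effectiveRecoverAfterTime(String recoverAfterTime, Integer expectedNodes) {
        if (recoverAfterTime == null && expectedNodes != null) {
            return "5m"; // sensible default so expectedNodes actually has an effect
        }
        return recoverAfterTime;
    }
}
```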
Closes#6742
Previously the size of the priority queue was wrongly set to the total number
of terms. Instead, it should be set to `maxQueryTerms`. This makes the
selection of the best terms O(n) instead of O(n*log(n)).
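The idea is the usual bounded top-k selection: keep a queue of at most `maxQueryTerms` entries and evict the weakest candidate, rather than queueing every term (a generic sketch, not the Lucene code):

```java
import java.util.PriorityQueue;

// Generic sketch of bounded top-k selection: the queue never grows past maxQueryTerms,
// so for a fixed maxQueryTerms the selection is effectively linear in the number of terms.
class TopTermsSketch {
    static PriorityQueue<Double> selectBest(double[] termScores, int maxQueryTerms) {
        PriorityQueue<Double> queue = new PriorityQueue<>(maxQueryTerms); // min-heap of the best scores seen
        for (double score : termScores) {
            if (queue.size() < maxQueryTerms) {
                queue.add(score);
            } else if (score > queue.peek()) {
                queue.poll();      // drop the weakest of the current best
                queue.add(score);
            }
        }
        return queue;
    }
}
```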
Jira patch: https://issues.apache.org/jira/browse/LUCENE-5795
Closes#6657
Out of #6808, we improved the handling of a failing primary to make sure that replicas which are initializing are properly failed as well. After double checking it, that change has two problems: first, if the same shard routing is failed again, there is no protection against applying the failure a second time (which we do have for failed-shard cases); second, we had already tried to handle it (wrongly) in the elect-primary method.
This change fixes the handling to work correctly in the elect-primary method and adds unit tests to verify the behavior.
The change also exposes a problem in our handling of replica shards that stay initializing during a primary failure when another replica shard is elected as primary: we need to cancel their ongoing recovery to make sure they restart from the newly elected primary.
closes#6825
Out of #6808, we improved the handling of a failing primary to make sure that replicas which are initializing are properly failed as well. After double checking it, that change has two problems: first, if the same shard routing is failed again, there is no protection against applying the failure a second time (which we do have for failed-shard cases); second, we had already tried to handle it (wrongly) in the elect-primary method.
This change fixes the handling to work correctly in the elect-primary method and adds unit tests to verify the behavior.
closes#6816
This is what DateHistogramParser expects, so it will enable the builder to build valid requests using these variables.
Also added tests for preOffset and postOffset, since such tests did not exist.
Closes#5586
Since Lucene version 4.8 each file has a checksum written as its
footer. We used to calculate the checksums for all files transparently
on the filesystem layer (Directory / Store), which is no longer
necessary. This commit makes use of the new checksums in a backwards
compatible way, such that files written with the old checksum mechanism
are still compared against the corresponding Adler-32 checksum, while
newer files are compared against the Lucene built-in CRC32 checksum.
Since every written file is now checksummed by default, this commit
also verifies the checksum for files during recovery and restore where
applicable.
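For illustration, the backwards-compatible verification boils down to picking the right checksum algorithm per file (a standalone sketch using the JDK checksum classes, not the actual Store/recovery code):

```java
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Illustrative sketch: legacy files are still verified with Adler-32, while files
// written by Lucene 4.8+ carry a CRC32 checksum in their footer.
class ChecksumVerifySketch {
    static boolean verify(byte[] fileBytes, long expectedChecksum, boolean legacyFile) {
        Checksum checksum = legacyFile ? new Adler32() : new CRC32();
        checksum.update(fileBytes, 0, fileBytes.length);
        return checksum.getValue() == expectedChecksum;
    }
}
```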
Closes#5924
This commit also has a fix for #6808 since the added tests in
`CorruptedFileTest.java` exposed the issue.
Closes#6808
Today, the tribe node needs the local node, so it adds it when it starts. But other APIs would benefit from adding the local node too, and adding the local node should be done in a cleaner manner, where it belongs: right after the discovery service starts in the cluster service.
closes#6811
This commit ensures that all primaries are allocated before we
restart the node. Otherwise, if a primary is in post-recovery when we
restart, it will not be allocated.