To support real-time gets, the engine keeps an in-memory map of recently indexed docs and their location in the translog. This is needed until documents become visible in Lucene. With 1.3.0, we improved this map and integrated it tightly with the refresh cycles in Lucene in order to keep its memory footprint to a bare minimum. On top of that, if the version map grows above 25% of the index buffer size, we proactively refresh so that the version map can be trimmed back to 0 (see #6363). In the same release, we fixed an issue where an update to the indexing buffer could result in an unwanted exception during recovery (#6667). We solved this by deferring the update of the index buffer size until the shard was fully recovered. Sadly, these two changes together can have a negative impact on the speed of translog recovery.
During the second phase of recovery we replay all operations that happened on the shard during the first phase of copying files. In parallel we start indexing new documents into the newly created shard. At some point (phase 3 of the recovery), the translog replay starts to send operations which have already been indexed into the shard. The version map is crucial for quickly detecting this and skipping the replayed operations without hitting Lucene. Sadly, #6667 (only updating the index memory buffer once the shard is started) means that a recovering shard uses the default 64MB for its index buffer, and thus only 16MB (25%) for the version map. This is much less than the default index buffer size of 10% of the machine memory (shared between shards).
Since we no longer flush when updating the memory buffer, we can remove the change from #6667 and update recovering shards as well. Also, we make the version map's maximum size configurable, keeping the same default of 25% of the current index buffer.
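As an illustration, the trigger has roughly this shape (a minimal sketch; the class, field and `refresh` method below are hypothetical, not the actual engine code):

```java
/**
 * Minimal sketch, not the actual Elasticsearch engine code: once the
 * version map uses more than its configured fraction of the indexing
 * buffer, refresh so the map can be trimmed back to zero.
 */
class VersionMapRefreshSketch {
    static final long INDEX_BUFFER_BYTES = 64L * 1024 * 1024; // default per-shard buffer
    static final double VERSION_MAP_RATIO = 0.25;             // configurable, default 25%

    private long versionMapBytesUsed; // grows as documents are indexed

    void maybeRefresh() {
        if (versionMapBytesUsed > (long) (INDEX_BUFFER_BYTES * VERSION_MAP_RATIO)) {
            // A refresh makes the pending docs visible in Lucene,
            // after which the version map entries can be dropped.
            refresh("version map too large");
        }
    }

    private void refresh(String reason) { /* trigger a Lucene refresh */ }
}
```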
Closes #10046
The analysis chain should be used instead of relying on this, as it is
confusing when dealing with different per-field analysers.
The `locale` option was only used for `lowercase_expanded_terms`; once that is removed, `locale` is no longer needed, so it was removed as well.
Fixes #9978
Relates to #9973
The file script cache key should include the language of the script to prevent conflicts between scripts with the same name but different extensions (hence different langs). Note that script engines can register multiple acronyms that may be used as the lang at execution time (e.g. javascript and js are synonyms). We then need to make sure that the same script gets loaded no matter which of the acronyms is used at execution time. The problem didn't exist before this change as the lang was simply ignored, while now we take it into account.
This change also has a positive effect on inline script caching. Up until now, the same script referred to with different acronyms would be compiled and cached multiple times, once per acronym. After this change every script gets compiled and cached only once, as we internally choose the acronym used as part of the cache key, no matter which one the user provides.
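A minimal sketch of the idea, assuming a hypothetical `CacheKeySketch` class and a js/javascript synonym pair (the real synonym registry lives in the script engines):

```java
import java.util.Objects;

/** Sketch of a script cache key that includes a canonical language name. */
final class CacheKeySketch {
    private final String lang;   // canonical acronym, e.g. always "js", never "javascript"
    private final String script; // script name (file scripts) or source (inline scripts)

    CacheKeySketch(String userSuppliedLang, String script) {
        this.lang = canonical(userSuppliedLang);
        this.script = script;
    }

    /** Map every registered synonym to one canonical name. */
    private static String canonical(String lang) {
        switch (lang) {
            case "javascript":
            case "js":
                return "js";
            default:
                return lang;
        }
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CacheKeySketch)) return false;
        CacheKeySketch other = (CacheKeySketch) o;
        return lang.equals(other.lang) && script.equals(other.script);
    }

    @Override
    public int hashCode() {
        return Objects.hash(lang, script);
    }
}
```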
Closes #10033
Remove the settings around dangling indices, such as disabling the import and the timeout for deletion. We always want to import dangling indices for safety, and we should not allow changing this behavior. This also cleans up the code quite a bit.
closes #10016
We want to make sure the recovery finished all the way to post recovery. The current check, validating that the shard is either in POST_RECOVERY or STARTED, is not good enough because the shard could also be closed if things go fast enough (like in our tests). The assertion is changed to check that the shard is not left in CREATED or RECOVERING.
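For illustration, the relaxed assertion looks roughly like this (the enum and class below are stand-ins for the real shard state handling, not the actual code):

```java
/** Sketch only; state names mirror those in the message above. */
enum ShardState { CREATED, RECOVERING, POST_RECOVERY, STARTED, CLOSED }

class RecoveryAssertionSketch {
    void checkRecoveredPastInitialStages(ShardState state) {
        // Old check: state == POST_RECOVERY || state == STARTED — fails when the
        // shard is already CLOSED by the time the check runs (e.g. in fast tests).
        // New check: the shard merely must not be stuck in an early stage.
        assert state != ShardState.CREATED && state != ShardState.RECOVERING
                : "shard still in state " + state;
    }
}
```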
Closes #10028
- Added NAME constants for each script language, avoiding repetition of the same strings all over the place (see the sketch after this list).
- Simplified `compile` method signatures by removing a couple of variants. Note that all of these signatures are going to change again with #6418, as in order to compile/execute a script the caller will need to specify which operation is attempting to execute the script; that info will be provided as an additional mandatory argument.
- Removed double call to ScriptService#verifyDynamicScripting for every indexed or dynamic script.
- Decreased the visibility of the ScriptService inner classes (CacheKey, IndexedScript, ApplySettings) to private.
- Moved the ScriptService inner classes to the bottom of the class; I think it makes the code more readable.
- Resolved some compiler warnings
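As a trivial illustration of such a constant (the class name and value below are just examples):

```java
/** Sketch: each script engine exposes its language name as a constant. */
class GroovyScriptEngineServiceSketch {
    public static final String NAME = "groovy";
    // Callers reference GroovyScriptEngineServiceSketch.NAME instead of
    // repeating the "groovy" string literal all over the code base.
}
```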
Closes #9992
The tribe node, at startup, sets up the tribe clients that will join their corresponding tribes. All of the tribe.* settings are properly forwarded to the corresponding tribe clients. System properties and global configuration settings must not be forwarded to the tribe clients though, or they will end up overriding per-tribe settings with the same name, causing issues.
For instance, if you set transport.tcp.port to some defined value for the tribe node, via system property or configuration file, that same value must not be forwarded to the tribe clients; otherwise they will try to use the same port, which will already be occupied by the tribe node itself, resulting in a startup failure. The same goes for cluster.name, which would cause the tribe clients not to join their tribes.
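A simplified sketch of the intended filtering, assuming a plain Map-based settings model rather than the real Settings class:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: build tribe client settings from tribe.* keys only. */
class TribeSettingsSketch {
    static Map<String, String> tribeClientSettings(String tribeName, Map<String, String> nodeSettings) {
        Map<String, String> result = new HashMap<>();
        String prefix = "tribe." + tribeName + ".";
        for (Map.Entry<String, String> e : nodeSettings.entrySet()) {
            // Forward only the per-tribe settings, stripped of their prefix.
            // Global settings such as transport.tcp.port or cluster.name must
            // NOT leak into the tribe client, or they override per-tribe values.
            if (e.getKey().startsWith(prefix)) {
                result.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return result;
    }
}
```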
Closes #9576
Closes #9721
This commit adds scripting capability to significant_terms.
Custom heuristics can be implemented with a script that has access to the parameters subset_freq, superset_freq, subset_size and superset_size.
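As a hedged illustration, a heuristic could combine those parameters as below (a plain Java sketch of the computation, not the actual script-engine binding):

```java
/** Sketch: a simple "frequency lift" heuristic over the four provided parameters. */
class SignificanceScriptSketch {
    static double score(long subsetFreq, long subsetSize, long supersetFreq, long supersetSize) {
        if (subsetSize == 0 || supersetSize == 0 || supersetFreq == 0) {
            return 0;
        }
        double subsetRate = (double) subsetFreq / subsetSize;       // term rate in the foreground set
        double supersetRate = (double) supersetFreq / supersetSize; // term rate in the background set
        return subsetRate / supersetRate; // > 1 means over-represented in the subset
    }
}
```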
closes #7850
The local DocumentMapper is updated while parsing, and dynamic fields are added before parsing has finished. If parsing fails after a dynamic field has already been added, then the field was not added to the cluster state but was present in the local mapper of this node. New documents with the same field would not necessarily cause an update either, and after restarting the node the mapping for these fields was lost. Instead, the new fields should always be updated.
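A rough sketch of the shape of the fix, with hypothetical method names:

```java
/** Sketch: propagate dynamically added fields even when parsing fails. */
class DynamicMappingUpdateSketch {
    void index(String source) {
        try {
            parse(source); // may add dynamic fields to the local mapper before failing
        } finally {
            // Previously the update was skipped on failure, leaving the field in the
            // local mapper but absent from the cluster state; now it is always sent.
            sendMappingUpdateForNewFields();
        }
    }

    private void parse(String source) { /* ... */ }
    private void sendMappingUpdateForNewFields() { /* ... */ }
}
```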
closes #9851
closes #9874
Since the method can be called from an #execute event of the cluster service, we need the ability to use the cluster state that will be provided in the ClusterChangedEvent; the ClusterState is therefore now provided as a parameter.
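Illustratively, the change has roughly this shape (the types below are stubs and the method names are hypothetical):

```java
/** Sketch of the signature change; type and method names are illustrative. */
interface ClusterState {}
interface ClusterService { ClusterState state(); }

class ClusterStateParameterSketch {
    // Before: the method pulled the state from the service, which inside an
    // #execute callback is not yet the state the task will be applied to.
    void validateBefore(ClusterService clusterService) {
        ClusterState state = clusterService.state();
        // ... checks against a possibly stale state ...
    }

    // After: the caller (e.g. the ClusterChangedEvent handler) passes in the
    // exact state it is operating on.
    void validateAfter(ClusterState state) {
        // ... checks against the supplied state ...
    }
}
```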
Previously it was ignored and the publish cluster state timeout would kick in. In that case a stale master node would just wait for the inevitable and waste valuable time.
This issue was discovered by the DiscoveryWithServiceDisruptionsTests#testStaleMasterNotHijackingMajority test.
Also, only perform the cluster state version and wrong-master-node checks inside the cluster state update task.
If a tragic event happens while we are reading the segments info from the store, the store might have been closed concurrently. We had this behavior before, but it was lost in a refactoring.
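A hedged sketch of the restored behavior (the `Store` and `SegmentsInfo` types below are stubs; only Lucene's `AlreadyClosedException` is the real class):

```java
import org.apache.lucene.store.AlreadyClosedException;

/** Sketch: tolerate the store being closed concurrently by a tragic event. */
class ReadSegmentsInfoSketch {
    interface SegmentsInfo {}
    interface Store { SegmentsInfo readLastCommittedSegmentsInfo(); }

    SegmentsInfo readSafely(Store store) {
        try {
            return store.readLastCommittedSegmentsInfo();
        } catch (AlreadyClosedException e) {
            // A tragic event (e.g. a disk error) closed the store while we were
            // reading; report "no info" instead of failing the caller.
            return null;
        }
    }
}
```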
This will allow many reducers to share the same helper functionality without repeating code. I chose to put these in static helpers instead of adding them to the Reducer base class: I can imagine other reducers that aren't time-based (or don't care about contiguous buckets), which would make things like gap policy useless for them. Since these seemed more like helpers than inherent traits of a Reducer, they went into their own static class.
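A hedged sketch of what such a static helper might look like (the GapPolicy values and helper signature are assumptions based on the description above):

```java
/** Sketch of shared reducer helpers kept in a static utility class. */
final class ReducerHelpersSketch {
    enum GapPolicy { SKIP, INSERT_ZEROS }

    private ReducerHelpersSketch() {} // static helpers only, no instances

    /** Resolve a possibly-missing bucket value according to the gap policy. */
    static Double resolveValue(Double value, GapPolicy policy) {
        if (value != null) return value;
        return policy == GapPolicy.INSERT_ZEROS ? Double.valueOf(0.0) : null;
    }
}
```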
Closes #9954
If a folder for an index was created, that folder is never deleted from that node unless the index is deleted. Data-only nodes can therefore have empty folders for indices that they do not even have shards for. This commit makes sure empty folders are cleaned up after all shards have moved away from a data-only node. The behavior is unchanged for master-eligible nodes.
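A minimal sketch of the cleanup condition (all names below are hypothetical, not the actual implementation):

```java
import java.io.IOException;
import java.nio.file.Path;

/** Sketch: delete an index folder on a data-only node once no shards remain. */
class EmptyIndexFolderCleanupSketch {
    void maybeDeleteIndexFolder(Path indexFolder, boolean masterEligible,
                                boolean hasShardsForIndex) throws IOException {
        // Master-eligible nodes keep the folder (behavior unchanged); a data-only
        // node removes it once the last shard has moved away.
        if (!masterEligible && !hasShardsForIndex) {
            deleteRecursively(indexFolder);
        }
    }

    private void deleteRecursively(Path path) throws IOException { /* ... */ }
}
```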
closes #9985
The Histogram and Range APIs for the aggregations changed so that there is a common interface between the types of Range/Histogram. This PR reflects that change in the Java API docs.
Contributes to #9976
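As a hedged illustration of what such a common interface could look like on the Java API side (the interface and method names below are assumptions, not the actual API):

```java
import java.util.List;

/** Sketch: one bucket-bearing interface shared by the Range/Histogram variants. */
interface MultiBucketAggSketch {
    List<? extends Bucket> getBuckets();

    interface Bucket {
        String getKeyAsString();
        long getDocCount();
    }
}
```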