This change checks that `index.merge.scheduler.max_thread_count` < `index.merge.scheduler.max_merge_count` and fails index creation
and settings updates if the condition is not met.
Fixes #20380
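As an illustration, here is a minimal standalone sketch of this kind of validation; it is not the actual Elasticsearch code, and the class name and error message are assumptions:

```java
// Standalone sketch only; mirrors the condition stated in the commit message.
final class MergeSchedulerSettingsValidator {

    static void validate(int maxThreadCount, int maxMergeCount) {
        // Reject the settings up front, at index creation or settings update
        // time, instead of letting an invalid combination take effect.
        if (maxThreadCount >= maxMergeCount) {
            throw new IllegalArgumentException(
                "index.merge.scheduler.max_thread_count (= " + maxThreadCount
                    + ") must be < index.merge.scheduler.max_merge_count (= " + maxMergeCount + ")");
        }
    }
}
```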
The logging configuration tests write to log files which are deleted at
the end of the test. If these files are not closed, some operating
systems will complain when these deletes are performed. This commit
ensures that the logging system is properly shut down so that these files
can be properly deleted.
-D parameters used to be allowed when starting Elasticsearch via the scripts.
However, this was removed in #18207, but the elasticsearch-plugin.bat script
was overlooked. This change removes the -D handling there as well.
This was an error-prone version type that allowed overriding previous
version semantics. However, it could cause primaries and replicas to get
out of sync, so it has been removed.
Resolves #19769
The evil logging tests write to log files which are deleted at the end
of the test. If these files are not closed, some operating systems will
complain when these deletes are performed. This commit ensures that the
logging system is properly shut down so that these files can be properly
deleted.
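A minimal sketch of the shutdown-before-delete pattern described here, assuming Log4j 2.6+ (where `LogManager.shutdown()` is available); the helper class itself is made up:

```java
import org.apache.logging.log4j.LogManager;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Made-up helper: stop the logging system so appenders release their file
// handles before the log files are deleted (some operating systems refuse
// to delete files that are still open).
final class LoggingTestCleanup {

    static void shutdownAndDelete(Path logFile) throws IOException {
        LogManager.shutdown(); // closes all appenders and their files
        Files.deleteIfExists(logFile);
    }
}
```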
This change adds a `field.with.dots` field to all 2.4-and-above bwc indices.
It also adds verification code to OldIndexBackwardsCompatibilityIT to
ensure we upgrade the indices cleanly and the field is present.
Closes #19956
Due to the way the nodes were shut down, we always flushed away the
translog. This means we never tested upgrades of transaction logs from
older versions. This change regenerates all valid bwc indices and
repositories with transaction logs and adds corresponding changes to
OldIndexBackwardsCompatibilityIT.java.
Parsing a script on retrieval causes it to be re-parsed on every single script call, which can be very expensive for large, frequently called scripts. This change switches to parsing scripts only once, during the store operation.
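A condensed sketch of the compile-once idea with made-up types (this is not the actual ScriptService):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Made-up types illustrating "parse at store time": the expensive compilation
// happens once when the script is stored, and every later call reuses it.
final class StoredScripts {

    interface CompiledScript {
        Object run(Map<String, Object> params);
    }

    private final Function<String, CompiledScript> compiler;
    private final Map<String, CompiledScript> compiled = new ConcurrentHashMap<>();

    StoredScripts(Function<String, CompiledScript> compiler) {
        this.compiler = compiler;
    }

    void store(String id, String source) {
        compiled.put(id, compiler.apply(source)); // parse/compile exactly once
    }

    CompiledScript retrieve(String id) {
        return compiled.get(id); // no re-parsing on retrieval or per call
    }
}
```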
Hi all,
I was trying to run the percolate examples, but I found that the code wasn't working because of the `"type": "keyword"` mapping.
In the search query, the `"message": "A new bonsai tree in the office"` value is a pure string.
I changed the type to `"text"`.
With the search refactoring we no longer use SearchParseElement to define our own parsing code; it is only used by plugins. There was an abstract subclass called FetchSubPhaseParseElement in our production code, used only in one of our tests. We can remove that abstract class as it is not needed and not that useful for the test that depends on it.
FsInfo#total is removed in favour of getTotal, which allows retrieving the total value.
[TEST] fix FsProbeTests: null is not accepted as a path constructor argument
While adding the new settings infrastructure, the option to specify the
size of the filter cache as a percentage of the heap size was
accidentally removed. This change adds that ability back.
In addition, the `Setting` class had multiple `.byteSizeSetting` methods,
all but one of which used `ByteSizeValue.parseBytesSizeValue` to parse
the value; the remaining one used `MemorySizeValue.parseBytesSizeValueOrHeapRatio`.
This was confusing, as the way the value was parsed depended on how many
arguments were provided.
This change makes all `Setting.byteSizeSetting` methods parse the value
the same way, using `ByteSizeValue.parseBytesSizeValue`, and adds
`Setting.memorySizeSetting` methods to parse settings that express memory
sizes (i.e. values that can be absolute byte sizes or percentages of the
heap). Relevant settings have been moved to use these new methods.
Closes #20330
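To make the parsing difference concrete, a small demo using the two parse methods named above (class paths as in Elasticsearch 5.x; treat the exact signatures as assumptions on other versions):

```java
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.MemorySizeValue;

public class SizeParsingDemo {
    public static void main(String[] args) {
        // Absolute byte sizes work for plain byte-size settings:
        ByteSizeValue bytes = ByteSizeValue.parseBytesSizeValue("512mb", "my.byte.setting");
        System.out.println(bytes);

        // A heap percentage is only valid for memory-size settings; the
        // heap-ratio parser resolves "20%" against the current max heap,
        // while parseBytesSizeValue would reject it.
        ByteSizeValue heapRatio = MemorySizeValue.parseBytesSizeValueOrHeapRatio("20%", "my.memory.setting");
        System.out.println(heapRatio);
    }
}
```

With the new `Setting.memorySizeSetting` methods, the choice of parser is explicit in the method name rather than implied by the argument count.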
Expose the Lucene 6.x MinHash token filter.
It generates min hash tokens from an incoming stream of tokens, which can
be used to estimate document similarity.
Closes #20149
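As background on why min hash tokens are useful, here is a toy standalone illustration, not the Lucene implementation: the fraction of matching signature slots approximates the Jaccard similarity of two token sets.

```java
import java.util.Arrays;
import java.util.Set;

// Toy min-hash demo; the hash mixing below is deliberately simple and is an
// assumption, not what Lucene's MinHash filter does internally.
final class MinHashDemo {

    // One slot per hash function: keep the minimum hash value seen.
    static long[] signature(Set<String> tokens, long[] seeds) {
        long[] sig = new long[seeds.length];
        Arrays.fill(sig, Long.MAX_VALUE);
        for (String token : tokens) {
            for (int i = 0; i < seeds.length; i++) {
                long h = mix(token.hashCode() ^ seeds[i]);
                sig[i] = Math.min(sig[i], h);
            }
        }
        return sig;
    }

    // Fraction of matching slots approximates Jaccard similarity.
    static double estimatedSimilarity(long[] a, long[] b) {
        int matches = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == b[i]) {
                matches++;
            }
        }
        return (double) matches / a.length;
    }

    // Simple 64-bit mixer (splitmix64 finalizer).
    private static long mix(long z) {
        z = (z ^ (z >>> 30)) * 0xbf58476d1ce4e5b9L;
        z = (z ^ (z >>> 27)) * 0x94d049bb133111ebL;
        return z ^ (z >>> 31);
    }
}
```

With, say, 128 slots, two documents sharing half their tokens will agree on roughly half the slots.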
This changes DiskThresholdDecider to only factor in leaving shards when
checking whether a shard can remain. Previously, leaving shards were
factored into both the `canAllocate` and `canRemain` checks; now their
sizes are subtracted only in the `canRemain` check.
It was possible for multiple shards relocating away from the node to
each have their entire size subtracted, letting the node go over the
disk threshold (or fill the disk) because the decider subtracted space
that was still being used by other in-progress relocations.
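Illustrative arithmetic only; the names below are made up and this is not the actual DiskThresholdDecider code:

```java
// Sketch of the accounting change: leaving-shard sizes are subtracted for
// canRemain, but no longer for canAllocate.
final class DiskAccountingSketch {

    // canRemain: the shards relocating away will free this space, so the
    // remaining shard may be allowed to stay.
    static long usedBytesForCanRemain(long usedBytes, long[] leavingShardSizes) {
        long leaving = 0;
        for (long size : leavingShardSizes) {
            leaving += size;
        }
        return usedBytes - leaving;
    }

    // canAllocate: leaving shards still occupy disk until their relocations
    // complete, so counting that space as free could push the node over the
    // watermark (or fill the disk) while relocations are in flight.
    static long usedBytesForCanAllocate(long usedBytes) {
        return usedBytes;
    }
}
```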
** The default script language is now maintained in the `Script` class.
* Added a `script.legacy.default_lang` setting that controls the default language for scripts that are stored inside documents (for example percolator queries). This defaults to groovy.
** Added `QueryParseContext#getDefaultScriptLanguage()` that manages the default scripting language. It always returns `painless`, unless a query/search request is loaded in legacy mode, in which case it returns what is configured in the `script.legacy.default_lang` setting (see the sketch after this list).
** In the aggregation parsing code, added a `ParserContext` that also holds the default scripting language, like `QueryParseContext`. Most parsers don't have access to `QueryParseContext`; this is for scripts in aggregations.
* The `lang` script field is always serialized (toXContent).
Closes #20122
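A hypothetical condensed sketch of that legacy fallback; the class and field names are illustrative, not the actual QueryParseContext implementation:

```java
// Illustrative only: condenses the default-language fallback described above.
final class DefaultScriptLanguage {

    private final boolean legacyMode;        // loading a pre-upgrade stored query/search request?
    private final String legacyDefaultLang;  // value of script.legacy.default_lang (defaults to groovy)

    DefaultScriptLanguage(boolean legacyMode, String legacyDefaultLang) {
        this.legacyMode = legacyMode;
        this.legacyDefaultLang = legacyDefaultLang;
    }

    String getDefaultScriptLanguage() {
        // New requests default to painless; only legacy stored content falls
        // back to the configured legacy language.
        return legacyMode ? legacyDefaultLang : "painless";
    }
}
```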
Be much more stingy about what we consider a console candidate.
* Add `// CONSOLE` to check-running
* Fix version in some snippets
* Mark groovy snippets as groovy
* Fix versions in plugins
* Fix language marker errors
* Fix language parsing in snippets
This adds support for snippets whose language is written like
`[source, txt]` and `["source","js",subs="attributes,callouts"]`.
This also makes the language declaration required for snippets, which is
nice because then we can be sure we can grep for snippets in a particular
language.
When Elasticsearch depended on Log4j 1, there was jar hell between the
log4j and apache-log4j-extras jars. As these dependencies are gone,
the jar hell exemption for Log4j 1 can be removed.
Relates #20336
This commit expands on the message printed when config files are
preserved while removing a plugin, to give the user an indication of
why the config files are preserved.
Replicated operations consist of a routing action (the original), which is in charge of sending the operation to the primary shard; a primary action, which executes the operation on the resolved primary; and replica actions, which perform the operation on a specific replica. This commit adds the targeted shard's allocation id to the primary and replica actions and makes sure that it matches the shard the actions end up executing on.
This helps prevent an extremely rare failure mode where a shard moves off a node and back onto it, all between the time an action is sent and the time it is processed.
For example:
1) A primary action is sent to a relocating primary on node A.
2) The primary finishes relocating to node B and starts relocating back.
3) The relocation back reaches the phase that opens up the target engine on the original node, node A.
4) The primary action is executed on the target engine before the relocation finishes, at which point the shard copy on node B is still the official primary - i.e., the action is executed on the wrong primary.
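A condensed standalone sketch of the allocation id check that prevents this; the types are made up, not the actual replication action code:

```java
// Made-up types: the request carries the allocation id of the shard copy it
// was routed to, and execution refuses to proceed on a different copy.
final class AllocationIdCheck {

    record ReplicationRequest(String targetAllocationId) {}

    record ShardCopy(String allocationId) {}

    static void executeOn(ReplicationRequest request, ShardCopy shard) {
        // If the shard moved away and came back between routing and execution,
        // the copy now has a different allocation id, so the stale action
        // fails instead of running on the wrong shard copy.
        if (!request.targetAllocationId().equals(shard.allocationId())) {
            throw new IllegalStateException("expected shard with allocation id ["
                + request.targetAllocationId() + "] but found [" + shard.allocationId() + "]");
        }
        // ... perform the primary or replica operation here ...
    }
}
```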