The search template and template query did not run the template through the script engine before searching for an inner template. Parsing for the inner template could therefore fail, because a template that contains mustache code is not necessarily valid JSON until it has been rendered. This has been fixed and tests have been added to check for the failing behaviour.
Tests are from https://github.com/elastic/elasticsearch/pull/8393
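For illustration, a hedged sketch of the kind of template body that triggers the problem (field names and mustache sections below are made up, not taken from the linked tests): the body only becomes valid JSON once the script (mustache) engine has rendered it, so parsing it up front to locate the inner template fails.
```java
// Illustrative only: a search template whose body is not valid JSON until the
// mustache expressions are rendered. Parsing this string as-is to find the
// inner template fails on the {{#with_size}} ... {{/with_size}} section.
String templateWithMustache =
      "{ \"query\": { \"match\": { \"text\": \"{{query_string}}\" } }"
    + "  {{#with_size}}, \"size\": {{size}} {{/with_size}} }";
```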
We have to make sure all shards are started so we know the synced flush will hit them all. Shards that are still initializing during the synced flush may be missed and confuse the stats call.
In #10918, we introduced the prompt placeholders. These had a different format
than our existing placeholders. This changes the prompt placeholders to follow the
format of the existing placeholders.
Relates to #11455
This commit adds an additional jar that is shaded and keeps all the
artifacts that are used by default on the server side unshaded. Users
that need a shaded jar can now use the `shaded` classifier to pull
in the shaded, minimized jar instead. Including the shaded jar in a
downstream project looks like this:
```xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <classifier>shaded</classifier>
</dependency>
```
After asynchronously fetching shard information, the gateway allocator issues a reroute via a cluster state update task. #11421 introduced an optimization that tries to avoid submitting unneeded reroutes when results for many shards come in together. This is done with a rerouting flag which indicates that a reroute is pending, so newly arriving shard info does not need to issue another one. However, this flag was not reset when the reroute update task failed. Most notably, if a master node had to step down due to a min_master_node violation, it could reject an ongoing reroute; failing to reset the flag then caused all future reroutes to be skipped once the node became master again.
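As a rough sketch of the pattern involved (class and method names below are illustrative, not the actual GatewayAllocator code), the pending-reroute flag has to be cleared on the failure path as well as on success:
```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only: a "reroute pending" flag that suppresses redundant
// reroute submissions must be reset on failure too, otherwise a rejected update
// task (e.g. after the node stepped down as master) leaves the flag stuck and
// no future shard info will ever trigger a reroute again.
class RerouteScheduler {

    private final AtomicBoolean rerouting = new AtomicBoolean();

    /** Called whenever new async shard info arrives. */
    void maybeSubmitReroute() {
        if (!rerouting.compareAndSet(false, true)) {
            return; // a reroute is already pending, no need to submit another one
        }
        submitClusterStateUpdate(new Listener() {
            @Override
            public void onSuccess() {
                rerouting.set(false); // reroute applied, allow new ones
            }

            @Override
            public void onFailure(Throwable t) {
                rerouting.set(false); // the missing reset this change adds
            }
        });
    }

    interface Listener {
        void onSuccess();
        void onFailure(Throwable t);
    }

    // Stand-in for submitting the reroute as a cluster state update task.
    void submitClusterStateUpdate(Listener listener) {
        listener.onSuccess();
    }
}
```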
Closes #11519
Since we are creating write.lock earlier now, the blob store shouldn't attempt to delete this file during cleanup at the end of the restore process. The file is locked, so the deletion doesn't succeed, but it generates a lot of useless warnings: "failed to delete file [write.lock] during snapshot cleanup".
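A minimal sketch of the cleanup rule, assuming a plain file-based layout and made-up names rather than the actual restore code: files that are not part of the snapshot get deleted at the end of the restore, except for write.lock.
```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;

// Illustrative sketch only: end-of-restore cleanup that removes files which are
// not part of the snapshot, but leaves write.lock alone since it is created and
// held earlier, and deleting it would only fail with a noisy warning.
final class RestoreCleanup {
    static void deleteFilesNotInSnapshot(Path shardDir, Set<String> snapshotFiles) throws IOException {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(shardDir)) {
            for (Path file : stream) {
                String name = file.getFileName().toString();
                if ("write.lock".equals(name) || snapshotFiles.contains(name)) {
                    continue; // locked or still needed
                }
                Files.deleteIfExists(file);
            }
        }
    }
}
```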
Closes #11517
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>2.5.5</version>
</plugin>
```
## Release Notes - Maven Assembly Plugin - Version 2.5.5
* Bug
  - [MASSEMBLY-767] - Schema missing from the web site
  - [MASSEMBLY-768] - JarInputStream unable to find manifest created by version 2.5.4
  - [MASSEMBLY-769] - ZIP fileMode permissions not properly set with dependencySet and unpackOptions
Some of our meta fields (such as _id, _version, ...) are returned as top-level
properties of the JSON document, while other meta fields (_timestamp, _routing,
...) are returned under `fields`. This commit makes all meta fields be returned
as top-level properties.
So e.g. `GET test/test/1?fields=_timestamp,foo` would now return
```json
{
  "_index": "test",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "_timestamp": 10000000,
  "found": true,
  "fields": {
    "foo": [ "bar" ]
  }
}
```
while it used to return
```json
{
  "_index": "test",
  "_type": "test",
  "_id": "1",
  "_version": 1,
  "found": true,
  "fields": {
    "_timestamp": 10000000,
    "foo": [ "bar" ]
  }
}
```
To protect ourselves against running blocking operations on a network thread, we added an assertion that verifies that the thread calling BaseFuture.get() is not a networking thread. While this assertion is good, it wrongly triggers when get() is called in order to pass its result to a listener of AbstractListenableActionFuture that is marked to run on the calling thread. At that point we know that the operation has completed and the get() call will not block.
To solve this, we change the assertion to ignore a get() with a timeout of 0 and use that in AbstractListenableActionFuture.
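Roughly, the relaxed check looks like the sketch below (names and the network-thread detection are illustrative, not the exact BaseFuture code): a get() with a timeout of 0 cannot block, so it is exempt from the assertion, and AbstractListenableActionFuture can use such a call to hand the already-completed result to a same-thread listener.
```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: allow zero-timeout gets on network threads because
// they cannot block; any other get() on a network thread trips the assertion.
final class BlockingCallCheck {

    static void assertNotBlockingOnNetworkThread(long timeout, TimeUnit unit) {
        assert unit.toNanos(timeout) == 0 || !isNetworkThread(Thread.currentThread())
                : "blocking get() called from a network thread";
    }

    // Stand-in check; the real code identifies network threads differently.
    static boolean isNetworkThread(Thread thread) {
        return thread.getName().contains("transport");
    }
}
```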
Relates to #10402
Closes #10573