The MapperService is the "index-wide view" of mappings. Methods on it
are used at query time to look up how to query a field. This
change reduces the exposed API so that any information returned
is limited to what MappedFieldType exposes. In the future,
MappedFieldType will be guaranteed to be the same across all
document types for a given field.
Note that CompletionFieldType needed a few more settings moved into it. Other
than that, this change is almost purely cosmetic.
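Very roughly, query-time code now resolves everything through MappedFieldType alone. The sketch below only illustrates the idea; the particular method names used (smartNameFieldType, termQuery) are assumptions for illustration, not something introduced by this change:

```java
import org.apache.lucene.search.Query;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;

// Rough sketch only: the lookup and query methods shown (smartNameFieldType,
// termQuery) are assumptions used to illustrate the narrowed surface.
class FieldTypeLookupExample {

    Query termQueryFor(MapperService mapperService, String field, Object value) {
        // The index-wide view hands back a MappedFieldType...
        MappedFieldType fieldType = mapperService.smartNameFieldType(field);
        if (fieldType == null) {
            return null; // unmapped field
        }
        // ...and everything the query needs comes from that type only.
        return fieldType.termQuery(value, null);
    }
}
```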
The shard can potentially not be deleted if the observer that checks for the shard
being STARTED is not registered, because registration is delayed by the disruption.
If the sum of the delays exceeds 10s, the wait for shard deletion will time out.
The ResourceWatcher used settings prefixed `watcher.`, which
could potentially clash with the watcher plugin.
In order to prevent confusion, the settings have been renamed to
use the `resource.reload` prefix.
This also uses the deprecation logging infrastructure introduced
in #11033 to log deprecated settings and their alternative at
startup.
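As a rough sketch of what the renamed settings look like when building node settings (the concrete keys below and the Settings.builder() call are assumptions for illustration; only the `resource.reload` prefix comes from this change):

```java
import org.elasticsearch.common.settings.Settings;

// Sketch of node settings using the renamed prefix. The concrete keys
// (enabled, interval.high) and the Settings.builder() call are assumptions
// for illustration; only the `resource.reload` prefix comes from this change.
class ResourceReloadSettingsExample {

    Settings nodeSettings() {
        return Settings.builder()
                .put("resource.reload.enabled", true)        // previously under the `watcher.` prefix
                .put("resource.reload.interval.high", "5s")
                .build();
    }
}
```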
Closes #11175
In case a developer gets a list of ids from another data source,
it does not make a lot of sense to convert it to an array first,
only for Elasticsearch to create a list out of it again internally
in IdsQueryBuilder.
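A sketch of what this allows, with the ids handed straight to the builder (the Collection-accepting addIds overload is the assumption this example relies on):

```java
import java.util.List;

import org.elasticsearch.index.query.IdsQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// Sketch: pass the ids straight to the builder instead of converting them
// to a String[] first. The addIds overload taking a Collection is the
// assumption this example relies on.
class IdsQueryExample {

    IdsQueryBuilder fromOtherDataSource(List<String> ids) {
        return QueryBuilders.idsQuery().addIds(ids);
    }
}
```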
Closes #5089
In order for exists queries to use the null value for
a field, the null value needs to be part of the field type (should
differ between document types). This change moves the null value
into the field type and simplifies the null value
methods available, removing supportsNullValue().
#11363 introduced retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws our recovery stats off, as it may count operations twice.
There are different ways to register custom query parsers through plugins; a couple of them work per index via index settings, which is probably more flexibility than needed. There are also three different ways to add a global custom query parser, through either IndicesQueriesModule or IndicesQueriesRegistry. This commit consolidates the registration of custom query parsers on IndicesQueriesModule#addQuery(Class<? extends QueryParser>). The complexity of supporting parsers per index is not needed, hence it was removed. The other ways of registering global custom parsers are dropped in favour of the one mentioned above.
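In plugin code the registration would now look roughly like the following. The Plugin base class and the onModule hook are assumptions about the surrounding plugin wiring, and MyQueryParser is a hypothetical parser; addQuery is the method named above:

```java
import org.elasticsearch.indices.query.IndicesQueriesModule;
import org.elasticsearch.plugins.Plugin;

// Sketch of the single remaining registration path for a global custom query
// parser. MyQueryParser stands in for a hypothetical QueryParser
// implementation; the Plugin base class and onModule hook are assumptions
// about the plugin infrastructure.
public class MyQueryPlugin extends Plugin {

    @Override
    public String name() {
        return "my-query-plugin";
    }

    @Override
    public String description() {
        return "registers MyQueryParser through IndicesQueriesModule#addQuery";
    }

    public void onModule(IndicesQueriesModule module) {
        module.addQuery(MyQueryParser.class);
    }
}
```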
Closes #11481
The search template and template query did not run the template through the script engine before searching for an inner template. This meant that parsing for the inner template failed, because the template was not always valid JSON (if it contained mustache code) when it was parsed to find the inner template. This has been fixed and tests have been added to check for the failing behaviour.
Tests are from https://github.com/elastic/elasticsearch/pull/8393
In #10918, we introduced the prompt placeholders. These had a different format
than our existing placeholders. This changes the prompt placeholders to follow the
format of the existing placeholders.
Relates to #11455
This commit adds an additional jar that is shaded, and keeps all the
artifacts that are used by default on the server side unshaded. Users
that need a shaded jar can now use the `shaded` classifier to pull
in the shaded, minimized jar instead. Including the shaded jar in a
downstream project looks like this:
```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <classifier>shaded</classifier>
</dependency>
```
After asynchronously fetching shard information, the gateway allocator issues a reroute via a cluster state update task. #11421 introduced an optimization that tries to avoid submitting unneeded reroutes when results for many shards come in together. This is done with a rerouting flag, indicating that a reroute is pending and thus any newly incoming shard info does not need to issue another reroute. This flag was not reset when the reroute update task failed. Most notably, if a master node had to step down due to a min_master_node violation, it could reject an ongoing reroute. Failing to reset the flag caused any future reroute to be skipped once the node became master again.
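Schematically, the fix amounts to clearing the flag on the failure path as well, not only when the reroute task executes. The names below are illustrative, not the actual GatewayAllocator code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;

// Illustrative sketch of the flag handling; field and method names do not
// mirror the actual GatewayAllocator code.
class RerouteOnFetchExample {

    private final AtomicBoolean rerouting = new AtomicBoolean();

    void onShardFetchResult(ClusterService clusterService) {
        // Only submit a reroute if one isn't already pending.
        if (rerouting.compareAndSet(false, true) == false) {
            return;
        }
        clusterService.submitStateUpdateTask("async_shard_fetch", new ClusterStateUpdateTask() {
            @Override
            public ClusterState execute(ClusterState currentState) {
                rerouting.set(false);
                // ... perform the actual reroute
                return currentState;
            }

            @Override
            public void onFailure(String source, Throwable t) {
                // Without this reset, a rejected reroute (e.g. after losing
                // master status) would block all future reroutes.
                rerouting.set(false);
            }
        });
    }
}
```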
Closes #11519
Since we now create write.lock earlier, the blob store shouldn't attempt to delete this file during cleanup at the end of the restore process. The file is locked, so the blob store doesn't succeed, but it generates a lot of useless "failed to delete file [write.lock] during snapshot cleanup" warnings.
Closes #11517