The `_index` field is now a completely virtual field thanks
to #12027. It is no longer necessary to index the actual value
of the index name.
closes#12329
The index name was passed along through many levels of mapping parsing,
just so that it could be used for _index. However, the index name
is really metadata that should exist alongside things like type and
id in SourceToParse.
This change moves index name to SourceToParse, and eliminates it from the
DocumentMapperParser.
Today we grant read+write+delete access to any files underneath the home.
But we have to remove this if we want to have improved security of files
underneath elasticsearch.
Fold ignored unassigned shards into UnassignedShards and have simpler handling of them. Also remove the trappy way of adding ignored unassigned shards directly to the list, and have dedicated methods for it.
This change also removes the useless moving of unassigned shards to the end, since we first sort those unassigned shards anyhow, and second, we now have persistent "store exceptions" that should not cause "dead letter" shard allocation.
Break it into more manageable code by separating the allocation of primaries from the allocation of replicas. Start adding basic unit tests for the primary shard allocator.
Our thread pools have support for a timeout on a task. To support this, a special background task is scheduled to run at timeout. That background task fires and checks if the main task is still in the executor queue and then cancels it if needed. Currently we schedule this background task before adding the main task to the queue. If the timeout is very small (in tests we often use numbers like 2 ms) the background task can fire before the main one is added to the queue, causing the timeout to be missed.
See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11780/testReport/junit/org.elasticsearch.cluster/ClusterServiceTests/testTimeoutUpdateTask/
Closes#12319
On top of that:
1) A relocation target shard's allocation id is changed to include the allocation id of the source shard under relocatingId (similar to shard routing semantics)
2) The logic around state changes when finalizing a shard relocation is simplified - we now simply start the target shard (we previously had unused logic around the relocating state)
Closes#12299
While the GeoJSON spec does say a polygon is represented as an array of LinearRings (where a LinearRing is defined as a 'closed' array of points), the coerce parameter provides users with the flexibility to have ES automatically close polygons. This addresses situations like integrations with Twitter (where GeoJSON polygons are not closed), such that our users do not have to write extra code to close the polygon. This code change adds the optional coerce parameter to the GeoShapeFieldMapper.
closes#11131
Currently this target is "yet another way" to run elasticsearch,
which we can't maintain. It also has the problem that it doesn't
ensure it's running on the latest source code, doesn't configure
any scratch space properly, won't work with the SecurityManager, the list
goes on.
Even if we made it work, it would break every day, since it's untested.
Instead, `mvn package -Drun -DskipTests` will run packaging, and then
startup bin/elasticsearch (like integration tests, but in foreground).
It also enables a debugger socket on port 8000, for people that like
IDE debuggers and not System.out.println.
It's a little slower to get started because of all the shading/RPM/DEB
building going on in `package`, but that is just how it is right now
until that stuff is moved out.
failsafe uses surefire, which sucks. It also means integ tests act alien right now.
I would rather have the consistency, e.g. things formatted the same way, running integ tests under the security manager, etc.
Store information reports on which nodes shard copies exist, the shard
copy version, indicating how recent they are, and any exceptions
encountered while opening the shard index or from earlier engine failure.
closes#10952
When adding a script to the Groovy classloader, the script name is used
as the class identifier in the classloader. This means that in order not
to break JVM Classloader convention, that script must always be
available by that name. As a result, modifying a script with the same
content over and over causes it to be loaded with a different name (due
to the incrementing integer).
This is particularly bad when something like chef or puppet replaces the
on-disk script file with the same content over and over every time a
machine is converged.
This change makes the script name the SHA1 hash of the script itself,
meaning that replacing a script with the same text will use the same
script name.
Resolves#12212
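As an illustration of the idea (a minimal JDK-only sketch, not the actual elasticsearch implementation):
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class ScriptNameSketch {
    // Identical script content always produces the same name, so re-storing
    // the same script no longer creates a new classloader entry.
    static String nameFor(String scriptSource) throws NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(scriptSource.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```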
Just like specifying `?preference=_primary`, this adds the ability to
specify `?preference=_replica` or `?preference=_replica_first` on
requests that support it.
Resolves#12222
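For example (index name is a placeholder):
```sh
# search replica shards only
curl 'localhost:9200/my_index/_search?preference=_replica'
# prefer replicas, falling back to primaries if no replica is available
curl 'localhost:9200/my_index/_search?preference=_replica_first'
```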
This action is a liveness test added in #8763. It should be excluded, just like the fault detection logic, or things become overly chatty.
Closes#12291
Most of our tests call assertSearchResponse which checks whether all shards
were successful. However this is usually not necessary given that all shards
that received documents should be available given the way our indexing works.
The only shards that might not be available are those that did not index
any documents. So removing this assertion would allow us to remove most
ensureGreen/ensureYellow calls while still being able to assert on the content
of the search response since all data have been taken into account.
For now I only removed the ensureYellow/Green calls from SearchQueryTests in
order to not de-stabilize the build, but eventually we should remove most of
them.
Today, the unicast zen test configuration will try to find an open port starting at the internal test cluster's
base port and continuing for 1000 ports. The internal test cluster class assigns a port range of 100 ports
to each JVM. This means that the unicast zen test configuration will try ports in the range for another JVM
and can lead to port conflicts. This change uses the same value for both so that the unicast configuration
does not go into another JVM's port range.
Add a unique allocation id for a shard, helping to uniquely identify a specific allocation taking place on a node.
A special case is relocation, where a transient relocationId is kept around to make sure the target initializing shard (when using RoutingNodes) uses it for its id; when relocation is done, the transient relocationId becomes its actual id.
closes#12242
If an index name is reused but a leftover shard still exists on any node
we fail repeatedly to allocate the shard since we now check the index UUID
before reusing data. This commit allows to recover even if there is such a
leftover shard by deleting the leftover shard.
Closes#10677
During master election each node pings in order to discover other nodes and validate the liveness of existing nodes. Based on this information the node either discovers an existing master or, if enough nodes are found (based on `discovery.zen.minimum_master_nodes`), a new master will be elected.
Currently, the node that is elected as master will update the cluster state to indicate the result of the election. Other nodes will submit a join request to the newly elected master node. Instead of immediately processing the election result, the elected master
node should wait for the incoming joins from other nodes, thus validating that the election result is properly applied. As soon as enough nodes have sent their join requests (based on the `minimum_master_nodes` setting) the cluster state is modified.
Note that if `minimum_master_nodes` is not set, this change has no effect.
Closes#12161
Require urls for the URL repository to be listed in the repositories.url.allowed_urls setting. This change ensures that only authorized URLs can be accessed by elasticsearch.
The previous strategy (target/xxx + .m2/repository) is obviously broken for multi-module
builds.
Includes a hack for crazy jython, which "finds its own jar" and then looks for a Lib/ beside it.
Lucene deprecated this in 4.0 and we only support it on a best-effort basis.
Folks should only use edit distance rather than some length-based
similarity. Yet the formula is simple enough that users can
still compute it in the client if they really need to.
Closes#10638
The old script syntax has been removed from the Java API but the metrics aggregations were missed. This change removes the old script API from the ValuesSourceMetricsAggregationBuilder and removes the relevant test methods for the metrics aggregations.
Today we have an intermediate hierarchy for shard and index exceptions
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`
Change the default delayed allocation timeout from 0 (no delayed allocation) to 1m. The value came from a test of having a node with 50 shards being indexed into (so a beefy translog requiring a flush on shutdown), then shutting it down and starting it back up and waiting for it to join the cluster. This took, on a slow machine, about 30s.
The value is conservatively low and does not try to address a virtual machine / OS restart for now, in order to avoid a node going away leaving users concerned that shards are not being allocated to the rest of the cluster as a result. The setting can always be changed in order to increase the delayed allocation if needed.
closes#12166
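The timeout can be adjusted per index with the dynamic `index.unassigned.node_left.delayed_timeout` setting introduced further down in this log; for example (index name is a placeholder):
```sh
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
  "index.unassigned.node_left.delayed_timeout": "5m"
}'
```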
The MetaData.clusterUUID is guaranteed to be unique across clusters and is handy to report alongside the human readable cluster name (which may or may not be unique).
Closes#11832
As explained in #11831, we currently have uuid fields on the cluster state, meta data and index metadata. The latter two are persistent across changes and are effectively used as a persistent uuid for this cluster and a persistent uuid for an index. The first (ClusterState.uuid) is ephemeral and changes with every change to the cluster state. This is confusing.
We settled on having the following, new names:
* ClusterState.uuid -> stateUUID (transient)
* MetaData.uuid -> clusterUUID (persistent)
* IndexMetaData.uuid -> indexUUID (persistent)
Closes#11914
Closes#11831
Today we throw ElasticsearchException if we can't lock the index. This can cause
problems since some places where we have logic to deal with IOException on shard
deletion won't schedule a retry if we can't lock the index dir for removal. This
is the case on shadow replicas for instance if a shared FS is used. The result
of this is that the delete of an index is never acked.
A change in #12116 introduces closing / cleaning of search contexts even if
the index service was closed due to a relocation of its last shard. This
is not desired since in that case it's fine to serve the pending requests from
the relocated shard. This commit adds an extra check to ensure that the index is
either removed (deleted) or closed via API.
The term query parser was too lenient during parsing and allowed specifying
more than one field, even though it is expected to filter only on a single field.
This commit returns an exception if more than one field has been specified.
Closes#12184
Modified ScriptEngineService to pass in a CompiledScript object
with newly added name and type member variables.
This can in turn be used to give better scripting error messages
with the type of script used and the name of the script.
Required slight modifications to the caching mechanism.
Note that this does not enforce good behavior in that plugins will
have to write exceptions that also output the name of the script
in order to be effective. There was no way to wrap the script
methods in a try/catch block properly further up the chain because
many have script-like objects passed back that can be run at a
later time.
closes#6653
closes#11449
Changes in a nutshell:
* All expression logic is now encapsulated by ExpressionResolver interface.
* MetaData#convertFromWildcards() gets replaced by WildcardExpressionResolver.
* All of the indices expansion methods are being moved from MetaData class to the new IndexNameExpressionResolver class.
* All single index expansion optimisations are removed.
The logic for resolving a concrete index name from an expression has been moved from MetaData to IndexNameExpressionResolver. The logic has been cleaned up and simplified where possible without breaking bwc.
Also the notion of aliasOrIndex has been changed to index expression.
The IndexNameExpressionResolver translates index name expressions into concrete indices. The list of index name expressions is first delegated to the known ExpressionResolvers. An ExpressionResolver is responsible for translating, if possible, an expression into another expression (possibly, but not necessarily, concrete indices or aliases); otherwise the expressions are left untouched. Concretely this means converting wildcard expressions into concrete indices or aliases, but in the future other implementations could convert expressions based on different rules.
To avoid overloading many methods, DocumentRequest now extends IndicesRequest. All implementations of DocumentRequest already implemented IndicesRequest indirectly.
This allows the creation of the RPM artifact as part of the
maven package phase. The result of this is that we get checksum and
name correction for free as it's all built and installed into the m2
repository. This also publishes the RPM together with the .deb to the mvn
mirror.
Note: this will only build the RPM as part of the package phase if
`-Dpackage.rpm=true` is specified, since the binaries to build the RPM are not
available on all platforms.
Today we only clear search contexts for deleted indices. Yet, we should
do the same for closed indices to ensure they can be reopened quickly.
Closes#12116
A method for the new Script API was missing in the ValuesSourceMetricsAggregationBuilder. This change adds the missing method and deprecates the old Script API methods.
When a node sends a shard started message to the master, the master goes through the routing table looking for the shard to start. At the moment we validate the indexUUID, the node the shard is assigned to and the fact that the shard is initializing. This check goes wrong if a relocating replica shard finishes recovery just at the moment the source node leaves the cluster. In this case the master will cancel the recovery and will likely assign a new initializing replica to the same target node. In this case the message from the relocation recovery can wrongfully activate the new replica.
Also, the logic for deciding whether an incoming shard started message will be applied was split between ShardStateAction and the AllocationService.
This commit does the following:
1) Let ShardStateAction only filter basic stuff like index existence and indexUUID.
2) Move the trickier shard started matching logic to the AllocationService and make it stricter
3) Unify ShardStateAction filtering logic for both shard started and shard failed.
4) Add unit tests for all of the above.
For an example test failure see: http://build-us-00.elastic.co/job/es_core_16_centos/388/
Closes#11999
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usages of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch, but to simplify things I want to
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we have created a 2.0 branch.
Closes#12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes#11980
This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names
for our official plugins. This restores the old behavior prior to #11805.
Closes#12143
Today it will remove all permissions and only set the execute bit:
---x--x--x
Instead we should preserve existing permissions, and just add
read and execute to whatever is there.
Closes#12142
This change allows custom settings to be passed to the client for the external test cluster,
which is necessary when additional settings need to be passed to the client in order to
properly communicate with the external test cluster.
Before inner_hits existed, named queries had support to also verify whether inner queries of a nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits got released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that, and on top of this also adds support for parent/child inner hits.
We allow setting the node's name a few different ways: the `name` system
property, the setting `name`, and the setting `node.name`. There is an order
of preference to these settings that gets applied, which can copy values from the
system property or `node.name` setting to the `name` setting. When setting
only `node.name` to one of the prompt placeholders, the user would be
prompted twice as the value of `node.name` is copied to `name` prior to
prompting for input. Additionally, the value entered by the user for `node.name`
would not be used and only the value entered for `name` would be used.
This fix changes the behavior to only prompt once when `node.name` is set and
`name` is not set. This is accomplished by waiting until all values have been
prompted and replaced, then the logic for determining the node's name is
executed.
Closes#11564
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
Closes#11527
Fixed documentation since the default rewrite method for fuzzy queries is to
select top terms, fixed usage of the fuzzy rewrite method, and removed unused
`rewrite` parameter.
Close#6932
This rewrite method is interesting because it computes scores as if all terms
had the same frequencies, which avoids disappointments with ranking when a fuzzy
query ranks typos first given that they are less frequent than the correct term.
Today shards are responsible for producing one sort value per document, which
is later used on the coordinating node to resolve the global top documents.
However, this is problematic on string fields with
`missing: _first, order: desc` or `missing: _last, order: asc` given that there
is no such thing as a string that compares greater than any other string. Today
we use a string containing a single code point which is the maximum allowed code
point but this is a hack: instead we should inform the coordinating node that
the document had no value and let it figure out how it should be sorted
depending on whether missing values should be sorted first or last.
Close#9155
Simplify and consolidate ShardRouting construction. Make sure that there is really only one place it gets created, when a shard is first created in unassigned state; from there on, it is either copy constructed or built internally as a target for relocation.
This change helps make sure that data carried by a ShardRouting is not lost within our codebase as the shard goes through transitions, and can help simplify the addition of more data on it (like uuid).
For testing, a centralized TestShardRouting allows creating testable versions of ShardRouting that need not be as strict as the non-test codebase. This can be cleaned up more later on, but it is a good start.
closes#12125
This makes FuzzyQueryBuilder and Parser take an Object as a value using the
same logic as termQuery, so that numbers, dates or Strings would be properly
handled.
Relates #11865
Closes#12020
This converts the tracking of jars and classes in JarHell to use
Path objects, instead of URL. This makes for nicer printing
of the underlying path when an error does occur.
This commit defaults fuzzy_transpositions on fuzzy queries to true. This means that by default, transpositions will now count as a single
edit.
Closes#9278
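An illustrative query (index, field and value are placeholders); with the new default, the parameter only needs to be spelled out to opt back out:
```sh
curl 'localhost:9200/my_index/_search' -d '{
  "query": {
    "fuzzy": {
      "user": {
        "value": "ikpe",
        "fuzzy_transpositions": false
      }
    }
  }
}'
```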
Engine failure handling now takes into account the failure type. This change marks the engine as corrupted only when the failure
is caused by an actual index corruption. When an engine is failed for other
reasons, the engine is only closed without removing the shard state.
closes#11788
Plugin Manager can now use another simplified form when a user wants to install an official plugin hosted at the elasticsearch download service.
The form we use is:
```sh
bin/plugin install pluginname
```
As plugins now share the same version as elasticsearch, we can automatically guess the exact current version from the plugin manager script.
Also, the download service will now use the `/org.elasticsearch.plugins/pluginName/pluginName-version.zip` URL path to download a plugin.
If the older form is provided (`user/plugin/version` or `user/plugin`), we will still use:
* elasticsearch download service at `/user/plugin/plugin-version.zip`
* maven central with groupId=user, artifactId=plugin and version=version
* github with user=user, repoName=plugin and tag=version
* github with user=user, repoName=plugin and branch=master if no version is set
Note that community plugin providers can use other download services by using `--url` option.
If you try to use the new form with a non-core elasticsearch plugin, the plugin manager will reject
it and will give you all known core plugins.
```
Usage:
-u, --url [plugin location] : Set exact URL to download the plugin from
-i, --install [plugin name] : Downloads and installs listed plugins [*]
-t, --timeout [duration] : Timeout setting: 30s, 1m, 1h... (infinite by default)
-r, --remove [plugin name] : Removes listed plugins
-l, --list : List installed plugins
-v, --verbose : Prints verbose messages
-s, --silent : Run in silent mode
-h, --help : Prints this help message
[*] Plugin name could be:
elasticsearch-plugin-name for Elasticsearch 2.0 Core plugin (download from download.elastic.co)
elasticsearch/plugin/version for elasticsearch commercial plugins (download from download.elastic.co)
groupId/artifactId/version for community plugins (download from maven central or oss sonatype)
username/repository for site plugins (download from github master)
Elasticsearch Core plugins:
- elasticsearch-analysis-icu
- elasticsearch-analysis-kuromoji
- elasticsearch-analysis-phonetic
- elasticsearch-analysis-smartcn
- elasticsearch-analysis-stempel
- elasticsearch-cloud-aws
- elasticsearch-cloud-azure
- elasticsearch-cloud-gce
- elasticsearch-delete-by-query
- elasticsearch-lang-javascript
- elasticsearch-lang-python
```
AbstractFieldMapper is the only direct base class of FieldMapper.
This change moves all AbstractFieldMapper functionality into
FieldMapper, since there is no need for 2 levels of abstraction.
Each scroll on a scan causes a query to be executed. This commit adds support for these indirect queries to count against the search stats.
Additionally, this commit adds three new search stats: scroll_count, scroll_time_in_millis, and scroll_current. scroll_count tracks the
number of completed scrolls. scroll_time_in_millis tracks the total time that scrolls were held open. scroll_current tracks the number of
scrolls currently open.
Closes#9109
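The counters are exposed through the regular stats APIs, e.g.:
```sh
curl 'localhost:9200/_stats/search?pretty'
```
The `scroll_count`, `scroll_time_in_millis` and `scroll_current` fields should then show up in the `search` section of each index's stats.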
In testing infra, one can simulate node GCs, network issues and other problems by adding a disruption to the test cluster. Those disruptions are automatically removed after the test is done. At the moment each disruption indicates how long it will take the cluster to heal once the disruption is removed, and the test cluster waits for this amount of time. However, more often than not this is an upper bound, causing a much longer wait than needed. Instead we should push the responsibility of healing to the disruption itself, where we can be smarter about what we wait for.
Closes#12071
When using `awaitBusy`, sometimes you might not want to keep doubling the wait time between two runs indefinitely.
For example, let's say it will probably take 30 seconds to run a test.
When doubling the time every iteration, you will most likely wait longer than needed:
|iteration|ms|s|duration (ms)|duration (s)|
|---------|--|--|-------------|------------|
|1|1|0.001|1|0.001|
|2|2|0.002|3|0.003|
|3|4|0.004|7|0.007|
|4|8|0.008|15|0.015|
|5|16|0.016|31|0.031|
|6|32|0.032|63|0.063|
|7|64|0.064|127|0.127|
|8|128|0.128|255|0.255|
|9|256|0.256|511|0.511|
|10|512|0.512|1023|1.023|
|11|1024|1.024|2047|2.047|
|12|2048|2.048|4095|4.095|
|13|4096|4.096|8191|8.191|
|14|8192|8.192|16383|16.383|
|15|16384|16.384|32767|32.767|
|16|32768|32.768|65535|65.535|
|17|65536|65.536|131071|131.071|
|18|131072|131.072|262143|262.143|
|19|262144|262.144|524287|524.287|
|20|524288|524.288|1048575|1048.575|
|21|1048576|1048.576|2097151|2097.151|
For example here, if the task is successful after 35 seconds, we will most likely have to wait for 32s more before the Predicate is run again.
With this patch, the maximum sleep time is now set to 1 second.
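A minimal sketch of the capped backoff described above (names are illustrative, not the actual test-infra code):
```java
import java.util.function.BooleanSupplier;

class AwaitBusySketch {
    // Poll the predicate, doubling the sleep between attempts but never
    // sleeping more than 1 second per round (the cap added by this patch).
    static boolean awaitBusy(BooleanSupplier predicate, long timeoutMillis) throws InterruptedException {
        final long maxSleepMillis = 1000;
        long slept = 0;
        long sleep = 1;
        while (slept < timeoutMillis) {
            if (predicate.getAsBoolean()) {
                return true;
            }
            Thread.sleep(sleep);
            slept += sleep;
            sleep = Math.min(sleep * 2, maxSleepMillis);
        }
        return predicate.getAsBoolean();
    }
}
```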
This pipeline aggregation runs a script on each bucket in the parent aggregation to determine whether the bucket is kept in the final aggregation tree. If the script returns true the bucket is retained; if it returns false the bucket is dropped.
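Assuming the `bucket_selector` syntax, a request might look like this (index, field and path names are illustrative):
```sh
curl 'localhost:9200/sales/_search' -d '{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "big_months_only": {
          "bucket_selector": {
            "buckets_path": { "totalSales": "total_sales" },
            "script": "totalSales > 1000"
          }
        }
      }
    }
  }
}'
```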
If you are using the default date format or the named identifiers of dates,
the current implementation allowed reading a year with only one
digit. In order to make this more strict, this fixes a year to require at
least 4 digits. The same applies for month, day, hour, minute, and seconds.
Also the new default is `strictDateOptionalTime` for indices created
with Elasticsearch 2.0 or newer.
In addition, a couple of previously unexposed date formats have been exposed, as they
have been mentioned in the documentation.
Closes#6158
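To keep accepting both strict dates and unix timestamps on a 2.0 index, the format can be combined with `epoch_millis` (a sketch; index and field names are placeholders):
```sh
curl -XPUT 'localhost:9200/my_index' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "created": {
          "type": "date",
          "format": "strictDateOptionalTime||epoch_millis"
        }
      }
    }
  }
}'
```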
Field names containing dots can cause problems. For example, @jpountz
made this recreation which causes no error, but can result in a
serialization exception if the type already exists:
https://gist.github.com/jpountz/8c66817e00a322b81f85
But this is not just a potential conflict. It also has larger problems,
since only the leaf mapper is created. The intermediate "foo" object
field would not exist if only "foo.bar" was in the mappings.
This change forbids the use of dots in field names. It also
fixes an issue with passing through the update_all_types setting,
which was always set to true whenever a type already existed (!).
I do not think we should worry about backwards compatibility here. This
should be a hard break (and added to the migration plugin).
This commit adds logic to prefer shards with higher priority
or from newer indices to be allocated first if they are unallocated post API.
This commit allows users to set `index.priority` to a non-negative integer to
prioritize index recovery for certain indices. This setting is dynamically updateable
and defaults to `0`. If two indices have the same priority this change takes the creation
date into account to prioritize shards from newer indices which is important in the time-based
indices use case.
Closes#11787
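For example (index name is a placeholder), to make sure a given index recovers before everything else:
```sh
curl -XPUT 'localhost:9200/logs-2015-07-21/_settings' -d '{
  "index.priority": 10
}'
```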
When a bulk request fails on a Delete or Update request, the BulkItemResponse
reports an incorrect "index" operation in the response. This PR fixes this
for the case of closed indices as reported in #9821, but also for
other failures, and adds tests for the two cases covered.
Closes#9821
The specialization can cause stack overflows if an exception is both an
ElasticsearchWrapperException and an ElasticsearchException.
This commit just relies on the unwrap logic now to find the cause, and only
renders the exception if it is the cause; otherwise it forwards
to the generic exception rendering.
Closes#11994
While MappedFieldType contains settings for doc values and fielddata,
AbstractFieldMapper had special logic in its constructor that
required setting these on the field type from there. This change
removes those settings from the AbstractFieldMapper constructor.
As a result, defaultDocValues(), defaultFieldType() and
defaultFieldDataType() are no longer needed.
This commit merges the pre-existing special exception that
allowed associating headers with exceptions into the elasticsearch
base class `ElasticsearchException`. This allows for more generic use
of exceptions where plugins can associate meta-data with any elasticsearch
base exception to control behavior etc.
This also adds a generic SecurityException to allow plugins to pass on
information based on the RestStatus.
If the version of a node is lower than the minimum supported version or higher than the maximum supported version, it shouldn't be allowed to join the cluster, nor should nodes join a cluster whose elected master node has an incompatible version
Closes#11924
Removed the ParseField#match variant that accepts the field name only, without parse flags. Such a method is harmful as it defaults to empty parse flags, meaning that no deprecation exceptions will be thrown in strict mode, which defeats the purpose of using ParseField. Unfortunately such a method was used in a lot of places where the parse flags weren't easily accessible (outside of query parsing), and in a lot of other places just by mistake.
Parse flags have been introduced now as part of SearchContext and mappers where needed. There are a few places (e.g. java api requests) where it is not possible to retrieve them as they depend on the index settings; in that case we explicitly pass in EMPTY_FLAGS for now, but this has to be seen as an exception.
Closes#11859
This commit changes MessageChannelHandler to not skip the underlying
ChannelBuffer while a StreamInput is open on top of it. In case eg. compression
is enabled, this prevents failures due to the fact that the decompressed
stream input expects a certain structure that it can't verify if the position
of the underlying buffer is changed.
Added dynamic arguments to `ElasticsearchException`, `ElasticsearchParseException` and `ElasticsearchTimeoutException`.
This helps keep the exception messages clean and readable and promotes consistency around wrapping dynamic args with `[` and `]`.
This is just the start, we need to propagate this to all exceptions deriving from `ElasticsearchException`. Also, work started on standardizing on lower case logging & exception messages. We need to be consistent here...
- Uses the same `LoggerMessageFormat` as used by our logging infrastructure.
The work around for resolving `now` doesn't need to be used for aliases, because alias filters are parsed at search time. However, it can't be removed, because the percolator relies on it.
Parent/child can be specified again in alias filters; this now works again because alias filters are parsed at search time. Parent/child will also use the late query parse work around, to make sure to do the final preparations when the search context is around. This allows the aliases api to validate the parent/child queries without failing because there is no search context.
Closes#10485
Eventually, the field type should not need any names, because there
will be only one name which leads to finding it (the full name, which is
also the index name). However, the short or "simple" name (using java
terminology for class names) is needed just in a couple places, for
serialization.
This change moves the simple name out of MappedFieldType.Names, into
Mapper, and makes Mapper and FieldMapper abstract classes.
Today we lose the RestStatus code for non-serializable exceptions.
This can be tricky if they are supposed to signal certain situations
like authentication errors etc. This commit adds support for carrying on
the exceptions in the NotSerializableExceptionWrapper.
The filters aggregation now has an option to add an 'other' bucket which will, when turned on, contain all documents which do not match any of the defined filters. There is also an option to change the name of the 'other' bucket from the default of '_other_'.
Closes#11289
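A sketch of how this might be used (index, filter names and fields are illustrative; the parameter name follows the feature description):
```sh
curl 'localhost:9200/logs/_search' -d '{
  "aggs": {
    "messages": {
      "filters": {
        "other_bucket": true,
        "filters": {
          "errors":   { "term": { "body": "error" } },
          "warnings": { "term": { "body": "warning" } }
        }
      }
    }
  }
}'
```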
This change means that when the skip gap policy is used, the bucket script aggregation will skip executing the script on a bucket if any of the required bucket_paths are missing for the bucket. No aggregation will be added to the bucket, and the aggregation will move to the next bucket.
This commit changes the postrm script so that it prints error messages instead of failing and exiting when the deletion of a directory fails while removing an RPM/DEB package.
Closes#11373
This allows a lot of null checks to be removed where we were always falling back to the ValueFormat.RAW anyway. Now the format is set to ValueFormat.RAW when no alternative is suitable.
Closes#10594
Field stats index constraints allows to omit all field stats for indices that don't match with the constraint. An index
constraint can exclude indices' field stats based on the `min_value` and `max_value` statistic. This option is only
useful if the `level` option is set to `indices`.
For example index constraints can be useful to find out the min and max value of a particular property of your data in
a time based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
   "fields" : ["answer_count"], <1>
   "index_constraints" : { <2>
      "creation_date" : { <3>
         "min_value" : { <4>
            "gte" : "2014-01-01T00:00:00.000Z"
         },
         "max_value" : {
            "lt" : "2015-01-01T00:00:00.000Z"
         }
      }
   }
}'
Closes#11187
"Root" is a very confusing term for meta field mappers. This change
renames "RootMapper" to "MetadataFieldMapper" and simplifies
how metadata mappers are setup.
It also requires that metadata mappers are now a FieldMapper
(MetadataFieldMapper extends from AbstractFieldMapper). The only
use of a root mapper that wasn't a field mapper was the theoretical
"external" root mapper (just a test mapper). But it doesn't make
sense to not have an actual field, and this falls in line with
the hopefully eventual collapsing of AbstractFieldMapper/FieldMapper/Mapper.
- shard listing actions underpinning shard allocation do not have access to that new node yet (causing errors during shard allocation, see #11923)
- the very first cluster state published to a node already has shard assignments to it. This surfaced other issues we are working to fix separately
This commit changes the reroute to be done post processing the initial join cluster state to side step these issues while we work on a longer term solution.
Closes#11960
We had several problems with Java Serialization in the past. At some point
in the Java 1.7.x series JDKs were not compatible anymore when java
serialization (ObjectStream) was used to exchange objects. In elasticsearch
we used this to serialize exceptions across the wire which caused several problems
with incompatible JDKs. While causing a lot of trouble, this essentially prevented
users from moving forward and upgrading their JVMs. To prevent these kinds of issues
this commit removes the dependency on java serialization entirely and bans the
usage of ObjectOutputStream and ObjectInputStream.
Yet, we can't fully serialize all exceptions anymore, such that this commit
is best effort and adds hand-written serialization to all elasticsearch exceptions
as well as to a selected set of JDK and Lucene exceptions (see StreamOutput#writeThrowable /
StreamInput.readThrowable). Stacktraces should be preserved for all exceptions while
several names might be replaced with ElasticsearchException if there is no mapping for
the given exception.
In order to support older RPM based distributions like CentOS5,
we should have one RPM available, which is not signed.
This commit creates an unsigned RPM first, then moves it over to
target/releases during the build, then builds a signed RPM.
The unsigned one is uploaded via S3, whereas the signed one is
used for the repositories.
In addition, you can now build an RPM without having to specify
any gpg credentials due to offloading this into a maven profile
that is only activated when specifying `rpm.sign` property.
Closes#11587
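Putting the two properties from this log together, a signed RPM build would presumably be invoked as:
```sh
mvn package -Dpackage.rpm=true -Drpm.sign=true
```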
Rely on the query cache instead of maintaining a thread local cache in the PercolatorQueriesRegistry.
Before, the PercolatorQueriesRegistry had its own cache because all the queries had to forcefully opt out of caching. Nowadays in master, small segments are never cached by the query cache, so the reason for the dedicated cache is no longer valid.
Today we keep track of how often filters are used at the index level in order
to decide whether they should be cached or not. This is an issue if you have
several shards of the same index on the same node as it will multiply statistics
by the number of shards that you have for this index on the node, which defeats
the purpose of waiting for a filter to be reused before caching them.
If the translog UUID is corrupted we should not convert it
to UTF-8 since it might be invalid. Instead we should compare
the UTF-8 byte representation directly.
This is more consistent with the other logging it makes, and since it can be used in many operations the output can be more verbose (without adding too much info as to who timed out exactly - which we can fix separately). If need be, the caller of the observer can log a higher level message.
Closes#11722
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.
A known issue is that package/logger names no longer match settings names;
please speak up if you think this is an issue.
Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569 but putting `cache` before
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
* index.cache.query.enable -> index.requests.cache.enable
* indices.cache.query.size -> indices.requests.cache.size
* indices.cache.filter.size -> indices.queries.cache.size
Close#11569
Today, we disable CORS by default, but if a user simply enables CORS their instance of
elasticsearch will allow cross origin requests from anywhere, as the default value for allowed
origins is `*`.
This changes the default to be `null` so that no origins are allowed and the user must explicitly
specify the origins they wish to allow requests from. The documentation also mentions that there
is a security risk in using `*` as the value.
Closes#11169
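With the new default, enabling CORS now also requires explicitly listing allowed origins, e.g. in `elasticsearch.yml` (origin value is a placeholder):
```yaml
http.cors.enabled: true
# no longer defaults to "*"; cross origin requests from unlisted origins are rejected
http.cors.allow-origin: "https://dashboard.example.com"
```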
In order to be backwards compatible, indices created before 2.x must support
indexing of a unix timestamp and its configured date format. Indices created
with 2.x must configure the `epoch_millis` date formatter in order to
support this.
Relates #10971
Today we are very lenient in parsing the translog files. This is
actually not necessary since we have a clear run-once upgrade path.
All files are converted into the new file name pattern such that we
only need to look at old file patterns in the context of the upgrade.
This commit makes parsing really strict, with the exception of the upgrade path.
This adds a new pipeline aggregation, the cumulative sum aggregation. This is a parent aggregation which must be specified as a sub-aggregation to a histogram or date_histogram aggregation. It will add a new aggregation to each bucket containing the sum of a specified metrics over this and all previous buckets.
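An illustrative request (index and field names are placeholders):
```sh
curl 'localhost:9200/sales/_search' -d '{
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "cumulative_sales": {
          "cumulative_sum": { "buckets_path": "sales" }
        }
      }
    }
  }
}'
```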
Today we mark a translog as upgraded by adding a marker to the engine commit.
Yet, this marker was only added if there was no translog present before, i.e. only
if we have a fresh engine, which is missing the entire point. This commit also
adds a backwards index test that ensures we can open old indices more than once,
i.e. mark the index as upgraded.
Closes#11858
In Lucene 5.x the exception thrown when the highlighter encounters a huge term
is a BytesRefHash.MaxBytesLengthExceededException, but in Lucene 4.x it is
wrapped in a RuntimeException. Therefore, it seems safer to unwrap it.
There was only a single actual "use" of close, for a threadlocal
in VersionFieldMapper. However, that threadlocal is completely
unnecessary, so this change removes the threadlocal and
close() altogether.
This commit consolidates several abstractions on the shard level in
ordinary classes not managed by the shard level guice injector.
Several classes have been collapsed into IndexShard and IndexShardGatewayService
was cleaned up to be more lightweight and self-contained. It has also been moved into
the index.shard package and its operation is renamed from recovery from "gateway" to recovery
from "store" or "shard_store".
Closes#11847
This commit folds ShardRouting, ImmutableShardRouting and MutableShardRouting
into ShardRouting. All mutators are package private anyway today so it's just
unnecessary abstraction.
ShardRoutings are now frozen once they are added to the IndexRoutingTable
to prevent modifications outside of the allocation code.
This commit makes the get and search APIs always return `_parent`, `_routing`,
`_timestamp` and `_ttl` in addition to `_id` and `_type`. This way, consumers
always have all required information in order to reindex a document.
Currently the SnapshotsService is concerned with both maintaining the global snapshot lifecycle on the master node and keeping track of individual shards on the data nodes. This refactoring separates the two areas of concern by moving all shard-level operations into a separate SnapshotShardsService.
Closes#11756
Currently the filter cache is configured to have a maximum size in bytes of 10%
of the JVM memory, and a maximum number of cached filters (across all segments
of all shards on the same node) of 100000. I would like to change the latter to
a more reasonable value of 1000.
Given that we track the 256 most recently used filters per index and only
cache those that have been seen 5 times or more, a single index cannot have more
than 50 hot filters, so a maximum number of cached filters of 1000 per node
should be more than necessary.
Today, we have a scheduled reroute that kicks in every 10 seconds and checks if a
reroute is needed. We use it when adding nodes, since we don't reroute right
away once a node is added, and give it a time window to add additional nodes.
We do have the recover-after-nodes settings and such in order to wait for enough
nodes to be added, and also, it really depends at what part of the 10s window
you end up; sometimes it might not be effective at all. In general, it's historic
from the times before we had recover-after-nodes and such.
This change removes the 10s scheduling, simplifies RoutingService, and adds
explicit reroute when a node is added to the system. It also adds unit tests
to RoutingService.
closes#11776
Since elasticsearch doesn't shade artifacts anymore (see #11522), the dependencies list for RPM/DEB must be updated. Now we package all maven libs by default except the generated -shaded/-tests/-test-sources JARs and slf4j-api (marked as optional).
We currently are very lax about allowing data types to conflict for the
same field name, across document types. This change makes the underlying
map in MapperService a 1-1 map of field name to field type, and throws
exception when new types are not compatible.
To still allow changing a type, with parameters that are allowed to be
changed, but for a field that exists in multiple types, a new parameter
to index creation and put mapping API is added: update_all_types.
This defaults to false, and the exception messages suggest using
this parameter when trying to modify a setting that is allowed to be
modified but is being limited by this restriction.
There are also a couple changes which try to base fields from new types
for dynamic mappings, and root mappers, on existing settings. For
dynamic mappings this is important if the dynamic defaults have been
changed. For root mappings, this is mostly just for backcompat when
pre 2.0 root mappers could have their field type changed.
fixes#8871
We currently don't expose this.
This adds the following to the OS section of `_nodes`:
```
"os": {
"name": "Mac OS X",
...
}
```
and the following to the OS section of `_cluster/stats`:
```
"os": {
...
"names": [
{
"name": "Mac OS X",
"count": 1
}
],
...
},
```
Closes#11807
This is a follow up to #8143 and #6730 for _timestamp. It removes
support for `path`, as well as any field type settings, and
enables docvalues for _timestamp, for 2.0. Users who need to
adjust these settings can use a date field.
- Fixes tests, and removes a few special snowflake, fragile tests.
- Removes concrete implementation of predict() and moves it into
each model so that the logic is clearer. Because there is some
shared checks/assertions, those remain in predict() and the main
prediction happens in doPredict()
The commit about adding cluster health response features also accidentally
removed some functionality, which resulted in wrong instanceof checks
in InternalClusterService and thus in test failures, because the cluster
state task that was added via an anonymous class was missing the cast.
This commit re-adds the abstract class with slight renaming.
Commit id was: 88f8d58c8b
If we mark the shard as being in POST_RECOVERY before the percolator
is fully set up, we might expose it to the user as fully searchable before
all queries are loaded. This can lead to wrong results, especially in tests,
when a shard is concurrently marked as STARTED.
This commit also removes unneeded abstractions on IndexShard where read operations
should be allowed when the purpose is a write.
In order to get a quick overview by simply checking the cluster state
and its corresponding cat API, the following two attributes have been added
to the cluster health response:
* task max waiting time: the time value of the first task of the
queue and how long it has been waiting
* active shards percent: the percentage of shards that are active,
as opposed to still initializing
This makes the cluster health API handy to check, when a fully restarted
cluster is back up and running.
Closes#10805
The change makes rest-spec-api a project in the same way as we build dev-tools. It packages the tests and api in a bundle using the maven-remote-resources-plugin and uses the same plugin in the plugins and core pom to unpack the rest-api-spec into the target directory and references the rest tests there in the test resources.
The main stimulus for this change is that for those using Eclipse the current build does not work. After running `mvn eclipse:eclipse` the Eclipse IDE errors because the rest-api-spec is outside of the project scope, meaning that every time the command is run (required whenever any dependencies change), the class path of all the projects has to be manually fixed.
This fixes an issue to allow for negative unix timestamps.
A dedicated printer for epochs, instead of just a parser, has been added.
Added docs that only 10- and 13-digit unix timestamps are supported.
Added docs in the upgrade documentation.
Fixes#11478
Tests relying on sleeps and latch timeouts are prone to weird timing issues
and hard to read / understand error messages. This commit moves towards a more
deterministic error model and replaces empty fails with real exceptions.
Some repository verification exceptions are currently only returned to the users but not logged on the nodes where the exceptions occurred, which makes troubleshooting difficult.
Closes#11760
For #8871, we need to be able to check field types are compatible,
without comparing FieldMappers. This change moves the simulation
checks (which generate merge conflicts) for any properties of
MappedFieldTypes into a new method, validateCompatible.
This also simplifies the merge code which merges settings
between the old and new fieldtypes. Previously, each subclass
of FieldMapper would have to set its own fieldtype settings.
However, now that we have .clone(), which perfectly copies
all properties (with subclasses accounted for), we can now
do a simple clone when merging.
Finally, this fixes a subtle bug in merging, in which if
merging has conflicts, and we were not simulating, we would
still update the field type, even though it was not compatible!
NOTE: there is one test failure I am trying to track down with
timestamp merging. Otherwise, all tests pass.
Currently, we delete the shard _state file on engine failure.
This behaviour does not persist the engine failure reason for later inspection.
This commit marks the shard store as corrupted instead of deleting
the _state file, to ensure the store index cannot be opened afterwards and
the engine failure is persisted.
CommonTermsQueryParser does not check for disable_coords, only for
disable_coord. Yet the builder only outputs disable_coords, causing
the disabling of the coordination factor to be ignored in the Java API.
Closes#11730
Closes#11780
When using compression over the network, you might sometimes see warnings that
the stream was not fully read. This is because DeflaterOutputStream adds an
end-of-stream marker. When deserializing, we need to poll for one byte using
InputStream.read() to make sure to decode this EOS marker.
For the record, it does not strike all the time today because we perform
buffering when decompressing to avoid performing too many JNI calls, but it
is easy to make this warning happen all the time by decreasing the size of
the buffer we use.
Close#11748
Need to reset the registered setting in order to make sure the next round will capture the right delay interval.
Also randomize the setting and name the setting properly.
closes#11759
Added several classes to support expressions being used for numerical
calculations in aggregations. Expressions will still not compile
when used with mapping and update script contexts.
Closes#11596
Closes#11689
Allow setting a delayed allocation timeout on unassigned shards when a node leaves the cluster. This allows waiting for the node to come back for a specific period, in order to try to assign the shards back to it and reduce shard movements and unnecessary relocations.
The setting is an index level setting under `index.unassigned.node_left.delayed_timeout` and defaults to 0 (== no delayed allocation). We might want to change the default, but lets do it in a different change to come up with the best value for it. The setting can be updated dynamically.
When shards are delayed, a log message with "info" level will notify how many shards are being delayed.
An implementation note: we really only need to care about delaying allocation of unassigned replica shards. If the primary shard is unassigned, we are anyhow going to wait for a copy of it, so really the only case to delay allocation for is replicas.
close#11712
`com.google.common.collect.Iterators#emptyIterator()` is marked as deprecated and will be removed in May 2016. We should use JDK7 `Collections#emptyIterator()`
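The swap is mechanical:
```java
import java.util.Collections;
import java.util.Iterator;

class EmptyIterators {
    Iterator<String> empty() {
        // was: com.google.common.collect.Iterators.emptyIterator() (deprecated Guava API)
        return Collections.emptyIterator(); // JDK 7 equivalent
    }
}
```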
By setting the human parameter to true, it's now possible to see human readable versions of Elasticsearch that created and updated the index, as well as the date when the index was created.
Closes#11484
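For example, the human readable variants should appear in the index settings output (index name is a placeholder):
```sh
curl 'localhost:9200/my_index/_settings?human=true&pretty'
```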
This changes the parameter name `ignore_like` to the more user friendly name
`unlike`. This latter feature generates a query from the terms in `A` but not
from the terms in `B`. This translates to a result set which is like `A` but
unlike `B`. We could have further negatively boosted any documents that have
some `B`, but these documents already do not receive any contribution from
having `B`, and would therefore negatively compete with documents having `A`.
Closes#11117
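An illustrative `more_like_this` query using the renamed parameter (index, field and text are placeholders):
```sh
curl 'localhost:9200/my_index/_search' -d '{
  "query": {
    "more_like_this": {
      "fields": ["description"],
      "like": "space exploration",
      "unlike": "science fiction movies"
    }
  }
}'
```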
This is really just a workaround for plugins to run their own
REST tests instead of the core ones. It opts out of the rest test
loading from the core jar file and tries to load from the classpath instead.
Eventually we need to fix this infrastructure to move away from parameterized
tests such that subclasses can override behavior.
Closes#11721
Added a licenses/ directory to core which contains a sha1 file for each JAR
dependency, and one or more LICENSE files and one NOTICE file for each
project.
Also adds dev-tools/src/main/resources/license-check/check_license_and_sha.pl
which checks that the licenses/ dir is up to date during a mvn verify,
and which can be used to update the sha1 files when upgrading dependencies.
Closes#2794
Closes#10684
Closes#11705
This change adds a simplistic heuristic to try to balance new shard
allocations across multiple data paths on one node so that e.g. if
there are two path.data and both have roughly the same free space, if
10 shards are suddenly allocated, we will put 5 on one path and 5 on
the other (vs 10 on a single path today).
Closes#11185
Closes#11122
Unassigned meta includes additional information as to why a shard is unassigned; this is especially handy when a shard moves to unassigned due to a node leaving or shard failure.
The additional data is provided as part of the cluster state, and as part of `_cat/shards` API.
The additional meta includes the timestamp that the shard has moved to unassigned, allowing us in the future to build functionality such as delay allocation due to node leaving until a copy of the shard is found.
closes#11653
Since the /var/run/elasticsearch directory is cleaned when the operating system starts, the init.d script must ensure that the PID_DIR is correctly created.
Closes#11594
The cache we are using for fielddata reclaims memory lazily/asynchronously. While this
helps with throughput this is an issue when a clear operation is issued manually since
memory is not reclaimed immediately. Since our clear methods already perform in linear
time anyway, this commit changes the fielddata cache to reclaim memory synchronously
when a clear command is issued manually. However, it remains lazy in case cache entries
are invalidated because segments or readers are closed, which is important since such
events happen all the time.
Close#11695
Some of our Java api builders had wrong logic when it comes to serializing the query in json format, resulting in missing fields like _name. Also, regexp parser was ignoring the _name field.
Closes#11694
`_timestamp` uses NumericDocValues instead of SortedNumericDocValues like other
numeric fields since it is guaranteed to be single-valued. However, we don't
need a different fielddata impl for it since DocValues.getSortedNumeric already
falls back to NUMERIC doc values if SORTED_NUMERIC are not available.
[maven] clean pom.xml
In the Maven parent project, in dependency management, we should only declare which versions of 3rd party jars we want to use but not force any scope.
It then makes it more obvious in modules what the exact scope of any dependency is.
For example, one could imagine importing `jimfs` as a `compile` dependency in another module/plugin with:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
</dependency>
```
But it won't work as expected as the default maven `scope` should be `compile` but here it's `test` as defined in the parent project.
So, if you want to use this lib for tests, you should simply define:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
<scope>test</scope>
</dependency>
```
We also remove `maven-s3-wagon` from gce plugin as it's not used.
This was previously a container for an ObjectMapper, along with the
DocumentMapper that ObjectMapper came from. However, there was
only one use needing the associated DocumentMapper, and that
wasn't actually used.
Now that doc values are the default for fielddata, specialized in-memory
formats are becoming an esoteric option. This commit removes such formats:
- `fst` on string fields,
- `compressed` on geo points.
I also removed the documentation and tests stating that the fielddata cache is shared if
you change the format, since this is only true for in-memory fielddata formats
(for doc values, the caching is done directly in Lucene).
The script APIs were deprecated long ago, so we can now remove them.
This commit still keeps the parsing code, since such a query might still
be stuck in a transaction log. That issue should be discussed elsewhere.
Closes#11619
In order to enforce a single set of field type settings for a given
field name across an index, we need the ability to compare field types.
This change adds equals and hashCode, as well as tests for every field
type.
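A minimal sketch of what this looks like, on a hypothetical trimmed-down field type (the real classes compare many more settings):

```java
import java.util.Objects;

// Hypothetical field type with only a few of the settings the real one has.
class SimpleFieldType {
    final String name;
    final boolean indexed;
    final float boost;

    SimpleFieldType(String name, boolean indexed, float boost) {
        this.name = name;
        this.indexed = indexed;
        this.boost = boost;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SimpleFieldType other = (SimpleFieldType) o;
        return indexed == other.indexed
                && Float.compare(boost, other.boost) == 0
                && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, indexed, boost);
    }
}
```

Comparing field types for equality is then what lets the index reject conflicting settings for the same field name.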
Make sure there is a single place where a shard routing moves to unassigned, so we can add additional metadata when it does; also, simplify the shard routing implementations a bit.
closes#11634
The AsyncShardFetch retrieves shard information from the different nodes in order to determine the best location for unassigned shards. The class uses TransportNodesListGatewayStartedShards and TransportNodesListShardStoreMetaData to fetch this information. These actions inherit from TransportNodesAction and are activated using a list of node ids. Those node ids are extracted from the cluster state that is used to assign shards.
If we perform a reroute and add new nodes in the same cluster state update task, it is possible that the AsyncShardFetch administration is based on
a different cluster state than the one used by TransportNodesAction to resolve nodes. This is a problem since TransportNodesAction filters away unknown nodes, causing the administration in AsyncShardFetch to get confused.
This commit fixes this by allowing node resolution to be overridden in TransportNodesAction, and uses the exact node ids transferred by AsyncShardFetch.
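A minimal sketch of the override, with hypothetical names and signatures (the real hook lives in TransportNodesAction and differs in detail):

```java
// Hypothetical view of the cluster state, reduced to node resolution.
interface ClusterStateView {
    String[] resolveKnownNodes(String[] nodeIds); // filters away unknown nodes
}

// Default behavior: resolve node ids against the current cluster state,
// which may no longer match the state AsyncShardFetch based its bookkeeping on.
class NodesActionSketch {
    protected String[] resolveNodes(String[] requestedNodeIds, ClusterStateView state) {
        return state.resolveKnownNodes(requestedNodeIds);
    }
}

// The shard-fetching actions override the hook and use the exact node ids
// handed to them, so both sides agree on the same set of nodes.
class ListShardStoresActionSketch extends NodesActionSketch {
    @Override
    protected String[] resolveNodes(String[] requestedNodeIds, ClusterStateView state) {
        return requestedNodeIds;
    }
}
```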
Information about in-progress snapshot and restore processes is not really metadata and should be represented as a part of the cluster state, similar to discovery nodes, the routing table, and cluster blocks. Since in-progress snapshot and restore information is no longer part of metadata, this refactoring also enables us to handle cluster blocks in a more consistent manner and allows the creation of snapshots of a read-only cluster.
Closes#8102
This commit cleans up all the MergeScheduler infrastructure
and simplifies / removes all unneeded abstractions. The MergeScheduler
itself is now private to the Engine and all abstractions like Providers
that had support for multiple merge schedulers etc. are removed.
Closes#11602
This is helpful to track down the origin of pending_tasks that aren't
expected. In tests we catch this with an assert, but in production
asserts may not be enabled so we should at least add the class name.
Today we provide the ability to plug in a MergePolicy, and
we ship the ones Lucene provides. We do not recommend changing
the default, and only a small number of expert users would ever touch
this. This commit removes the ancient log byte size and log doc count
merge policy providers, simplifies the MergePolicy wiring, and makes the
tiered MP the one and only default. All notions of a merge policy have been
removed from the docs and should be deprecated in the previous version.
Closes#11588
There used to be a null check for the `_field_names` mapper not existing. This
was recently removed. However, there is a corner case where the mapper
may be missing: when no types or docs exist at all in the index.
This change adds back a null check and just returns no docs.
The current ExceptionsHelper.unwrapCause(exception) requires the incoming exception to implement ElasticsearchWrapperException, which TranslogRecoveryPerformer.BatchOperationException doesn't. I opted for a more generic solution.
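One possible shape of such a generic solution, as a minimal sketch (the actual helper may look different):

```java
final class ExceptionsSketch {
    // Walk the cause chain, bounded to guard against cycles, instead of
    // requiring the exception to implement a marker interface.
    static Throwable unwrapCause(Throwable t) {
        Throwable result = t;
        for (int depth = 0; depth < 10; depth++) {
            Throwable cause = result.getCause();
            if (cause == null || cause == result) {
                break;
            }
            result = cause;
        }
        return result;
    }
}
```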
We had to make CompressedXContent.equals decompress data to fix some
correctness issues, which had the downside of making equals() slow. Now we store
a crc32 alongside the compressed data, which should help avoid decompressing data in
most cases.
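A minimal sketch of the fast path, assuming for illustration that the checksum is taken over the uncompressed content and that DEFLATE is the compression codec (the real implementation differs):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.CRC32;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

final class CompressedBytesSketch {
    private final byte[] compressed;
    private final int crc32; // checksum of the uncompressed content

    CompressedBytesSketch(byte[] uncompressed, byte[] compressed) {
        this.compressed = compressed;
        CRC32 crc = new CRC32();
        crc.update(uncompressed, 0, uncompressed.length);
        this.crc32 = (int) crc.getValue();
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompressedBytesSketch)) return false;
        CompressedBytesSketch other = (CompressedBytesSketch) o;
        if (crc32 != other.crc32) {
            return false; // different checksums: contents cannot be equal
        }
        if (Arrays.equals(compressed, other.compressed)) {
            return true; // identical compressed bytes: trivially equal
        }
        // Rare case: equal checksums but different compressed forms, so fall
        // back to comparing the decompressed content.
        return Arrays.equals(decompress(compressed), decompress(other.compressed));
    }

    @Override
    public int hashCode() {
        return crc32;
    }

    private static byte[] decompress(byte[] compressed) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        try {
            while (!inflater.finished()) {
                int n = inflater.inflate(buffer);
                if (n == 0 && inflater.needsInput()) {
                    break; // truncated input; stop rather than loop forever
                }
                out.write(buffer, 0, n);
            }
        } catch (DataFormatException e) {
            throw new IllegalStateException("corrupted compressed data", e);
        } finally {
            inflater.end();
        }
        return out.toByteArray();
    }
}
```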
Close#11247
Now that mapping updates are synchronous and done before indexing, we don't really need the waiting component. Also, removed many places where they were used as a safeguard against delayed mapping updates, which are now not needed.
While we had initially planned to keep rivers around in 2.0 to ease migration,
keeping support for rivers is challenging as it conflicts with other important
changes that we want to bring to 2.0 like synchronous dynamic mappings updates.
Nothing impossible to fix, but it would increase the complexity of how we
deal with dynamic mappings updates and manage rivers, while handling dynamic
mappings updates correctly is important for resiliency and rivers are on their
way out. So removing rivers in 2.0 may well be a better trade-off.
These methods are now all in MappedFieldType. This removes the remaining
callers of the methods on FieldMapper, and cuts down the FieldMapper
API to no longer include them.
The MapperService is the "index wide view" of mappings. Methods on it
are used at query time to lookup how to query a field. This
change reduces the exposed api so that any information returned
is limited to the api exposed by MappedFieldType. In the future,
MappedFieldType will be guaranteed to be the same across all
document types for a given field.
Note CompletionFieldType needed some more settings moved to it. Other
than that, this change is almost purely cosmetic.
The shard can potentially not be deleted if the observer that checks for the shard
being STARTED is not registered, because the registration is delayed by the disruption.
If the sum of delays is more than 10s, the wait for shard deletion will time out.
The ResourceWatcher used settings prefixed `watcher.`, which
could potentially clash with the watcher plugin.
In order to prevent confusion, the settings have been renamed to
the `resource.reload` prefix.
This also uses the deprecation logging infrastructure introduced
in #11033 to log deprecated settings and their alternative at
startup.
Closes#11175
In case a developer gets a list of ids from another data source,
it does not make a lot of sense to convert it to an array first,
only for elasticsearch to internally create a list out of it again
in IdsQueryBuilder.
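A minimal sketch of the builder shape this enables; the class and method names are illustrative only:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical builder: a Collection overload next to the varargs one lets
// callers pass a List<String> from another data source directly.
class IdsQuerySketch {
    private final Set<String> ids = new LinkedHashSet<>();

    IdsQuerySketch addIds(String... ids) {
        this.ids.addAll(Arrays.asList(ids));
        return this;
    }

    IdsQuerySketch addIds(Collection<String> ids) {
        this.ids.addAll(ids);
        return this;
    }
}
```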
Closes#5089
In order for exists queries to use the null value for
a field, the null value needs to be part of the field type (it may
differ between document types). This change moves null value
into the field type, and simplifies the available null value
methods, removing supportsNullValue().
#11363 introduced retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws our recovery stats off, as it may count ops twice.
There are different ways to register custom query parsers through plugins; a couple of them work per index via index settings, which is probably even too flexible. There are also three different ways to add a global custom query parser, through either IndicesQueriesModule or IndicesQueriesRegistry. This commit consolidates the registration of custom query parsers via IndicesQueriesModule#addQuery(Class<? extends QueryParser>). The complexity of supporting parsers per index is not needed, hence it got removed. The other ways of registering global custom parsers are also dropped in favour of the one mentioned above.
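A minimal sketch of registering a parser through the consolidated path; `MyQueryParser` is a hypothetical parser, and the `onModule` hook is shown in the usual plugin style (package names may differ between versions):

```java
import org.elasticsearch.index.query.QueryParser;
import org.elasticsearch.indices.query.IndicesQueriesModule;

public class MyQueryPlugin {
    // Module hook invoked while the node wires up its modules.
    public void onModule(IndicesQueriesModule module) {
        module.addQuery(MyQueryParser.class);
    }

    // Placeholder only; a real parser implements names() and parse(...).
    abstract static class MyQueryParser implements QueryParser {
    }
}
```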
Closes#11481
The search template and template query did not run the template through the script engine before searching for an inner template. This meant that parsing for the inner template failed, because the template was not always valid JSON (if it contained mustache code) when it was parsed to find the inner template. This has been fixed, and tests were added to check for the failing behaviour.
Tests are from https://github.com/elastic/elasticsearch/pull/8393
In #10918, we introduced the prompt placeholders. These had a different format
than our existing placeholders. This changes the prompt placeholders to follow the
format of the existing placeholders.
Relates to #11455
This commit adds an additional jar that is shaded, while keeping all the
artifacts that are used by default on the server side unshaded. Users
that need a shaded jar can now use the `shaded` classifier to pull
in the shaded, minimized jar instead. Including the shaded jar in a
downstream project looks like this:
```xml
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<classifier>shaded</classifier>
</dependency>
```
After asynchronously fetching shard information, the gateway allocator issues a reroute via a cluster state update task. #11421 introduced an optimization trying to avoid submitting unneeded reroutes when results for many shards come in together. This is done with a rerouting flag indicating that a reroute is pending, so that any new incoming shard info doesn't need to issue another one. This flag wasn't reset upon an error in the reroute update task. Most notably, if a master node had to step down due to a min_master_node violation, it could reject an ongoing reroute. Failing to reset the flag caused all future reroutes to be skipped when the node became master again.
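A minimal sketch of the invariant the fix restores, with hypothetical names: the flag must be cleared on every exit path of the update task, not just on success.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class RerouteSchedulerSketch {
    private final AtomicBoolean rerouteScheduled = new AtomicBoolean();

    // Called whenever new shard info arrives from the async fetch.
    void onShardInfoReceived() {
        if (rerouteScheduled.compareAndSet(false, true)) {
            submitRerouteTask();
        } // else: a pending reroute will pick up this info anyway
    }

    private void submitRerouteTask() {
        try {
            executeReroute();
        } finally {
            // Reset even when the task fails or is rejected (e.g. because the
            // node stepped down as master); otherwise all future reroutes
            // would be skipped once the node becomes master again.
            rerouteScheduled.set(false);
        }
    }

    void executeReroute() {
        // the cluster state update would run here
    }
}
```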
Closes#11519
Since we are creating write.lock earlier now, the blob store shouldn't attempt to delete this file during cleanup at the end of the restore process. The file is locked, so the deletion doesn't succeed, but it generates a lot of useless warnings: "failed to delete file [write.lock] during snapshot cleanup".
Closes#11517