A change in #12116 introduced closing / cleaning of search contexts even if
the index service was closed due to a relocation of its last shard. This
is not desired since in that case it's fine to serve the pending requests from
the relocated shard. This commit adds an extra check to ensure that the index was
either removed (deleted) or closed via API.
The term query parser was too lenient during parsing and allowed more than one
field to be specified, even though it is expected to filter on a single field only.
This commit throws an exception if more than one field is specified.
Closes #12184
Modified ScriptEngineService to pass in a CompiledScript object
with newly added name and type member variables.
This can in turn be used to give better scripting error messages
with the type of script used and the name of the script.
Required slight modifications to the caching mechanism.
Note that this does not enforce good behavior: to be effective, plugins will
have to write exceptions that also output the name of the script.
There was no way to properly wrap the script
methods in a try/catch block further up the chain because
many callers are handed back script-like objects that can be run at a
later time.
Closes #6653
Closes #11449
Changes in a nutshell:
* All expression logic is now encapsulated by ExpressionResolver interface.
* MetaData#convertFromWildcards() gets replaced by WildcardExpressionResolver.
* All of the indices expansion methods are being moved from MetaData class to the new IndexNameExpressionResolver class.
* All single index expansion optimisations are removed.
The logic for resolving a concrete index name from an expression has been moved from MetaData to IndexNameExpressionResolver. The logic has been cleaned up and simplified where possible without breaking bwc.
Also the notion of aliasOrIndex has been changed to index expression.
The IndexNameExpressionResolver translates index name expressions into concrete indices. The list of index name expressions is first delegated to the known ExpressionResolvers. An ExpressionResolver is responsible for translating, if possible, an expression into another expression (possibly, but not necessarily, concrete indices or aliases); otherwise the expressions are left untouched. Concretely this means converting wildcard expressions into concrete indices or aliases, but in the future other implementations could convert expressions based on different rules.
To prevent a lot of method overloading, DocumentRequest now extends IndicesRequest. All implementations of DocumentRequest already implemented IndicesRequest indirectly.
This allows the creation of the RPM artifact as part of the
maven package phase. The result of this is that we get checksum and
name correction for free as it's all built and installed into the m2
repository. This also publishes the RPM together with the .deb to the mvn
mirror.
Note: this will only build the RPM as part of the package phase if
`-Dpackage.rpm=true` is specified, since the binaries to build the RPM are not
available on all platforms.
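A minimal invocation, on a platform where the RPM build binaries are available:

```sh
# builds the distribution artifacts, including the RPM
mvn package -Dpackage.rpm=true
```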
Today we only clear search contexts for deleted indices. Yet, we should
do the same for closed indices to ensure they can be reopened quickly.
Closes #12116
A method for the new Script API was missing in the ValuesSourceMetricsAggregationBuilder. This change adds the missing method and deprecates the old Script API methods.
When a node sends a shard started message to the master, the master goes through the routing table looking for the shard to start. At the moment we validate the indexUUID, the node the shard is assigned to and the fact that the shard is initializing. This check goes wrong if a relocating replica shard finishes recovery just at the moment the source node leaves the cluster. In this case the master will cancel the recovery and will likely assign a new initializing replica to the same target node. In this case the message from the relocation recovery can activate the new replica wrongfully.
Also, the logic for deciding whether an incoming shard started message will be applied was split between ShardStateAction and the AllocationService.
This commit does the following:
1) Let ShardStateAction only filter basic stuff like index existence and indexUUID.
2) Move the trickier shard started matching logic to the AllocationService and make it stricter.
3) Unify ShardStateAction filtering logic for both shard started and shard failed.
4) Add unit tests for all of the above.
For an example test failure see: http://build-us-00.elastic.co/job/es_core_16_centos/388/
Closes #11999
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usages of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch but to simplify things I want to
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we have created a 2.0 branch.
Closes #12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes #11980
This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names
for our official plugins. This restores the old behavior prior to #11805.
Closes #12143
Today it will remove all permissions and only set the execute bit:
---x--x--x
Instead we should preserve existing permissions, and just add
read and execute to whatever is there.
Closes #12142
This change allows custom settings to be passed to the client for the external test cluster,
which is necessary when additional settings need to be passed to the client in order to
properly communicate with the external test cluster.
Before inner_hits existed, named queries had support to also verify whether inner queries of a nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits was released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that, and on top of this also adds support for parent/child inner hits.
We allow setting the node's name a few different ways: the `name` system
property, the setting `name`, and the setting `node.name`. There is an order
of preference to these settings that gets applied, which can copy values from the
system property or `node.name` setting to the `name` setting. When setting
only `node.name` to one of the prompt placeholders, the user would be
prompted twice as the value of `node.name` is copied to `name` prior to
prompting for input. Additionally, the value entered by the user for `node.name`
would not be used and only the value entered for `name` would be used.
This fix changes the behavior to only prompt once when `node.name` is set and
`name` is not set. This is accomplished by waiting until all values have been
prompted and replaced, and only then executing the logic for determining the
node's name.
Closes #11564
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
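A sketch of the new capability, assuming the bulk update payload accepts the same `fields` array as the plain update API (index, type and field names here are illustrative):

```sh
curl -XPOST 'http://localhost:9200/_bulk' -d '
{ "update" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "doc" : { "counter" : 42 }, "fields" : [ "counter" ] }
'
```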
Closes #11527
Fixed documentation since the default rewrite method for fuzzy queries is to
select top terms, fixed usage of the fuzzy rewrite method, and removed unused
`rewrite` parameter.
Close #6932
This rewrite method is interesting because it computes scores as if all terms
had the same frequencies, which avoids disappointments with ranking when a fuzzy
query ranks typos first given that they are less frequent than the correct term.
Today shards are responsible for producing one sort value per document, which
is later used on the coordinating node to resolve the global top documents.
However, this is problematic on string fields with
`missing: _first, order: desc` or `missing: _last, order: asc` given that there
is no such thing as a string that compares greater than any other string. Today
we use a string containing a single code point which is the maximum allowed code
point but this is a hack: instead we should inform the coordinating node that
the document had no value and let it figure out how it should be sorted
depending on whether missing values should be sorted first or last.
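For reference, this is the kind of sort that exercises the problematic path, using the documented `missing` option of the search API (index and field names are illustrative):

```sh
curl -XGET 'http://localhost:9200/test/_search' -d '{
  "sort" : [
    { "name" : { "order" : "desc", "missing" : "_first" } }
  ]
}'
```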
Close #9155
Simplify and consolidate ShardRouting construction. Make sure that there is really only one place it gets created, when a shard is first created in unassigned state; from there on, it is either copy constructed or built internally as a target for relocation.
This change helps make sure that within our codebase data carried by the ShardRouting is not lost as the shard goes through transitions, and can help simplify the addition of more data on it (like uuid).
For testing, a centralized TestShardRouting allows creating testable versions of ShardRouting, which don't need to be as strict as the non-test codebase. This can be cleaned up more later on, but it is a good start.
closes #12125
This makes FuzzyQueryBuilder and Parser take an Object as a value using the
same logic as termQuery, so that numbers, dates or Strings would be properly
handled.
Relates #11865
Closes #12020
This converts the tracking of jars and classes in JarHell to use
Path objects, instead of URL. This makes for nicer printing
of the underlying path when an error does occur.
This commit defaults fuzzy_transpositions on fuzzy queries to true. This means that by default, transpositions will now count as a single
edit.
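A sketch of requesting the old behavior explicitly, assuming the fuzzy query exposes the flag as `transpositions` (index and field names are illustrative):

```sh
curl -XGET 'http://localhost:9200/test/_search' -d '{
  "query" : {
    "fuzzy" : {
      "user" : { "value" : "ki", "transpositions" : false }
    }
  }
}'
```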
Closes #9278
Today the engine is marked as corrupted regardless of
the failure type. This change marks the engine as corrupted only when the failure
is caused by an actual index corruption. When an engine is failed for other
reasons, the engine is only closed without removing the shard state.
closes #11788
Plugin Manager can now use another, simplified form when a user wants to install an official plugin hosted at the elasticsearch download service.
The form we use is:
```sh
bin/plugin install pluginname
```
As plugins now share the same version as elasticsearch, the plugin manager script can automatically guess its exact current version.
Also, the download service will now use the `/org.elasticsearch.plugins/pluginName/pluginName-version.zip` URL path to download a plugin.
If the older form is provided (`user/plugin/version` or `user/plugin`), we will still use:
* elasticsearch download service at `/user/plugin/plugin-version.zip`
* maven central with groupId=user, artifactId=plugin and version=version
* github with user=user, repoName=plugin and tag=version
* github with user=user, repoName=plugin and branch=master if no version is set
Note that community plugin providers can use other download services by using `--url` option.
If you try to use the new form with a non-core elasticsearch plugin, the plugin manager will reject
it and will list all known core plugins.
```
Usage:
-u, --url [plugin location] : Set exact URL to download the plugin from
-i, --install [plugin name] : Downloads and installs listed plugins [*]
-t, --timeout [duration] : Timeout setting: 30s, 1m, 1h... (infinite by default)
-r, --remove [plugin name] : Removes listed plugins
-l, --list : List installed plugins
-v, --verbose : Prints verbose messages
-s, --silent : Run in silent mode
-h, --help : Prints this help message
[*] Plugin name could be:
elasticsearch-plugin-name for Elasticsearch 2.0 Core plugin (download from download.elastic.co)
elasticsearch/plugin/version for elasticsearch commercial plugins (download from download.elastic.co)
groupId/artifactId/version for community plugins (download from maven central or oss sonatype)
username/repository for site plugins (download from github master)
Elasticsearch Core plugins:
- elasticsearch-analysis-icu
- elasticsearch-analysis-kuromoji
- elasticsearch-analysis-phonetic
- elasticsearch-analysis-smartcn
- elasticsearch-analysis-stempel
- elasticsearch-cloud-aws
- elasticsearch-cloud-azure
- elasticsearch-cloud-gce
- elasticsearch-delete-by-query
- elasticsearch-lang-javascript
- elasticsearch-lang-python
```
AbstractFieldMapper is the only direct base class of FieldMapper.
This change moves all AbstractFieldMapper functionality into
FieldMapper, since there is no need for 2 levels of abstraction.
Each scroll on a scan causes a query to be executed. This commit adds support for these indirect queries to count against the search stats.
Additionally, this commit adds three new search stats: scroll_count, scroll_time_in_millis, and scroll_current. scroll_count tracks the
number of completed scrolls. scroll_time_in_millis tracks the total time that scrolls were held open. scroll_current tracks the number of
scrolls currently open.
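The new counters should show up wherever search stats are exposed, e.g. via the indices stats API:

```sh
# search stats (including the new scroll section) for all indices
curl -XGET 'http://localhost:9200/_stats/search?pretty'
```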
Closes #9109
In the testing infra, one can simulate node GCs, network issues and other problems by adding a disruption to the test cluster. Those disruptions are automatically removed after the test is done. At the moment each disruption indicates how long it will take the cluster to heal once the disruption is removed, and the test cluster waits for this amount of time. However, more often than not this is an upper bound, causing a much longer wait than needed. Instead we should push the responsibility of healing to the disruption itself, where we can be smarter about what we wait for.
Closes #12071
When using `awaitBusy`, sometimes you might not want to keep doubling the time between two runs indefinitely.
For example, let's say it will probably take 30 seconds to run a test.
When doubling the time every iteration, you will most likely wait longer than needed:
|iteration|ms |s |duration (ms)|duration (s)|
|-----------|-------------|-----------|-----------|-----------|
|1|1|0.001|1|0.001|
|2|2|0.002|3|0.003|
|3|4|0.004|7|0.007|
|4|8|0.008|15|0.015|
|5|16|0.016|31|0.031|
|6|32|0.032|63|0.063|
|7|64|0.064|127|0.127|
|8|128|0.128|255|0.255|
|9|256|0.256|511|0.511|
|10|512|0.512|1023|1.023|
|11|1024|1.024|2047|2.047|
|12|2048|2.048|4095|4.095|
|13|4096|4.096|8191|8.191|
|14|8192|8.192|16383|16.383|
|15|16384|16.384|32767|32.767|
|16|32768|32.768|65535|65.535|
|17|65536|65.536|131071|131.071|
|18|131072|131.072|262143|262.143|
|19|262144|262.144|524287|524.287|
|20|524288|524.288|1048575|1048.575|
|21|1048576|1048.576|2097151|2097.151|
For example here, if the task is successful after 35 seconds, we will most likely have to wait for 32s more before the Predicate is run again.
With this patch, the maximum sleep time is now set to 1 second.
This pipeline aggregation runs a script on each bucket in the parent aggregation to determine whether the bucket is kept in the final aggregation tree. If the script returns true the bucket is retained; if it returns false the bucket is dropped.
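A request sketch, assuming the aggregation is exposed as `bucket_selector` and takes the usual `buckets_path` map (index, field and aggregation names are illustrative):

```sh
curl -XGET 'http://localhost:9200/sales/_search' -d '{
  "aggs" : {
    "sales_per_month" : {
      "date_histogram" : { "field" : "date", "interval" : "month" },
      "aggs" : {
        "total_sales" : { "sum" : { "field" : "price" } },
        "sales_bucket_filter" : {
          "bucket_selector" : {
            "buckets_path" : { "totalSales" : "total_sales" },
            "script" : "totalSales > 200"
          }
        }
      }
    }
  }
}'
```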
If you are using the default date or the named identifiers of dates,
the current implementation allowed reading a year with only one
digit. In order to make this more strict, this fixes a year to be at
least 4 digits. The same applies for month, day, hour, minute and seconds.
Also the new default is `strictDateOptionalTime` for indices created
with Elasticsearch 2.0 or newer.
In addition, a couple of date formats that were mentioned in the
documentation but not exposed have now been exposed.
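A mapping sketch using the new strict default explicitly (index, type and field names are illustrative):

```sh
curl -XPUT 'http://localhost:9200/test' -d '{
  "mappings" : {
    "tweet" : {
      "properties" : {
        "created" : { "type" : "date", "format" : "strictDateOptionalTime" }
      }
    }
  }
}'
```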
Closes #6158
Field names containing dots can cause problems. For example, @jpountz
made this recreation, which causes no error but can result in a
serialization exception if the type already exists:
https://gist.github.com/jpountz/8c66817e00a322b81f85
But this is not just a potential conflict. It also has larger problems,
since only the leaf mapper is created. The intermediate "foo" object
field would not exist if only "foo.bar" was in the mappings.
This change forbids the use of dots in field names. It also
fixes an issue with passing through the update_all_types setting,
which was always set to true whenever a type already existed (!).
I do not think we should worry about backwards compatibility here. This
should be a hard break (and added to the migration plugin).
This commit adds logic to prefer shards with higher priority
or from newer indices to be allocated first if they are unallocated.
This commit allows users to set `index.priority` to a non-negative integer to
prioritize index recovery for certain indices. This setting is dynamically updateable
and defaults to `0`. If two indices have the same priority this change takes the creation
date into account to prioritize shards from newer indices, which is important in the time-based
indices use case.
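For example, bumping the priority of a recent index so its shards recover first (index name is illustrative):

```sh
curl -XPUT 'http://localhost:9200/logs-2015-07-22/_settings' -d '{
  "index.priority" : 10
}'
```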
Closes #11787
When a bulk request fails on a Delete or Update request, the BulkItemResponse
reports an incorrect "index" operation in the response. This PR fixes this
for the case of closed indices as reported in #9821, but also for
other failures, and adds tests for the two cases covered.
Closes #9821
The specialization can cause stack overflows if an exception is both an
ElasticsearchWrapperException and an ElasticsearchException.
This commit just relies on the unwrap logic now to find the cause, and only
renders if the rendering exception is the cause; otherwise it forwards
to the generic exception rendering.
Closes #11994
While MappedFieldType contains settings for doc values and fielddata,
AbstractFieldMapper had special logic in its constructor that
required setting these on the field type from there. This change
removes those settings from the AbstractFieldMapper constructor.
As a result, defaultDocValues(), defaultFieldType() and
defaultFieldDataType() are no longer needed.
This commit merges the pre-existing special exception that
allowed associating headers with exceptions and the elasticsearch
base class `ElasticsearchException`. This allows for more generic use
of exceptions where plugins can associate meta-data with any elasticsearch
base exception to control behavior etc.
This also adds a generic SecurityException to allow plugins to pass on
information based on the RestStatus.
If the version of a node is lower than the minimum supported version or higher than the maximum supported version, the node shouldn't be allowed to join the cluster, nor should nodes join it if it is the elected master.
Closes #11924
Removed the ParseField#match variant that accepts the field name only, without parse flags. Such a method is harmful as it defaults to empty parse flags, meaning that no deprecation exceptions will be thrown in strict mode, which defeats the purpose of using ParseField. Unfortunately such a method was used in a lot of places where the parse flags weren't easily accessible (outside of query parsing), and in a lot of other places just by mistake.
Parse flags have been introduced now as part of SearchContext and mappers where needed. There are a few places (e.g. java api requests) where it is not possible to retrieve them as they depend on the index settings; in that case we explicitly pass in EMPTY_FLAGS for now, but this has to be seen as an exception.
Closes #11859
This commit changes MessageChannelHandler to not skip the underlying
ChannelBuffer while a StreamInput is open on top of it. In case e.g. compression
is enabled, this prevents failures due to the fact that the decompressed
stream input expects a certain structure that it can't verify if the position
of the underlying buffer is changed.
Added dynamic arguments to `ElasticsearchException`, `ElasticsearchParseException` and `ElasticsearchTimeoutException`.
This helps keep the exception messages clean and readable and promotes consistency around wrapping dynamic args with `[` and `]`.
This is just the start, we need to propagate this to all exceptions deriving from `ElasticsearchException`. Also, work started on standardizing on lower case logging & exception messages. We need to be consistent here...
- Uses the same `LoggerMessageFormat` as used by our logging infrastructure.
The work around for resolving `now` doesn't need to be used for aliases, because alias filters are parsed at search time. However it can't be removed, because the percolator relies on it.
Parent/child can be specified again in alias filters; this now works again because alias filters are parsed at search time. Parent/child will also use the late query parse work around, to make sure to do the final preparations when the search context is around. This allows the aliases api to validate the parent/child queries without failing because there is no search context.
Closes #10485
Eventually, the field type should not need any names, because there
will be only one name which leads to finding it (the full name, which is
also the index name). However, the short or "simple" name (using java
terminology for class names) is needed just in a couple places, for
serialization.
This change moves the simple name out of MappedFieldType.Names, into
Mapper, and makes Mapper and FieldMapper abstract classes.
Today we lose the RestStatus code for non-serializable exceptions.
This can be tricky if they are supposed to signal certain situations
like authentication errors etc. This commit adds support for carrying on
the RestStatus in the NotSerializableExceptionWrapper.
The filters aggregation now has an option to add an 'other' bucket which will, when turned on, contain all documents which do not match any of the defined filters. There is also an option to change the name of the 'other' bucket from the default of '_other_'.
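A sketch of the new option, assuming it is toggled with an `other_bucket` flag on the filters aggregation (index, field and bucket names are illustrative):

```sh
curl -XGET 'http://localhost:9200/logs/_search' -d '{
  "aggs" : {
    "messages" : {
      "filters" : {
        "other_bucket" : true,
        "filters" : {
          "errors" :   { "term" : { "body" : "error" } },
          "warnings" : { "term" : { "body" : "warning" } }
        }
      }
    }
  }
}'
```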
Closes #11289
This change means that when the skip gap policy is used, the bucket script aggregation will skip executing the script on a bucket if any of the required bucket_paths are missing for that bucket. No aggregation will be added to the bucket, and the aggregation will move on to the next bucket.
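A sketch of a bucket script aggregation opting into this behavior, assuming the policy is selected via the usual `gap_policy` parameter (index, field and aggregation names are illustrative):

```sh
curl -XGET 'http://localhost:9200/sales/_search' -d '{
  "aggs" : {
    "sales_per_month" : {
      "date_histogram" : { "field" : "date", "interval" : "month" },
      "aggs" : {
        "total" : { "sum" : { "field" : "price" } },
        "count" : { "value_count" : { "field" : "price" } },
        "avg_price" : {
          "bucket_script" : {
            "buckets_path" : { "total" : "total", "count" : "count" },
            "script" : "total / count",
            "gap_policy" : "skip"
          }
        }
      }
    }
  }
}'
```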
This commit changes the postrm script so that it prints error messages instead of failing & exiting when the deletion of a directory failed while removing an RPM/DEB package.
Closes #11373
This allows a lot of null checks to be removed where we were always falling back to ValueFormat.RAW anyway. Now the format is set to ValueFormat.RAW when no alternative is suitable.
Closes #10594
Field stats index constraints allow omitting all field stats for indices that don't match the constraint. An index
constraint can exclude indices' field stats based on the `min_value` and `max_value` statistics. This option is only
useful if the `level` option is set to `indices`.
For example index constraints can be useful to find out the min and max value of a particular property of your data in
a time based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
   "fields" : ["answer_count"],
   "index_constraints" : {
      "creation_date" : {
         "min_value" : {
            "gte" : "2014-01-01T00:00:00.000Z"
         },
         "max_value" : {
            "lt" : "2015-01-01T00:00:00.000Z"
         }
      }
   }
}'
Closes #11187
"Root" is a very confusing term for meta field mappers. This change
renames "RootMapper" to "MetadataFieldMapper" and simplifies
how metadata mappers are setup.
It also requires that metadata mappers are now a FieldMapper
(MetadataFieldMapper extends from AbstractFieldMapper). The only
use of a root mapper that wasn't a field mapper was the theoretical
"external" root mapper (just a test mapper). But it doesn't make
sense to not have an actual field, and this falls in line with
the hopefully eventual collapsing of AbstractFieldMapper/FieldMapper/Mapper.
- shard listing actions underpinning shard allocation do not have access to that new node yet (causing errors during shard allocation, see #11923)
- the very first cluster state published to a node already has shard assignments to it. This surfaced other issues we are working to fix separately.
This commit changes the reroute to be done after processing the initial join cluster state, to side step these issues while we work on a longer term solution.
Closes #11960
We had several problems with Java Serialization in the past. At some point
in the Java 1.7.x series, JDKs were not compatible anymore when java
serialization (ObjectStream) was used to exchange objects. In elasticsearch
we used this to serialize exceptions across the wire, which caused several problems
with incompatible JDKs. While causing lots of trouble, this essentially prevented
users from moving forward and upgrading their JVMs. To prevent these kinds of issues
this commit removes the dependency on java serialization entirely and bans the
usage of ObjectOutputStream and ObjectInputStream entirely.
Yet, we can't fully serialize all exceptions anymore, such that this commit
is best effort and adds hand written serialization to all elasticsearch exceptions
as well as to a selected set of JDK and Lucene exceptions (see StreamOutput#writeThrowable /
StreamInput.readThrowable). Stacktraces should be preserved for all exceptions, while
several names might be replaced with ElasticsearchException if there is no mapping for
the given exception.
In order to support older RPM based distributions like CentOS5,
we should have one RPM available, which is not signed.
This commit creates an unsigned RPM first, then moves it over to
target/releases during the build, then builds a signed RPM.
The unsigned one is uploaded via S3, whereas the signed one is
used for the repositories.
In addition, you can now build an RPM without having to specify
any gpg credentials, due to offloading this into a maven profile
that is only activated when specifying the `rpm.sign` property.
Closes #11587
instead of maintaining a thread local cache in the PercolatorQueriesRegistry.
Before, PercolatorQueriesRegistry had its own cache, because all the queries had to forcefully opt out of caching. Nowadays in master, small segments are never cached by the query cache, so the reason for the dedicated cache is no longer valid.
Today we keep track of how often filters are used at the index level in order
to decide whether they should be cached or not. This is an issue if you have
several shards of the same index on the same node as it will multiply statistics
by the number of shards that you have for this index on the node, which defeats
the purpose of waiting for a filter to be reused before caching it.
If the translog UUID is corrupted we should not convert it
to UTF-8 since it might be invalid. Instead we should compare
the UTF-8 byte representation directly.
This is more consistent with the other logging it makes, and since it can be used in many operations the output can be more verbose (without adding too much info as to who timed out exactly, which we can fix separately). If need be, the caller of the observer can log a higher level message.
Closes #11722
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.
A known issue is that package/logger names no longer match settings names;
please speak up if you think this is an issue.
Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569 but putting `cache` before
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
* index.cache.query.enable -> index.requests.cache.enable
* indices.cache.query.size -> indices.requests.cache.size
* indices.cache.filter.size -> indices.queries.cache.size
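For example, the renamed per-index setting in action, via the update settings API:

```sh
curl -XPUT 'http://localhost:9200/test/_settings' -d '{
  "index.requests.cache.enable" : true
}'
```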
Close #11569
Today, we disable CORS by default, but if a user simply enables CORS, their instance of
elasticsearch will allow cross origin requests from anywhere, as the default value for allowed
origins is `*`.
This changes the default to be `null` so that no origins are allowed and the user must explicitly
specify the origins they wish to allow requests from. The documentation also mentions that there
is a security risk in using `*` as the value.
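With this change, anyone who wants CORS has to opt in explicitly, e.g. in `elasticsearch.yml` using the `http.cors.*` settings (the origin value below is illustrative):

```
http.cors.enabled: true
http.cors.allow-origin: "http://localhost:8080"
```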
Closes #11169
In order to be backwards compatible, indices created before 2.x must support
indexing of a unix timestamp and its configured date format. Indices created
with 2.x must configure the `epoch_millis` date formatter in order to
support this.
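A mapping sketch for a 2.x index that still needs to accept unix timestamps, assuming formats can be combined with `||` as before (index, type and field names are illustrative):

```sh
curl -XPUT 'http://localhost:9200/test' -d '{
  "mappings" : {
    "tweet" : {
      "properties" : {
        "created" : { "type" : "date", "format" : "dateOptionalTime||epoch_millis" }
      }
    }
  }
}'
```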
Relates #10971
Today we are very lenient in parsing the translog files. This is
actually not necessary since we have a clear run-once upgrade path.
All files are converted into the new file name pattern such that we
only need to look at old file patterns in the context of the upgrade.
This commit makes parsing really strict, with the exception of the upgrade path.
This adds a new pipeline aggregation, the cumulative sum aggregation. This is a parent aggregation which must be specified as a sub-aggregation to a histogram or date_histogram aggregation. It will add a new aggregation to each bucket containing the sum of a specified metric over this and all previous buckets.
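A request sketch, assuming the aggregation is exposed as `cumulative_sum` with a `buckets_path` pointing at the metric to sum (index, field and aggregation names are illustrative):

```sh
curl -XGET 'http://localhost:9200/sales/_search' -d '{
  "aggs" : {
    "sales_per_month" : {
      "date_histogram" : { "field" : "date", "interval" : "month" },
      "aggs" : {
        "sales" : { "sum" : { "field" : "price" } },
        "cumulative_sales" : {
          "cumulative_sum" : { "buckets_path" : "sales" }
        }
      }
    }
  }
}'
```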
Today we mark a translog as upgraded by adding a marker to the engine commit.
Yet, this marker was only added if there was no translog present before, i.e. only
if we have a fresh engine, which misses the entire point. This commit
also adds a backwards index test that ensures we can open old indices more than once,
i.e. mark the index as upgraded.
Closes #11858
In Lucene 5x the exception thrown when the highlighter encounters a huge term
is a BytesRefHash.MaxBytesLengthExceededException, but in Lucene 4x it is
wrapped in a RuntimeException. Therefore, it seems safer to unwrap this.
There was only a single actual "use" of close, for a threadlocal
in VersionFieldMapper. However, that threadlocal is completely
unnecessary, so this change removes the threadlocal and
close() altogether.
This commit consolidates several abstractions on the shard level in
ordinary classes not managed by the shard level guice injector.
Several classes have been collapsed into IndexShard and IndexShardGatewayService
was cleaned up to be more lightweight and self-contained. It has also been moved into
the index.shard package, and its operation is renamed from recovery from "gateway" to recovery
from "store" or "shard_store".
Closes #11847
This commit folds ShardRouting, ImmutableShardRouting and MutableShardRouting
into ShardRouting. All mutators are package private anyway today so it's just
unnecessary abstraction.
ShardRoutings are now frozen once they are added to the IndexRoutingTable
to prevent modifications outside of the allocation code.
This commit makes the get and search APIs always return `_parent`, `_routing`,
`_timestamp` and `_ttl` in addition to `_id` and `_type`. This way, consumers
always have all required information in order to reindex a document.
Currently the SnapshotsService is concerned both with maintaining the global snapshot lifecycle on the master node and with keeping track of individual shards on the data nodes. This refactoring separates the two areas of concern by moving all shard-level operations into a separate SnapshotShardsService.
Closes #11756
Currently the filter cache is configured to have a maximum size in bytes of 10%
of the JVM memory, and a maximum number of cached filters (across all segments
of all shards on the same node) of 100000. I would like to change the latter to
a more reasonable value of 1000.
Given that we track the 256 most recently used filters per index and only
cache those that have been seen 5 times or more, a single index cannot have more
than 50 hot filters, so a maximum number of cached filters of 1000 per node
should be more than necessary.
Today, we have a scheduled reroute that kicks in every 10 seconds and checks if a
reroute is needed. We use it when adding nodes, since we don't reroute right
away once a node is added, and give it a time window to add additional nodes.
We do have the recover after nodes settings and such in order to wait for enough
nodes to be added, and also, it really depends at what point of the 10s window
you end up; sometimes it might not be effective at all. In general, it's historic
from the times before we had recover after nodes and such.
This change removes the 10s scheduling, simplifies RoutingService, and adds
an explicit reroute when a node is added to the system. It also adds unit tests
for RoutingService.
closes #11776
Since elasticsearch doesn't shade artifacts anymore (see #11522), the dependencies list for RPM/DEB must be updated. Now we package all maven libs by default except the generated -shaded/-tests/-test-sources JARs and slf4j-api (marked as optional).