The motivation for this refactoring and cleanup is that each request handler
ends up implementing too many methods that could instead be provided when the
request handler is registered, including a prototype-like class that can be
used to instantiate new request instances for streaming.
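A minimal sketch of the registration style this enables, with a hypothetical action name and request class (the exact registerRequestHandler signature here is an approximation): the executor and a request factory acting as the prototype are supplied once, at registration time.

```java
// Sketch only: "internal:example/action" and ExampleRequest are illustrative.
// The factory serves as the prototype used to create fresh request instances
// when a request is read off the stream.
transportService.registerRequestHandler(
        "internal:example/action",      // action name (hypothetical)
        ExampleRequest::new,            // prototype/factory for streamed-in requests
        ThreadPool.Names.SAME,          // executor chosen at registration time
        new TransportRequestHandler<ExampleRequest>() {
            @Override
            public void messageReceived(ExampleRequest request, TransportChannel channel) throws Exception {
                channel.sendResponse(TransportResponse.Empty.INSTANCE);
            }
        });
```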
closes #10730
Today we have a lot of bloat in the IndexStore and related classes. The IndexStore interface
is unneeded since we always subclass AbstractIndexStore, and it hides circular dependencies
that are problematic when introduced: Guice proxies them if an interface is involved, which is
bad in general. This commit removes most of the bloat classes and unifies them all
into a single class, since they are all purely structural and don't encode any functionality.
Refactor TransportShardReplicationOperationAction state management into clearly separated Primary and Replication phases. The primary phase is responsible for routing the request to the node holding the primary, validating it and performing the operation on the primary. The replication phase is responsible for sending the request to the replicas and managing their responses.
This also adds unit test infrastructure for this class, plus some basic tests that we can extend as development continues.
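A rough sketch of the resulting control flow; the two phase names come from the description above, while the method bodies here are assumptions:

```java
// Illustrative shape only: the primary phase routes, validates and executes
// the request on the primary, then hands off to the replication phase, which
// fans out to the replicas and tracks their responses.
class PrimaryPhase {
    void run() {
        // 1. resolve routing and locate the node holding the primary shard
        // 2. validate the request (cluster blocks, index exists, etc.)
        // 3. execute the operation on the primary
        // 4. on success, kick off replication with the primary's result
        new ReplicationPhase().run();
    }
}

class ReplicationPhase {
    void run() {
        // send the operation to every assigned replica, track the pending
        // count, and complete the response listener once all replicas have
        // answered or had their failures reported
    }
}
```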
Closes #10749
This commit adds support for structured errors / failures / exceptions
on the elasticsearch REST layer. Exceptions are rendered with at least
a `type` and a `reason` corresponding to the exception name and the message.
Some exceptions, like the ones associated with an index or a shard, will have
additional information about the index the exception was triggered on or the
shard, respectively.
Each rendered response will also contain a list of root causes, which is a list
of distinct shard level errors returned for the request. Root causes are the lowest
level elasticsearch exception found per shard response and are intended to be displayed
to the user to indicate the source of the exception.
Shard level responses are grouped by their type and reason by default to reduce the amount
of duplicates returned. However, the same exception returned from different indices will not be
grouped.
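For illustration, a rendered error body could look roughly like this (the values are made up and the exact field layout is an approximation):

```json
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "index": "my_index"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "index": "my_index"
  },
  "status": 404
}
```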
Closes #3303
Disabling doc values or trying to index hash values are not
correct uses of the murmur3 field type, and just cause
problems. This disallows changing doc values or index options
for 2.0+ indices.
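The intended usage is a hash sub-field that keeps the defaults, along these lines (illustrative mapping; attempts to set `doc_values` or index options on the murmur3 field are now rejected for 2.0+ indices):

```json
{
  "properties": {
    "my_field": {
      "type": "string",
      "fields": {
        "hash": {
          "type": "murmur3"
        }
      }
    }
  }
}
```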
closes #10465
This commit splits the current ClusterBlockLevel.METADATA into two distinct blocks, ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE. This makes it possible to distinguish between
an operation that modifies the index or cluster metadata and an operation that does not change any metadata.
Before this commit, many operations were blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when
the cluster or the index is read-only.
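After the split, the block levels look roughly like this (a minimal sketch; READ and WRITE are shown for context):

```java
// Sketch of the resulting levels in ClusterBlockLevel.
public enum ClusterBlockLevel {
    READ,            // blocks operations that read documents
    WRITE,           // blocks operations that index or delete documents
    METADATA_READ,   // blocks operations that only read index/cluster metadata
    METADATA_WRITE   // blocks operations that modify index/cluster metadata
}
```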
Related to #8102, #2833
Closes #3703
Closes #5855
Closes #10521
Closes #10522
While dynamic mapping updates have been using the same code path as updates from the
API when applied on a data node since #10593, they were still using a different
code path on the master node. This commit makes dynamic updates processed the
same way as updates from the API, which also seems to handle acknowledgements
better (I could not reproduce the ConcurrentDynamicTemplateTests
failure anymore). It also adds more checks, like for instance that indexing on
replicas should not trigger dynamic mapping updates since they should have been
handled on the primary before.
Close #10720
The field stats api returns field level statistics such as the lowest and highest values and the number of documents that have at least one value for a field.
An api like this can be useful to explore a data set you don't know much about. For example, you can figure out what the lowest and highest response times are, so that you can create a histogram or range aggregation with sane settings.
This api doesn't run a search to figure these statistics out, but rather uses the Lucene index to look them up (using the Terms class in Lucene), so finding out these stats for fields is cheap and quick.
The min/max values are based on the type of the field: for a numeric field the min/max are numbers, for a date field they are the min/max dates, and for other fields the min/max are term based.
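A hypothetical exchange might look like this (response trimmed, all values made up; the exact statistic names may differ):

```json
GET /_field_stats?fields=response_time

{
  "indices": {
    "_all": {
      "fields": {
        "response_time": {
          "max_doc": 100000,
          "doc_count": 95000,
          "min_value": 2,
          "max_value": 1250
        }
      }
    }
  }
}
```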
Closes #10523
- Randomizes the metric type between min/max/avg. Should have identical behavior, but good to test
- Fixes improper handling of gaps due to a bug in the production of the "expected" dataset. Due to this fix,
randomization of gap policy was re-enabled
- Bunch of renaming to be more descriptive and less verbose
This commit adds the ability for moving average models to output a "prediction" based on the current
moving average model. For the simple, linear and single models, this prediction simply converges on the
moving average's mean at the last point, leading to a straight line. For the double model, it will
predict in the direction of the linear trend (either globally or locally, depending on beta).
Also adds some more tests.
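For reference, the double model's behavior corresponds to the standard Holt linear forecast (notation assumed here, not taken from the code): prediction(t + k) = level(t) + k * trend(t), i.e. the last smoothed level extended along the last smoothed trend, whereas the other models hold level(t) flat for every step ahead.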
Closes #10545
Added an initial Script construct to unify the parameters typically
passed to methods in the ScriptService. This changes the way several public
methods in the ScriptService are called, along with all the callers,
since they must now wrap the parameters they pass in a Script object. In the
future, parameter parsing can also be moved into this construct, along with
ToXContent.
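A minimal sketch of a call site after the change (the constructor shape and argument order are assumptions):

```java
// Callers bundle source, type, language and params into one object instead
// of passing them to the ScriptService as separate arguments.
Map<String, Object> params = Collections.singletonMap("factor", 1.2);
Script script = new Script(
        "doc['price'].value * factor",  // script source
        ScriptType.INLINE,              // inline vs. indexed vs. file
        "groovy",                       // language
        params);
```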
closes #10649
When using a filesystem that may have lag between an index being created
on the primary and appearing on the replica, creation of the ShadowEngine can
fail because there are no segments in the directory yet.
In these situations, we retry during engine creation to wait until an
index is present in the directory. The wait delay is
configurable, defaulting to waiting 5 seconds for an index to
become available.
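A minimal sketch of the wait loop, assuming an indexSettings and store in scope and a hypothetical setting key:

```java
// Poll the directory until a Lucene commit appears, or give up after the
// configured wait (defaulting to 5 seconds).
TimeValue waitTimeout = indexSettings.getAsTime(
        "index.shadow.wait_for_initial_commit",  // hypothetical setting key
        TimeValue.timeValueSeconds(5));
long deadline = System.currentTimeMillis() + waitTimeout.millis();
while (Lucene.indexExists(store.directory()) == false) {
    if (System.currentTimeMillis() >= deadline) {
        throw new IllegalStateException("no index found in directory after waiting " + waitTimeout);
    }
    Thread.sleep(100); // re-check until the primary's commit becomes visible here
}
```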
Resolves #10637