This change adds a Fixture class for use by gradle. A Fixture is an
external process that integration tests will use. It can be added as a
dependsOn for integTest, and will automatically be shut down upon success
or failure, with relevant information dumped on failure. There is
also an example fixture in this change.
* Pipeline store can now only start when there is no .ingest index or all primary shards of .ingest have been started
* IngestPlugin adds the `node.ingest` setting and sets it to `true`. This is used to figure out to which nodes the refresh request should be sent. This setting isn't yet configurable. This will be done in a follow up issue.
* Removed the background pipeline updater and added logic to deal with specific scenarios to reload all pipelines.
* Ingest services are no longer managed by Guice. Only the bootstrapper gets managed by Guice, and that constructs
all the services/components ingest will need.
Added ingest wide template infrastructure to IngestDocument
Added a TemplateService interface that the ingest framework uses
Added a TemplateService implementation that the ingest plugin provides that delegates to the ES script service
Cut SetProcessor over to use the template infrastructure for the `field` and `value` settings.
Removed the MetaDataProcessor
Removed dependency on mustache library
Added qa ingest mustache rest test so that the ingest and mustache integration can be tested.
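As an illustration of the split described above, here is a minimal sketch of what such a template abstraction might look like (the interface and class names are hypothetical, not the actual ingest code):
```java
// Hypothetical sketch of the template abstraction; names and signatures are illustrative only.
import java.util.Map;

interface TemplateService {
    Template compile(String template);

    interface Template {
        // renders the compiled template against the document/ingest data
        String execute(Map<String, Object> model);
    }
}

// Trivial stand-in implementation; the real plugin implementation would delegate to the ES script service.
class NoopTemplateService implements TemplateService {
    @Override
    public Template compile(String template) {
        return model -> template; // no interpolation, just returns the raw string
    }
}
```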
We have the Text API, which is essentially a wrapper around a String and a
BytesReference and then we have 3 implementations depending on whether the
String view should be cached, the BytesReference view should be cached, or both
should be cached.
This commit merges everything into a single Text that is essentially the old
StringAndBytesText impl.
Long term we should look into whether this API has any performance benefit or
if we could just use plain strings. This would greatly simplify all our other
APIs that currently use Text.
This changes the `lenient` parameter to `missingClasses`. Eventually I will remove this boolean and we can handle missing classes via the normal whitelist.
It also adds a check for sheisty classes (jar hell with the jdk).
This is inspired by the lucene "sheisty" classes check, but it has false positives. This check is more evil: it validates every class file against the extension classloader as a resource, to see if it exists there. If so: jar hell.
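For reference, the core idea can be sketched with plain JDK calls (a simplified illustration, not the actual JarHell implementation):
```java
// Simplified sketch of the "sheisty classes" idea, not the real JarHell code:
// if the extension classloader can already resolve a class we ship, that's jar hell with the JDK.
public class SheistyCheckSketch {
    static boolean isSheisty(String className) {
        // the extension classloader is the parent of the system (application) classloader
        ClassLoader ext = ClassLoader.getSystemClassLoader().getParent();
        String resource = className.replace('.', '/') + ".class";
        return ext.getResource(resource) != null;
    }

    public static void main(String[] args) {
        // java.lang.String lives in the JDK, so this prints true;
        // a class coming only from our own jars should print false.
        System.out.println(isSheisty("java.lang.String"));
    }
}
```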
This jar hell is a problem for several reasons:
1. causes insanely-hard-to-debug problems (like bugs in forbidden-apis)
2. hides problems (like internal api access)
3. the code you think is executing, is not really executing
4. security permissions are not what you think they are
5. brings in unnecessary dependencies
6. it's jar hell
The more difficult problems are things like jython, where these classes are simply 'uberjarred' directly in, so you can't just fix them by removing a bogus dependency. And there is a legit reason for them to do that: they want to support java 1.4.
When creating a metadata mapper for a new type, we reuse an existing
configuration from an existing type (if any) in order to avoid introducing
conflicts. However this field type that is provided is considered as both an
initial configuration and the default configuration. So at serialization time,
we might only serialize the difference between the current configuration and
this default configuration, which might be different to what is actually
considered the default configuration.
This does not cause bugs today because metadata mappers usually override the
toXContent method and compare the current field type with Defaults.FIELD_TYPE
instead of defaultFieldType(), but I would still like to make this change to
avoid future bugs.
The `path` option allowed to index/store a field `a.b.c` under just `c` when
set to `just_name`. This "feature" has been removed in 2.0 in favor of `copy_to`
so we can remove the back compat in 3.x.
Today mappings are mutable because of two APIs:
- Mapper.merge, which expects changes to be performed in-place
- IncludeInAll, which allows changing in place whether values should be put in the
`_all` field.
This commit changes both APIs to return a modified copy instead of modifying in
place so that mappings can be immutable. For now, only the type-level object is
immutable, but in the future we can imagine making them immutable at the
index-level so that mapping updates could be completely atomic at the index
level.
Close#9365
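Conceptually the change is from an in-place merge to a copying merge, roughly like this simplified sketch (the type and field are made up for illustration, not the real Mapper API):
```java
// Illustrative only: the shape of an in-place merge vs. a copying merge.
final class FieldMapping {
    final String name;
    final boolean includeInAll;

    FieldMapping(String name, boolean includeInAll) {
        this.name = name;
        this.includeInAll = includeInAll;
    }

    // Old style (simplified): mutate this instance with the other mapping's settings,
    // which requires mutable fields and makes concurrent reads fragile.
    // void merge(FieldMapping other) { this.includeInAll = other.includeInAll; }

    // New style (simplified): return a new, merged copy and leave both inputs untouched.
    FieldMapping merge(FieldMapping other) {
        return new FieldMapping(name, other.includeInAll);
    }
}
```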
This change adds back the http.type setting. It also cleans up all the
transport related guice code to be consolidated within the
NetworkModule (as transport and http related stuff is what and how ES
exposes over the network). The setter methods previously used by some
plugins to override eg the TransportService or HttpServerTransport are
removed, and those plugins should now register a custom implementation
of the class with a name and set that using the appropriate config
setting. Note that I think ActionModule should also be moved into here,
to sit alongside the rest actions, but I left that for a followup.
closes#14148
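The registration pattern described above looks roughly like the following sketch; the class and method names are simplified stand-ins, not the exact NetworkModule API:
```java
// Simplified stand-in for the named-registration pattern; not the actual NetworkModule API.
import java.util.HashMap;
import java.util.Map;

class NetworkModuleSketch {
    private final Map<String, Class<?>> httpTransports = new HashMap<>();

    // a plugin registers its implementation under a name...
    void registerHttpTransport(String name, Class<?> implementation) {
        httpTransports.put(name, implementation);
    }

    // ...and the implementation actually used is selected by a config setting, e.g. http.type
    Class<?> resolveHttpTransport(String httpTypeSetting) {
        Class<?> impl = httpTransports.get(httpTypeSetting);
        if (impl == null) {
            throw new IllegalArgumentException("unknown http.type [" + httpTypeSetting + "]");
        }
        return impl;
    }
}
```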
Migrated from ES-Hadoop. Contains several improvements regarding:
* Security
Takes advantage of the pluggable security in ES 2.2 and uses that in order
to grant the necessary permissions to the Hadoop libs. It relies on a
dedicated DomainCombiner to grant permissions only when needed only to the
libraries installed in the plugin folder
Adds security checks for SpecialPermission/scripting and provides out of
the box permissions for the latest Hadoop 1.x (1.2.1) and 2.x (2.7.1)
* Testing
Uses a customized Local FS to perform actual integration testing of the
Hadoop stack (and thus to make sure the proper permissions and ACC blocks
are in place), without requiring extra permissions for testing.
If needed, a MiniDFS cluster is provided (though it requires extra
permissions to bind ports)
Provides a RestIT test
* Build system
Picks the build system used in ES (still Gradle)
This commit removes and now forbids all uses of
Collections#shuffle(List) and Random#<init>() across the codebase. The
rationale for removing and forbidding these methods is to increase test
reproducibility. As these methods use non-reproducible seeds, production
code and tests that rely on these methods contribute to
non-reproducibility of tests.
Instead of Collections#shuffle(List) the method
Collections#shuffle(List, Random) can be used. All that is required then
is a reproducible source of randomness. Consequently, the utility class
Randomness has been added to assist in creating reproducible sources of
randomness.
Instead of Random#<init>(), Random#<init>(long) with a reproducible seed
or the aforementioned Randomness class can be used.
Closes#15287
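For illustration, the reproducible pattern with plain JDK classes (the Randomness utility itself lives in the ES codebase and is not shown here):
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReproducibleShuffle {
    public static void main(String[] args) {
        // A fixed (or logged) seed makes the shuffle reproducible across runs.
        long seed = 42L;
        List<Integer> items = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));

        // Forbidden: Collections.shuffle(items) and new Random(), which use non-reproducible seeds.
        Collections.shuffle(items, new Random(seed));

        System.out.println(items); // same order on every run with the same seed
    }
}
```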
IndexResponse, DeleteResponse and UpdateResponse share some logic. This can be unified to a single DocWriteResponse base class. On top of that, some replication actions are no longer about write operations. This commit renames ActionWriteResponse to ReplicationResponse.
Lastly, some toXContent logic is moved from the Rest layer to the actual response classes, for more code sharing.
Closes#15334
Unify metadata map and source, and also add support for the _ingest prefix. Depending on the prefix (_source, _ingest, or none), we figure out which map to use, both for value retrieval and for modifications.
If one is using the ingest plugin and providing a pipeline id with the request, the chance that the source is going to be modified is 99%. We shouldn't worry about keeping track of whether something changed. That seemed useful at first so that we could save the cost of setting back the source (map to bytes) when not needed. Also, we are trying to unify metadata fields and source in the same map and that is going to complicate how we keep track of changes that happen in the source only. The best solution is to remove the flag.
IngestDocument now holds an additional map of transient metadata. The only field that gets added automatically is `timestamp`, which contains the timestamp of ingestion in ISO8601 format. In the future it will be possible to add or modify these fields, which will not get indexed, but they will be available via templates to all of the processors.
Transient metadata will be visible in the simulate api output, although it will never get indexed. Moved WriteableIngestDocument to the simulate package as it's only used by simulate and it's now modelled for that specific usecase.
Also took the chance to remove one IngestDocument constructor used only for testing (accepting only a subset of es metadata fields). While doing that, introduced some more randomizations to some existing processor tests.
Closes#15036
throw exception if a copy_to is within a multi field
Copy to within multi field is ignored from 2.0 on, see #10802.
Instead of just ignoring it, we should throw an exception if this
is found in the mapping when a mapping is added. For already
existing indices we should at least log a warning.
We remove the copy_to in any case.
related to #14946
This commit adds the infrastructure to make settings that are updateable
resettable, and changes the application of updates to be transactional. This means
setting updates are either applied or not. If the application fails, all values are rejected.
This initial commit converts all dynamic cluster settings to make use of the new infrastructure.
All cluster level dynamic settings are now resettable to their defaults or to the node level settings.
The infrastructure also allows listing default values and descriptions, which is not fully implemented yet.
Values can be reset using a list of keys or simple regular expressions. This has only been implemented on the java
layer yet. For instance, to reset all recovery settings to their defaults a user can just specify `indices.recovery.*`.
This commit also adds strict settings validation: if a setting is unknown or if a setting can not be applied, the entire
settings update request will fail.
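A minimal sketch of the all-or-nothing idea (illustrative types, not the actual settings infrastructure): validate every value first and apply only if the whole batch passes.
```java
// Illustrative sketch of transactional settings application: validate everything up front,
// apply only if the whole batch is valid. Not the actual ES settings code.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

class SettingsUpdaterSketch {
    private final Map<String, Consumer<String>> validators = new HashMap<>();
    private final Map<String, String> current = new HashMap<>();

    void register(String key, Consumer<String> validator) {
        validators.put(key, validator);
    }

    void apply(Map<String, String> updates) {
        // phase 1: validate all values; an unknown key or invalid value rejects the whole request
        for (Map.Entry<String, String> entry : updates.entrySet()) {
            Consumer<String> validator = validators.get(entry.getKey());
            if (validator == null) {
                throw new IllegalArgumentException("unknown setting [" + entry.getKey() + "]");
            }
            validator.accept(entry.getValue());
        }
        // phase 2: nothing failed, apply all values
        current.putAll(updates);
    }
}
```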
When using S3 or EC2, it was possible to use a proxy to access the EC2 or S3 API, but it was not possible to set a username and password for the proxy.
This commit adds support for this. Also, to make all that consistent, proxy settings for both plugins have been renamed:
* from `cloud.aws.proxy_host` to `cloud.aws.proxy.host`
* from `cloud.aws.ec2.proxy_host` to `cloud.aws.ec2.proxy.host`
* from `cloud.aws.s3.proxy_host` to `cloud.aws.s3.proxy.host`
* from `cloud.aws.proxy_port` to `cloud.aws.proxy.port`
* from `cloud.aws.ec2.proxy_port` to `cloud.aws.ec2.proxy.port`
* from `cloud.aws.s3.proxy_port` to `cloud.aws.s3.proxy.port`
New settings are `proxy.username` and `proxy.password`.
```yml
cloud:
    aws:
        protocol: https
        proxy:
            host: proxy1.company.com
            port: 8083
            username: myself
            password: theBestPasswordEver!
```
You can also set different proxies for `ec2` and `s3`:
```yml
cloud:
    aws:
        s3:
            proxy:
                host: proxy1.company.com
                port: 8083
                username: myself1
                password: theBestPasswordEver1!
        ec2:
            proxy:
                host: proxy2.company.com
                port: 8083
                username: myself2
                password: theBestPasswordEver2!
```
Note that `password` is filtered with `SettingsFilter`.
We also fix a potential issue in S3 repository. We were supposed to accept key/secret either set under `cloud.aws` or `cloud.aws.s3` but the actual code never implemented that.
It was:
```java
account = settings.get("cloud.aws.access_key");
key = settings.get("cloud.aws.secret_key");
```
We replaced that by:
```java
String account = settings.get(CLOUD_S3.KEY, settings.get(CLOUD_AWS.KEY));
String key = settings.get(CLOUD_S3.SECRET, settings.get(CLOUD_AWS.SECRET));
```
Also, we extract all settings for S3 in `AwsS3Service`, as is already the case for the `AwsEc2Service` class.
Closes#15268.
Since 2.2 we run all scripts with minimal privileges, similar to applets in your browser.
The problem is, they have unrestricted access to other things they can muck with (ES, JDK, whatever),
so they can still easily do tons of bad things.
This PR restricts what classes scripts can load via the classloader mechanism, to make life more difficult.
The "standard" list was populated from the old list used for the groovy sandbox: though
a few more were needed for tests to pass (java.lang.String, java.util.Iterator, nothing scary there).
Additionally, each scripting engine typically needs permissions to some runtime stuff.
That is the downside of this "good old classloader" approach, but I like the transparency and simplicity,
and I don't want to waste my time with any feature provided by the engine itself for this, I don't trust them.
This is not perfect and the engines are not perfect but you gotta start somewhere. For expert users that
need to tweak the permissions, we already support that via the standard java security configuration files, the
specification is simple, supports wildcards, etc (though we do not use them ourselves).
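Conceptually the restriction is a whitelisting classloader sitting in front of the script engine, roughly like this sketch (illustrative only; the real list and wiring live in the scripting modules and the security policy):
```java
// Conceptual sketch of a whitelisting classloader for scripts; illustrative only.
import java.util.Set;

class ScriptClassLoaderSketch extends ClassLoader {
    private final Set<String> allowed;

    ScriptClassLoaderSketch(ClassLoader parent, Set<String> allowed) {
        super(parent);
        this.allowed = allowed;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (allowed.contains(name) == false) {
            throw new ClassNotFoundException("class [" + name + "] is not whitelisted for scripts");
        }
        return super.loadClass(name, resolve);
    }
}
```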
Azure team released new versions of their Java SDK.
According to https://github.com/Azure/azure-sdk-for-java/wiki/Azure-SDK-for-Java-Features, it comes with 2 versions.
We should at least update to `0.9.0` of V1 but also consider moving to the new APIs (V2).
This commit first updates to latest API V1.
```xml
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-svc-mgmt-compute</artifactId>
    <version>0.9.0</version>
</dependency>
```
Closes#15209
Failures to merge a mapping can either come as a MergeMappingException if they
come from Mapper.merge or as an IllegalArgumentException if they come from
FieldTypeLookup.checkCompatibility. I think we should settle on one: this pull
request replaces all usage of MergeMappingException with
IllegalArgumentException.
The PipelineTests tried to test that the configured map/list in the set processor wasn't modified while documents were ingested. Creating a pipeline programmatically created more noise than the test needed. The new tests in IngestDocumentTests have the same goal, but are much smaller and clearer by testing directly against IngestDocument.
This commit removes and now forbids all uses of the type-unsafe empty
Collections fields Collections#EMPTY_LIST, Collections#EMPTY_MAP, and
Collections#EMPTY_SET. The type-safe methods Collections#emptyList,
Collections#emptyMap, and Collections#emptySet should be used instead.
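A minimal illustration of the difference (plain JDK):
```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class EmptyCollectionsExample {
    public static void main(String[] args) {
        // Type-unsafe (now forbidden): raw types that generate unchecked warnings.
        // List<String> names = Collections.EMPTY_LIST;
        // Map<String, Integer> counts = Collections.EMPTY_MAP;

        // Type-safe replacements: the compiler infers the element types.
        List<String> names = Collections.emptyList();
        Map<String, Integer> counts = Collections.emptyMap();

        System.out.println(names.size() + " " + counts.size());
    }
}
```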
This change attempts to simplify the gradle tasks for precommit. One
major part of that is using a "less groovy style", as well as being more
consistent about how tasks are created and where they are configured. It
also allows the things creating the tasks to set up inter task
dependencies, instead of assuming them (i.e. decoupling from tasks
elsewhere in the build).
Rename processor now checks whether the field to rename exists and throws an exception if it doesn't. It also checks that the new field to rename to doesn't exist yet, and throws an exception otherwise. Also we make sure that the rename operation is atomic, otherwise things may break between the remove and the set and we'd leave the document in an inconsistent state.
Note that the requirement for the new field name to not exist simplifies the usecase for e.g. { "rename" : { "list.1": "list.2"} } as such a rename wouldn't be accepted if list is actually a list given that either list.2 already exists or the index is out of bounds for the existing list. If one really wants to replace an existing field, that field needs to be removed first through remove processor and then rename can be used.
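The atomicity requirement boils down to remove, then set, and restore the value if the set fails; roughly like this sketch against a plain map (the real processor operates on IngestDocument paths):
```java
// Sketch of the atomic rename logic against a plain map; illustrative, not the actual RenameProcessor.
import java.util.Map;

class RenameSketch {
    static void rename(Map<String, Object> document, String field, String newField) {
        if (document.containsKey(field) == false) {
            throw new IllegalArgumentException("field [" + field + "] doesn't exist");
        }
        if (document.containsKey(newField)) {
            throw new IllegalArgumentException("field [" + newField + "] already exists");
        }
        Object value = document.remove(field);
        try {
            document.put(newField, value);
        } catch (Exception e) {
            // restore the removed value so the document is not left in an inconsistent state
            document.put(field, value);
            throw e;
        }
    }
}
```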
1) It no longer extends from Closeable.
2) Removed the config directory setter. Implementations that relied on it now get the location of the config dir via their constructors.
Validation is now done as part of the distance setter method and tested in GeoDistanceQueryBuilderTests. Fixed GeoDistanceTests to adapt to the new validation.
Closes#15135
When reading, through #getFieldValue and #hasField, and a list is encountered, the next element in the path is treated as the index of the item that the path points to (e.g. `list.0.key`). If the index is not a number or out of bounds, an exception gets thrown.
Added #appendFieldValue method that has the same behaviour as setFieldValue, but when a list is the last element in the path, instead of replacing the whole list it will simply add a new element to the existing list. This method is currently unused, we have to decide whether the set processor or a new processor should use it.
A few other changes made:
- Renamed hasFieldValue to hasField, as this method is not really about values but only about keys. It will return true if a key is there but its value is null, and false only when a field is not there at all.
- Changed null semantics in getFieldValue. null gets returned only when it was an actual value in the source; an exception is thrown when trying to access a non-existing field, so that null != field not present.
- Made remove stricter about non-existing fields. It throws an error when trying to remove a non-existing field. This is more consistent with the other methods in IngestDocument, which are strict about fields that are not present.
Relates to #14324
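To make the list handling concrete, a simplified version of the path resolution might look like the following (plain maps and lists; not the actual IngestDocument code):
```java
// Simplified sketch of dotted-path resolution with numeric list indices, e.g. "list.0.key".
// Not the actual IngestDocument implementation.
import java.util.List;
import java.util.Map;

class FieldPathSketch {
    @SuppressWarnings("unchecked")
    static Object getFieldValue(Map<String, Object> source, String path) {
        Object current = source;
        for (String element : path.split("\\.")) {
            if (current instanceof Map) {
                Map<String, Object> map = (Map<String, Object>) current;
                if (map.containsKey(element) == false) {
                    throw new IllegalArgumentException("field [" + element + "] not present");
                }
                current = map.get(element);
            } else if (current instanceof List) {
                List<Object> list = (List<Object>) current;
                int index;
                try {
                    index = Integer.parseInt(element);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("[" + element + "] is not an integer index", e);
                }
                if (index < 0 || index >= list.size()) {
                    throw new IllegalArgumentException("[" + index + "] is out of bounds for the list");
                }
                current = list.get(index);
            } else {
                throw new IllegalArgumentException("cannot resolve [" + element + "] as part of path [" + path + "]");
            }
        }
        return current;
    }
}
```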
Do not load fields from _source when using the `fields` option.
Non-stored (non-existing) fields are ignored by the fields visitor when using the `fields` option.
Fixes#10783
Support * wildcard to retrieve stored fields when using the `fields` option.
Supported pattern styles are "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
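These are simple prefix, suffix, contains, and prefix-plus-suffix matches; a small matcher covering them could look like this (illustrative only, not the actual ES implementation):
```java
// Illustrative matcher for the supported patterns: "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
// Not the actual ES implementation, which has its own simple-match utilities.
class SimpleWildcardSketch {
    static boolean matches(String pattern, String fieldName) {
        if (pattern.length() >= 2 && pattern.startsWith("*") && pattern.endsWith("*")) {
            return fieldName.contains(pattern.substring(1, pattern.length() - 1)); // "*xxx*"
        }
        int star = pattern.indexOf('*');
        if (star == -1) {
            return pattern.equals(fieldName); // no wildcard at all
        }
        String prefix = pattern.substring(0, star);
        String suffix = pattern.substring(star + 1);
        return fieldName.length() >= prefix.length() + suffix.length()
                && fieldName.startsWith(prefix)
                && fieldName.endsWith(suffix); // "xxx*", "*xxx" and "xxx*yyy"
    }

    public static void main(String[] args) {
        System.out.println(matches("user.*", "user.name")); // true
        System.out.println(matches("*_id", "parent_id"));   // true
        System.out.println(matches("a*z", "abcz"));          // true
        System.out.println(matches("a*z", "abc"));           // false
    }
}
```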
It's enough to test the content type for what we are testing.
Currently tests are flaky if the charset is detected as e.g. windows-1252 vs iso-8859-1 and so on.
In fact, they fail on windows 100% of the time.
We are not trying to test charset detection heuristics (which might be different even due to newlines in tests or other things).
If we want to do test that, we should test it separately.
When importing dangling indices on a single node that is both data and master eligible, the async dangling index
call can still be in-flight when the cluster is checked for green / yellow. Adding a dedicated master node
and a data-only node that does the importing fixes this issue, just like we do in OldIndexBackwardsCompatibilityIT.
Client in PipelineStore gets provided via a Guice provider
Processor and Factory throw Exception instead of IOException
Replaced PipelineExecutionService.Listener with ActionListener
* Moved PipelineReference to a top level class and named it PipelineDefinition
* Pulled some logic from the crud transport classes to the PipelineStore
* Use IOUtils#close(...) where appropriate
Move ParsedSimulateRequest to SimulatePipelineRequest and remove Parser class in favor of static parse methods.
Simplified execute methods in SimulateExecutionService.
This moves the registration of field mappers from the index level to the node
level and also ensures that mappers coming from plugins are treated no
differently from core mappers.
* Forbid System.setProperties & co in forbidden APIs.
* Ban property write access at runtime with security manager.
Plugins that need to modify system properties will need to request permission in their plugin-security.policy
This makes AvgTests use a mock plugin engine. I also removed the
textScriptExplicit* methods from the base class since they only make sense for
a groovy script, not a mock script.
Bug introduced in #13779: we no longer filter credentials because we were filtering `cloud.azure.storage.account` and `cloud.azure.storage.key` but now credentials are like `cloud.azure.storage.XXX.account` and `cloud.azure.storage.XXX.key` where `XXX` can be a storage setting id.
Closes#14843.
At the time of geo_shape query conception, CONTAINS was not yet a supported spatial operation in Lucene. Since it is now available this commit adds ShapeRelation.CONTAINS to GeoShapeQuery. Randomized testing is included and documentation is updated.
We actually want to keep the test for using the deprecated setting in 3.0.
We will keep this setting deprecated as well so people will be able to update in a smoother way.
Also add the deprecation information to the migration documentation.
Follow up for #13228.
This commit adds support for a secondary storage account:
```yml
cloud:
    azure:
        storage:
            my_account1:
                account: your_azure_storage_account1
                key: your_azure_storage_key1
                default: true
            my_account2:
                account: your_azure_storage_account2
                key: your_azure_storage_key2
```
When creating a repository, you can choose which azure account you want to use for it:
```sh
curl -XPUT localhost:9200/_snapshot/my_backup1?pretty -d '{
    "type": "azure"
}'

curl -XPUT localhost:9200/_snapshot/my_backup2?pretty -d '{
    "type": "azure",
    "settings": {
        "account" : "my_account2",
        "location_mode": "secondary_only"
    }
}'
```
`location_mode` supports `primary_only` or `secondary_only`. Defaults to `primary_only`. Note that if you set it
to `secondary_only`, it will force `read_only` to true.
DateHistogramTests had some dependency on groovy scripts and
were moved to the lang-groovy module. This PR moves them back
and replaces the use of groovy scripts with a mock script engine.
Removed three test cases that were testing date
manipulation using scripts, since these are more groovy script
tests than tests of the DateHistogram aggregation.
Only MutateProcessor implemented equals/hashcode, hence we would only use that one in our tests, since they relied on them. Better to not rely on equals/hashcode: drop them and mock processor/pipeline in the tests that need them. That also allows making the MutateProcessor constructor package private like the other processors.
Removed equals and hashcode wherever they wouldn't be reliable because of exception comparison. At the end of the day we use them for testing, and we can simplify our tests without requiring equals and hashcode in prod code, which would also require more tests if maintained.
Add equals/hashcode tests for Data/TransportData and randomize existing serialization tests
This fixes an issue where if the field for the aggregation was unmapped the extended bounds would get dropped and the resulting buckets would not cover the extended bounds requested.
Closes#14735
This commit removes all noreleases and cuts over to the Lucene 5.4 GeoPointField type. Included are randomized testing updates to unit and integration test suites for ensuring full backward compatibility with existing geo_point indexes.
Remove code duplications from ConfigurationUtils
Make sure that the mutate processor doesn't use Tuple as that would require depending on core.
Also make sure that the MutateProcessor tests don't end up testing the factory as well.
Make processor getters package private as they are only needed in tests.
Add new tests to MutateProcessorFactoryTests
Transitive dependencies can be confusing and hard to deal with when
conflicts arise between them. This change removes transitive
dependencies from elasticsearch, and forces any dependency conflicts to
be resolved manually, instead of automatically by gradle.
closes#14627
`AbstractLegacyBlobContainer` was kept for historical reasons (see #13434).
We can migrate Azure and S3 repositories to use the new methods added in #13434 so we can remove `AbstractLegacyBlobContainer` class.
Some dependencies must be specified in a couple places in the build.
e.g. randomized runner is specified both in buildSrc (for the gradle
wrapper plugin), as well as in the test-framework.
This change creates buildSrc/versions.properties which acts similar to
the set of shared version properties we used to have in the maven parent
pom.
geoip: add a `fields` option to control what fields are added by geoip processor
geoip: instead of adding all fields, only `country_code`, `city_name`, `location`, `continent_name` and `region_name` fields are added.
fix forbiddenapis
update
clean up and add rest test
update mutate factory to use configuration utilities
compile gsub pattern
cleanup, update parseBooleans, null tests
The documentation says we support EPUB, but the parser is not enabled.
This parser does not require any external dependencies, so I think it's ok?
Separately, test-framework drags in an ancient commons-codec (via httpclient), which gradle
"upgrades", but IDEs can't handle this case and just hit jar hell. So just wire that to 1.9;
this allows running tests in the IDE for this plugin.
The plugin name currently defaults to the gradle project name. But the
gradle project name for standalone repo (like an external plugin would
be) defaults to the directory name of the repo. This is trappy, since it
depends on how the repo was checked out.
This change enforces the plugin name is always set.
closes#14603
Random code shouldn't be listening on sockets elsewhere.
Today it's the wild west, but we only need to grant access to what the user configured.
This means e.g. multicast plugin has to declare its intentions in its security.policy
Closes#14549
When GroovySecurityTests are run before any other test using
time zones, joda ZoneInfoProvider fails to load the time zones
correctly and never tries again later. This makes sure we load it
correctly on startup.
Relates to #14524
This was basically a resurrected form of the tests for the old sandbox.
We use it to check that groovy scripts have some degree of additional containment.
The other scripting plugins (javascript, python) already have this as a unit test;
it's much easier to debug any problems that way.
closes#14484
The current jar is over 3 years old; we should upgrade it for bugfixes.
The current integration could be more secure: set a global policy and enforce additional (compile-time) checks.
closes#14466
In tests, processors can be created from their constructors instead of builders.
In the IngestModule, register instances instead of classes.
Also moved all processor classes into a subdirectory and introduced a
ConfigException class to be a catch-all for things that can go wrong
when constructing new processors with configurations that possibly throw
exceptions. The GrokProcessor loads patterns from the resources
directory.
fix resource path issue, and add rest-api-spec test for grok
fix rest-spec tests
changes: license, remove configexception, throw IOException
add more tests and fix iso8601-hour pattern
move grok patterns from resources to config
fix tests with pom changes, updated IngestClientIT with grok processor
update gradle build script for grok deps and test configuration
move config files to src/main/packaging
move Env out of Processor, fix test for src/main/packaging change
add docs
clean up test resources task
update Grok to be immutable
- Updated the Grok class to be immutable. This means that all the
pattern bank loading is handled by an external utility class called
PatternUtils.
- fixed tabs in the nagios patterns file's comments
Removes the mapping transform feature which when used made debugging very
difficult. Users should transform their documents on the way into
Elasticsearch rather than having Elasticsearch do it.
Closes#12674
This change moves all the analysis component registration to the node level
and removes the significant API overhead to register tokenfilter, tokenizer,
charfilter and analyzer. All registration is done without guice interaction such
that real factories via functional interfaces are passed instead of class objects
that are instantiated at runtime.
This change also hides the internal analyzer caching that was done previously in the
IndicesAnalysisService entirely and decouples all analysis registration and creation
from dependency injection.
This change removes the leftover pom files. A couple files were left for
reference, namely in qa tests that have not yet been migrated (vagrant
and multinode). The deb and rpm assemblies also still exist for
reference when finishing their setup in gradle.
See #13930
Closes#14353
Squashed commit of the following:
commit edae0729f71ea3d3f9fa9c0d27c9effc042eb5a9
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 29 14:13:42 2015 -0400
update sha1 and simplify test
commit 635c4f245d66ad353a16267c810e02b725553fad
Author: Robert Muir <rmuir@apache.org>
Date: Thu Oct 29 07:01:26 2015 -0400
Add threadgroup isolation.
Code with `modifyThread` and `modifyThreadGroup` may only modify
its own threadgroup (or an ancestor of that). This enforces
what is intended by the ThreadGroup class.
This has two immediate implications:
1. Code without these permissions (scripts) may not create or mess with threads
2. ES application threads cannot mess with Java system threads
ES puts all application threads in one single group today, but in the future
this can be organized better, and we will have more isolation in the system.
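The enforced rule can be sketched with plain JDK classes (illustrative; the real enforcement happens in Elasticsearch's SecurityManager): a thread may only modify threads whose group is its own group or a descendant of it.
```java
// Illustrative sketch of the threadgroup isolation rule; the actual check is performed by
// Elasticsearch's SecurityManager. This only demonstrates the parentOf() relationship used.
class ThreadGroupIsolationSketch {
    static void checkThreadAccess(Thread target) {
        ThreadGroup callerGroup = Thread.currentThread().getThreadGroup();
        ThreadGroup targetGroup = target.getThreadGroup();
        // parentOf(g) is true if callerGroup is g itself or one of g's ancestors,
        // i.e. the target thread must live in the caller's group or below it.
        if (targetGroup != null && callerGroup.parentOf(targetGroup) == false) {
            throw new SecurityException("thread [" + target.getName() + "] is outside the caller's thread group hierarchy");
        }
    }

    public static void main(String[] args) {
        checkThreadAccess(Thread.currentThread()); // ok: same group
    }
}
```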
Similarly to what we did with the search api, we can now also move query parsing to the coordinating node for the explain api. Given that the explain api is a single shard operation (compared to search, which is instead a broadcast operation), this doesn't change a lot in how the api works internally. The main benefit is that we can simplify the java api by requiring a structured query object to be provided rather than a bytes array that will get parsed on the data node. Previously, if you specified a QueryBuilder it would be serialized in json format and would get reparsed on the data node, while now it doesn't go through parsing anymore (as expected), given that after the query-refactoring we are able to properly stream queries natively.
Closes#14270
We have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.
With the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the latter. It ends up working properly given that we have a registered (deprecated) query called "query", which used to allow wrapping a filter into a query, but this has the following downsides:
1) prevents us from removing the deprecated "query" query
2) we end up supporting a top level query that is not wrapped within a query element (pre 1.0 syntax iirc that shouldn't be supported anymore)
This commit finally removes the "query" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format; those have been fixed too.
Closes#13326 Closes#14304
This commit brings all the registration etc. from IndexCacheModule into
IndexModule. As a side-effect to remove a circular dependency between
IndicesService and IndicesWarmer this commit also cleans up IndicesWarmer and
separates the Engine from the warmer.