This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes real use of it;
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster, as we'll now be able to pass needsScores=false to Query.createWeight.
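A minimal sketch of the idea (the interface and class names below are illustrative stand-ins, not the exact Elasticsearch API): the expression engine answers precisely from the variables its compiled expression references, while other engines conservatively return `true`.
```java
import java.util.Set;

// Illustrative stand-ins, not the real Elasticsearch script interfaces.
interface ScoreAwareScript {
    /** Return false only when the engine can prove the script never reads the score. */
    boolean needsScores();
}

final class ExpressionScript implements ScoreAwareScript {
    private final Set<String> usedVariables;

    ExpressionScript(Set<String> usedVariables) {
        this.usedVariables = usedVariables;
    }

    @Override
    public boolean needsScores() {
        // The expression engine knows every variable the compiled expression
        // references, so it can answer precisely.
        return usedVariables.contains("_score");
    }
}

final class OpaqueScript implements ScoreAwareScript {
    @Override
    public boolean needsScores() {
        // Other engines cannot predict score usage, so they answer conservatively.
        return true;
    }
}
```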
Most of the abstract base test classes we have were previously @Ignored.
However, there were also some other tests ignored. Having two ways to
quiet tests is confusing, and clearly it has caused some tests
to get lost in the shuffle.
This change moves all base test classes to use the "TestCase" suffix,
which is not picked up by the test class name pattern. It also removes
@Ignore from (almost) all tests, and adds it to forbidden apis.
And since we were renaming, I shortened base test class names to use
"ES" instead of "Elasticsearch". I type this a lot of times a day,
and I have heard others express a similar desire for a shorter name.
closes #10659
Now that integ tests are moved into `mvn verify`, we don't really have
a need for @Slow, and especially not @Integration. This removes
uses of the former, and completely removes the latter.
Closes #12367
Squashed commit of the following:
commit 9453c411798121aa5439c52e95301f60a022ba5f
Merge: 3511a9c 828d8c7
Author: Robert Muir <rmuir@apache.org>
Date: Wed Jul 22 08:22:41 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit 3511a9c616503c447de9f0df9b4e9db3e22abd58
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:50:15 2015 -0700
Remove duplicated constant
commit 4a9b5b4621b0ef2e74c1e017d9c8cf624dd27713
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:01:57 2015 -0700
Add check that plugin must specify at least site or jvm
commit 19aef2f0596153a549ef4b7f4483694de41e101b
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:52:58 2015 -0700
Change plugin "plugin" property to "classname"
commit 07ae396f30ed592b7499a086adca72d3f327fe4c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:36:05 2015 -0400
remove test with no methods
commit 550e73bf3d0f94562f4dde95239409dc5a24ce25
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:31:58 2015 -0400
fix loading to use classname
commit 04463aed12046da0da5cac2a24c3ace51a79f799
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:24:19 2015 -0400
rename to classname
commit 9f3afadd1caf89448c2eb913757036da48758b2d
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:18:46 2015 -0700
moved PluginInfo and refactored parsing from properties file
commit df63ccc1b8b7cc64d3e59d23f6c8e827825eba87
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:08:26 2015 -0400
fix test
commit c7febd844be358707823186a8c7a2d21e37540c9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:03:44 2015 -0400
remove test
commit 017b3410cf9d2b7fca1b8653e6f1ebe2f2519257
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:58:31 2015 -0400
fix test
commit c9922938df48041ad43bbb3ed6746f71bc846629
Merge: ad59af4 01ea89a
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:37:28 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit ad59af465e1f1ac58897e63e0c25fcce641148a7
Author: Areek Zillur <areek.zillur@elasticsearch.com>
Date: Tue Jul 21 19:30:26 2015 -0400
[TEST] Verify expected number of nodes in cluster before issuing shardStores request
commit f0f5a1e087255215b93656550fbc6bd89b8b3205
Author: Lee Hinman <lee@writequit.org>
Date: Tue Jul 21 11:27:28 2015 -0600
Ignore EngineClosedException during translog fsync
When performing an operation on a primary, the state is captured and the
operation is performed on the primary shard. The original request is
then modified to increment the version of the operation as preparation
for it to be sent to the replicas.
If the request first fails on the primary during the translog sync
(because the Engine is already closed due to shadow primaries closing
the engine on relocation), then the operation is retried on the new primary
after being modified for the replica shards. It will then fail due to the
version being incorrect (the document does not yet exist but the request
expects a version of "1").
Order of operations:
- Request is executed against primary
- Request is modified (version incremented) so it can be sent to replicas
- Engine's translog is fsync'd if necessary (failing, and throwing an exception)
- Modified request is retried against new primary
This change ignores the exception where the engine is already closed
when syncing the translog (similar to how we ignore exceptions when
refreshing the shard if the ?refresh=true flag is used).
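A self-contained sketch of the behavior described above, where `Engine` and `EngineClosedException` are simplified stand-ins for the real classes; this is not the actual TransportReplicationAction code.
```java
// Simplified stand-ins for the real Elasticsearch classes.
class EngineClosedException extends RuntimeException {
}

interface Engine {
    void fsyncTranslog(); // stand-in for the translog sync hook
}

final class TranslogSync {
    /** Sync the translog, treating an already-closed engine as a no-op. */
    static void syncIgnoringClosedEngine(Engine engine) {
        try {
            engine.fsyncTranslog();
        } catch (EngineClosedException e) {
            // The engine was closed underneath us (e.g. a shadow primary closing
            // its engine on relocation). Swallow the exception, just like a failed
            // best-effort ?refresh=true is swallowed, so the already
            // version-incremented request is not retried against the new primary.
        }
    }
}
```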
commit 4ac68bb1658688550ced0c4f479dee6d8b617777
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 22:37:29 2015 +0200
Replica allocator unit tests
First batch of unit tests to verify the behavior of replica allocator
commit 94609fc5943c8d85adc751b553847ab4cebe58a3
Author: Jason Tedor <jason@tedor.me>
Date: Tue Jul 21 14:04:46 2015 -0400
Correctly list blobs in Azure storage to prevent snapshot corruption and do not unnecessarily duplicate Lucene segments in Azure Storage
This commit addresses an issue that was leading to snapshot corruption for snapshots stored as blobs in Azure Storage.
The underlying issue is that when multiple snapshots of an index were taken and persisted into Azure Storage, snapshots subsequent
to the first would repeatedly overwrite the snapshot files. This rendered all snapshots except the final one useless.
The root cause is String concatenation involving null. In particular, to list all of the blobs in a snapshot directory in
Azure, the code would use the method listBlobsByPrefix with a null prefix. In the listBlobsByPrefix method, the path keyPath + prefix
is constructed. However, per 5.1.11, 5.4 and 15.18.1 of the Java Language Specification, the null reference is first converted to the string
"null" before the concatenation is performed. This leads to no blobs being returned, and therefore the snapshot mechanism would operate as if
it were writing the first snapshot of the index. The fix is simply to check if prefix is null and handle the concatenation accordingly.
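A self-contained illustration of the path-construction bug and its fix; the Azure listing plumbing is omitted and only the `keyPath + prefix` concatenation is shown.
```java
// Minimal demonstration of the null-concatenation pitfall described above.
final class BlobPathExample {

    static String buggyPath(String keyPath, String prefix) {
        // Per JLS 5.1.11 / 15.18.1, a null reference is converted to the string
        // "null" before concatenation, so a null prefix yields e.g. "indices/foo/null".
        return keyPath + prefix;
    }

    static String fixedPath(String keyPath, String prefix) {
        // Only append the prefix when it is actually present.
        return prefix == null ? keyPath : keyPath + prefix;
    }

    public static void main(String[] args) {
        System.out.println(buggyPath("indices/foo/", null)); // indices/foo/null -> matches no blobs
        System.out.println(fixedPath("indices/foo/", null)); // indices/foo/     -> lists all blobs
    }
}
```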
Upon fixing this issue so that subsequent snapshots no longer overwrite earlier snapshots, it was discovered that the snapshot metadata
returned by the listBlobsByPrefix method was not sufficient for the snapshot layer to detect whether the Lucene segments had already
been copied to the Azure storage layer in an earlier snapshot. This led the snapshot layer to unnecessarily duplicate these Lucene segments
in Azure Storage.
The root cause here is known behavior of the CloudBlobContainer.getBlockBlobReference method in the Azure API: it does not fetch blob
attributes from Azure. As such, the lengths of all the blobs appeared to the snapshot layer to be zero, and therefore they would compare
as not equal to any new blobs that the snapshot layer is going to persist. To remediate this, the method CloudBlockBlob.downloadAttributes
must be invoked. This will fetch the attributes from Azure Storage so that a proper comparison of the blobs can be performed.
Closes elastic/elasticsearch-cloud-azure#51, closes elastic/elasticsearch-cloud-azure#99
commit cf1d481ce5dda0a45805e42f3b2e0e1e5d028b9e
Author: Lee Hinman <lee@writequit.org>
Date: Mon Jul 20 08:41:55 2015 -0600
Unit tests for `nodesAndVersions` on shared filesystems
With the `recover_on_any_node` setting, these unit tests check that the
correct node list and versions are returned.
commit 3c27cc32395c3624f7c794904d9ea4faf2eccbfb
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 14:15:59 2015 -0400
don't fail junit4 integration tests if there are no tests.
instead fail the failsafe plugin, which means the external cluster will still get shut down
commit 95d2756c5a8c21a157fa844273fc83dfa3c00aea
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 17:16:53 2015 +0200
Testing: Fix help displaying tests under windows
The help files use a unix-based line separator, whereas
the test relies on the platform's line separator.
This commit fixes the test to remove all `\r` characters before
comparing strings.
The test has also been moved into its own CliToolTestCase, as it does
not need to be an integration test.
commit 944f06ea36bd836f007f8eaade8f571d6140aad9
Author: Clinton Gormley <clint@traveljury.com>
Date: Tue Jul 21 18:04:52 2015 +0200
Refactored check_license_and_sha.pl to accept a license dir and package path
In preparation for the move to building the core zip, tar.gz, rpm, and deb as separate modules, refactored check_license_and_sha.pl to:
* accept a license dir and path to the package to check on the command line
* to be able to extract zip, tar.gz, deb, and rpm
* all packages except rpm will work on Windows
commit 2585431e8dfa5c82a2cc5b304cd03eee9bed7a4c
Author: Chris Earle <pickypg@users.noreply.github.com>
Date: Tue Jul 21 08:35:28 2015 -0700
Updating breaking changes
- field names cannot be mapped with `.` in them
- fixed asciidoc issue where the list was not recognized as a list
commit de299b9d3f4615b12e2226a1e2eff5a38ecaf15f
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 13:27:52 2015 +0200
Replace primaryPostAllocated flag and use UnassignedInfo
There is no need to maintain additional state on the index routing table as to whether a primary was allocated post API creation; we already hold all this information in the UnassignedInfo class.
closes #12374
commit 43080bff40f60bedce5bdbc92df302f73aeb9cae
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:45:05 2015 +0200
PluginManager: Fix bin/plugin calls in scripts/bats test
The release and smoke test python scripts used to install
plugins the old way.
Also the BATS testing suite installed/removed plugins in that
way. Here the marvel tests have been removed, as marvel currently
does not work with the master branch.
In addition, documentation has been updated where it was
still missing.
commit b81ccba48993bc13c7678e6d979fd96998499233
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 11:37:50 2015 +0200
Discovery: make sure NodeJoinController.ElectionCallback is always called from the update cluster state thread
This is important for correct handling of the joining thread. This causes assertions to trip in our test runs. See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11653/ as an example
Closes #12372
commit 331853790bf29e34fb248ebc4c1ba585b44f5cab
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 15:54:36 2015 +0200
Remove leftover nocommit from TransportReplicationAction
It asks to double-check thread pool rejection. I did, and I don't see problems with it.
commit e5724931bbc1603e37faa977af4235507f4811f5
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:31:57 2015 +0200
CliTool: Various PluginManager fixes
The new plugin manager parser was not called correctly in the scripts.
In addition the plugin manager now creates a plugins/ directory in case
it does not exist.
Also the integration tests called the plugin manager in the deprecated way.
commit 7a815a370f83ff12ffb12717ac2fe62571311279
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 13:54:18 2015 +0200
CLITool: Port PluginManager to use CLITool
In order to unify the handling and reuse the CLITool infrastructure
the plugin manager should make use of this as well.
This obsoletes the -i and --install options but requires the user
to use `install` as the first argument of the CLI.
This is basically just a port of the existing functionality, which
is also the reason why this is not a refactoring of the plugin manager,
which will come in a separate commit.
commit 7f171eba7b71ac5682a355684b6da703ffbfccc7
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Tue Jul 21 10:44:21 2015 +0200
Remove custom execute-local logic in TransportSingleShardAction and TransportInstanceSingleOperationAction and rely on the transport service to execute locally (forking a thread etc.).
Change TransportInstanceSingleOperationAction to have a shardActionHandler too, so we can execute locally without endless spinning.
commit 0f38e3eca6b570f74b552e70b4673f47934442e1
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 17:36:12 2015 -0700
More readMetadata tests and pickiness
commit 880b47281bd69bd37807e8252934321b089c9f8e
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 14:42:09 2015 -0700
Started unit tests for plugin service
commit cd7c8ddd7b8c4f3457824b493bffb19c156c7899
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 07:21:07 2015 -0400
fix tests
commit 673454f0b14f072f66ed70e32110fae4f7aad642
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 06:58:25 2015 -0400
refactor pluginservice
failsafe uses surefire, which sucks. It also means integ tests act alien right now.
I would rather have the consistency, e.g. things formatted the same way, running integ tests under the security manager, etc.
1. tests don't have a bogus test dependency on zips anymore,
instead we handle this in pre-integration-test. This reduces
lots of confusion for e.g. `mvn clean test`.
2. refactor integ logic so that core/ and plugin/ share it.
previously they were duplicates but the above change simplifies life.
it also makes it easier to do more interesting stuff
Today we have an intermediate hierarchy for shard and index exceptions
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`
Modified ScriptEngineService to pass in a CompiledScript object
with newly added name and type member variables.
This can in turn be used to give better scripting error messages
with the type of script used and the name of the script.
Required slight modifications to the caching mechanism.
Note that this does not enforce good behavior: to be effective, plugins
will have to write exceptions that also output the name of the script.
There was no way to properly wrap the script methods in a try/catch
block further up the chain, because many calls hand back script-like
objects that can be run at a later time.
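A rough sketch of the shape described above; the fields and accessors are illustrative rather than the exact Elasticsearch class.
```java
// Illustrative CompiledScript carrying name and type for better error messages.
final class CompiledScript {
    private final String type;     // e.g. "inline", "file", "indexed"
    private final String name;     // script name or source identifier
    private final String lang;     // script language, e.g. "groovy"
    private final Object compiled; // engine-specific compiled representation

    CompiledScript(String type, String name, String lang, Object compiled) {
        this.type = type;
        this.name = name;
        this.lang = lang;
        this.compiled = compiled;
    }

    String type() { return type; }
    String name() { return name; }
    String lang() { return lang; }
    Object compiled() { return compiled; }

    @Override
    public String toString() {
        // Lets error messages reference the script, e.g. "[groovy] script [my_script] of type [inline]".
        return "[" + lang + "] script [" + name + "] of type [" + type + "]";
    }
}
```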
closes #6653, closes #11449
Changes in a nutshell:
* All expression logic is now encapsulated by ExpressionResolver interface.
* MetaData#convertFromWildcards() gets replaced by WildcardExpressionResolver.
* All of the indices expansion methods are being moved from MetaData class to the new IndexNameExpressionResolver class.
* All single index expansion optimisations are removed.
The logic for resolving a concrete index name from an expression has been moved from MetaData to IndexNameExpressionResolver. The logic has been cleaned up and simplified where possible without breaking bwc.
Also the notion of aliasOrIndex has been changed to index expression.
The IndexNameExpressionResolver translates index name expressions into concrete indices. The list of index name expressions is first delegated to the known ExpressionResolver implementations. An ExpressionResolver is responsible for translating, if possible, an expression into another expression (possibly, but not necessarily, concrete indices or aliases); otherwise the expressions are left untouched. Concretely this means converting wildcard expressions into concrete indices or aliases, but in the future other implementations could convert expressions based on different rules.
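An illustrative sketch of that resolver chain; the method names are simplified and do not mirror the exact Elasticsearch signatures.
```java
import java.util.List;

// A resolver either translates the expressions it understands or passes them on.
interface ExpressionResolver {
    /**
     * Translate the given expressions into other expressions (possibly concrete
     * indices or aliases) if this resolver understands them; otherwise return
     * them untouched so the next resolver in the chain can handle them.
     */
    List<String> resolve(List<String> expressions);
}

final class IndexNameExpressionResolver {
    private final List<ExpressionResolver> resolvers;

    IndexNameExpressionResolver(List<ExpressionResolver> resolvers) {
        this.resolvers = resolvers;
    }

    /** Run every known resolver over the expressions, e.g. expanding wildcards. */
    List<String> concreteIndices(List<String> expressions) {
        List<String> current = expressions;
        for (ExpressionResolver resolver : resolvers) {
            current = resolver.resolve(current);
        }
        return current;
    }
}
```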
To avoid many method overloads, DocumentRequest now extends IndicesRequest. All implementations of DocumentRequest already implemented IndicesRequest indirectly.
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usage of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch but to simplify things I wanna
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we created a 2.0 branch.
Closes #12148
Reported in https://github.com/elastic/elasticsearch/issues/11647#issuecomment-118523861
> btw, I think you broke some plugins on Master, cloud-aws doesn't register s3 repos anymore.
```
org.elasticsearch.common.inject.CreationException: Guice creation errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: org/apache/http/protocol/HttpContext
at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)
while locating org.elasticsearch.repositories.s3.S3Repository
while locating org.elasticsearch.repositories.Repository
1 error
at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:178)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:140)
at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:404)
at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:368)
at org.elasticsearch.repositories.RepositoriesService.access$100(RepositoriesService.java:55)
at org.elasticsearch.repositories.RepositoriesService$1.execute(RepositoriesService.java:110)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:378)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:209)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/apache/http/protocol/HttpContext
at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:129)
at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:432)
at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:414)
at org.elasticsearch.cloud.aws.InternalAwsS3Service.getClient(InternalAwsS3Service.java:153)
at org.elasticsearch.cloud.aws.InternalAwsS3Service.client(InternalAwsS3Service.java:82)
at org.elasticsearch.repositories.s3.S3Repository.<init>(S3Repository.java:125)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:54)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:865)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:858)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
... 13 more
Caused by: java.lang.ClassNotFoundException: org.apache.http.protocol.HttpContext
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 37 more
```
Closes #12034.
We can keep only unit tests in plugins instead of starting a local node each time and running tests against it.
Also a follow-up of #12091.
Tests can use more than one JVM.
We no longer need to set the number of JVMs used to run tests, as we moved the ITs to REST tests.
This applies to all plugins except the cloud plugins, which will require another way of running integration tests.
Follow up for https://github.com/elastic/elasticsearch-analysis-kuromoji/issues/61
We no longer shade elasticsearch dependencies, so plugins might include jars in the distribution ZIP file which are not needed anymore.
For example, `elasticsearch-cloud-aws` comes with:
```
Archive: cloud-aws/target/releases/elasticsearch-cloud-aws-2.0.0-SNAPSHOT.zip
Length Date Time Name
-------- ---- ---- ----
1920788 05-18-15 09:42 aws-java-sdk-ec2-1.9.34.jar
503963 05-18-15 09:42 aws-java-sdk-core-1.9.34.jar
232771 01-19-15 09:24 commons-codec-1.6.jar
915096 01-19-15 09:24 jackson-databind-2.3.2.jar
252288 05-18-15 09:42 aws-java-sdk-kms-1.9.34.jar
62050 01-19-15 09:24 commons-logging-1.1.3.jar
282269 10-31-14 13:19 httpcore-4.3.2.jar
35058 01-19-15 09:24 jackson-annotations-2.3.0.jar
229998 05-29-15 12:28 jackson-core-2.5.3.jar
589289 01-19-15 09:24 joda-time-2.7.jar
562858 05-18-15 09:42 aws-java-sdk-s3-1.9.34.jar
590533 10-31-14 13:19 httpclient-4.3.5.jar
44854 06-12-15 19:22 elasticsearch-cloud-aws-2.0.0-SNAPSHOT.jar
-------- -------
6221815 13 files
```
A lot of those files are already distributed with elasticsearch itself, so the classes are available within the classloader.
We mark all es core dependencies as provided in plugins.
We also remove `groupId` as it is already defined in the parent pom.
And we remove unneeded license files, as some jars are not included in plugins anymore.
Closes #11647.
The change makes rest-spec-api a project in the same way as we build dev-tools. It packages the tests and API in a bundle using the maven-remote-resources-plugin, and uses the same plugin in the plugins and core poms to unpack the rest-api-spec into the target directory and reference the REST tests there in the test resources.
The main stimulus for this change is that the current build does not work for those using Eclipse. After running `mvn eclipse:eclipse`, the Eclipse IDE reports errors because the rest-api-spec is outside of the project scope, meaning that every time the command is run (required whenever any dependencies change), the classpath of all the projects has to be manually fixed.
With a recent change in core, we no longer support non-explicit byte size units when setting values.
This commit fixes that in the azure repository. Otherwise, tests can't be executed.
The deleted counter is incremented even if the document is missing. Also, this commit ensures that the scroll id is cleared even if no documents are found by the scan request.
this is really just a workaround for plugins to run their own
REST tests instead of the core ones. It opts out of the rest test
loading from the core jar file and tries to load from the classpath instead.
Eventually we need to fix this infrastructure to move away from parameterized
tests such that subclasses can override behavior.
Closes #11721
The delete by query plugin adds support for deleting all of the documents (from one or more indices) which match the specified query. It is a replacement for the problematic delete-by-query functionality which has been removed from Elasticsearch core in 2.0. Internally, it uses the Scan/Scroll and Bulk APIs to delete documents in an efficient and safe manner. It is slower than the old delete-by-query functionality, but fixes the problems with the previous implementation.
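For illustration, a hedged sketch of the general scroll-plus-bulk pattern the plugin relies on, written against the 2.x Java client; the real plugin additionally handles versioning, retries, and failures, so this is not its actual code.
```java
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.SearchHit;

// Simplified scroll + bulk delete loop, not the plugin's implementation.
final class DeleteByQuerySketch {

    static long deleteByQuery(Client client, String index, QueryBuilder query) {
        long deleted = 0;
        SearchResponse resp = client.prepareSearch(index)
                .setQuery(query)
                .setSize(500)
                .setScroll(TimeValue.timeValueMinutes(1))
                .get();
        while (resp.getHits().getHits().length > 0) {
            // Delete the current batch of matching documents in one bulk request.
            BulkRequestBuilder bulk = client.prepareBulk();
            for (SearchHit hit : resp.getHits().getHits()) {
                bulk.add(client.prepareDelete(hit.getIndex(), hit.getType(), hit.getId()));
            }
            bulk.get();
            deleted += resp.getHits().getHits().length;
            // Fetch the next batch.
            resp = client.prepareSearchScroll(resp.getScrollId())
                    .setScroll(TimeValue.timeValueMinutes(1))
                    .get();
        }
        // Always clear the scroll, even when nothing matched.
        client.prepareClearScroll().addScrollId(resp.getScrollId()).get();
        return deleted;
    }
}
```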
Closes #7052
[build] mark elasticsearch as provided in plugins
When we build a plugin, we assume it will be executed within the elasticsearch server.
So we should mark elasticsearch as `provided`.
If a Java developer needs to embed both the plugin and elasticsearch, it makes sense to declare both in their `pom.xml` file.
In the Maven parent project's dependency management we should only declare which versions of 3rd party jars we want to use, but not force any scope.
It then becomes more obvious in each module what exactly the scope of any dependency is.
For example, one could imagine importing `jimfs` as a `compile` dependency in another module/plugin with:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
</dependency>
```
But it won't work as expected: the default maven `scope` should be `compile`, but here it is `test`, as defined in the parent project.
So, if you want to use this lib for tests, you should simply define:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
<scope>test</scope>
</dependency>
```
We also remove `maven-s3-wagon` from gce plugin as it's not used.
The script APIs were deprecated long ago; we can now remove them.
This commit still keeps the parsing code, since it might be used in a
query that is still stuck in the transaction log. That issue should be
discussed elsewhere.
Closes #11619