* Clean up the generics around significant terms aggregation results
* Reduce code duplicated between `SignificantLongTerms` and
`SignificantStringTerms` by creating `InternalMappedSignificantTerms`
and moving common things there where possible.
* Migrate to `NamedWriteable`
* Line length fixes while I was there
This exposes a method to start an action and return a task from
`NodeClient`. This allows reindex to use the injected `Client` rather
than requiring `TransportAction`s to be injected.
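A rough sketch of what this enables, assuming the new `NodeClient` method is called `executeLocally` and takes the action, the request, and a listener (the method name and signature here are assumptions, not taken verbatim from the change):

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkAction;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.tasks.Task;

public class StartActionWithTask {
    // Start an action through the client and get back the Task that tracks it,
    // rather than injecting the TransportAction and calling execute on it directly.
    // The executeLocally name/signature is an assumption about this change.
    static Task startBulk(NodeClient client, BulkRequest request, ActionListener<BulkResponse> listener) {
        return client.executeLocally(BulkAction.INSTANCE, request, listener);
    }
}
```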
After #13834, many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests so they can be cleaned up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
This commit moves back some messy tests that had been placed in the lang-groovy module in https://github.com/elastic/elasticsearch/pull/13834. It removes the dependency on the Groovy plugin and changes the tests back to integration tests (IT suffix).
It also changes the current MockScriptEngine and MockScriptPlugin to make it easier to use.
This change activates the doc_values on the _size field for indices created after 5.0.0-alpha4.
It also adds a note in the breaking changes that explains the situation and how to get around it.
Closes #18334
Node IDs are currently randomly generated during node startup. That means they change every time the node is restarted. While this doesn't matter for ES proper, it makes it hard for external services to track nodes. Another, more minor, side effect is that indexing the output of, say, the node stats API results in creating new fields due to node ID being used as keys.
The first approach I considered was to use the node's published address as the base for the id. We already [treat nodes with the same address as the same](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java#L387) so this is a simple change (see [here](https://github.com/elastic/elasticsearch/compare/master...bleskes:node_persistent_id_based_on_address)). While this is simple and it works for probably most cases, it is not perfect. For example, if after a node restart, the node is not able to bind to the same port (because it's not yet freed by the OS), it will cause the node to still change identity. Also in environments where the host IP can change due to a host restart, identity will not be the same.
Due to those limitations, I opted to go with a different approach where the node id is persisted in the node's data folder. This has the upside of connecting the id to the node's data. It also means that the host can be adapted in any way (replace network cards, attach storage to a new VM).
It does however also have downsides - we now run the risk of two nodes having the same id if someone clones a data folder from one node to another. To mitigate this I changed the semantics of the protection against multiple nodes with the same address to be stricter - it will now reject the incoming join if a node exists with the same id but a different address. Note that if the existing node doesn't respond to pings (i.e., it's not alive) it will be removed and the new node will be accepted when it tries another join.
Last, and most importantly, this change requires that *all* nodes persist data to disk. This is a change from current behavior where only data & master nodes store local files. This is the main reason for marking this PR as breaking.
Other less important notes:
- DummyTransportAddress is removed as we need a unique network address per node. Use `LocalTransportAddress.buildUnique()` instead.
- I renamed `node.add_lid_to_custom_path` to `node.add_lock_id_to_custom_path` to avoid confusion with the node ID which is now part of the `NodeEnvironment` logic.
- I removed the `version` parameter from `MetaDataStateFormat#write`; it wasn't really used and was just in the way :)
- TribeNodes are special in the sense that they do start multiple sub-nodes (previously known as client nodes). Those sub-nodes do not store local files but derive their ID from the parent node id, so they are generated consistently.
Once all of these are migrated we'll be able to remove aggregation's
custom "streams" which function that same as NamedWriteable. It also
allows us to make most of the fields on aggregations final which is
rather nice.
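For reference, the `NamedWriteable` shape the aggregations are moving towards looks roughly like this (a simplified sketch, not one of the actual aggregation classes):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.NamedWriteable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Read in a constructor, write in writeTo, and identify the implementation by
// name instead of a custom "stream". Fields can be final because they are set
// in the reading constructor.
public class ExampleAggResult implements NamedWriteable {
    public static final String NAME = "example";

    private final long docCount;

    public ExampleAggResult(StreamInput in) throws IOException {
        docCount = in.readVLong();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeVLong(docCount);
    }

    @Override
    public String getWriteableName() {
        return NAME;
    }
}
```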
The internal representation of the object that JsonPath gives access to is a map. That is independent of the initial input format, which is json but could also be yaml etc.
This commit renames JsonPath to ObjectPath and adds a static method to create an ObjectPath from an XContent.
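In a test this might be used along these lines (a sketch; the factory method name, its argument types, and the package are assumptions based on the description above):

```java
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.test.rest.ObjectPath; // package is assumed here

public class ObjectPathExample {
    public static void main(String[] args) throws Exception {
        String body = "{\"nodes\":{\"node-0\":{\"version\":\"5.0.0\"}}}";
        // The internal representation is a map, so this works the same way
        // whether the original input was JSON or YAML.
        ObjectPath path = ObjectPath.createFromXContent(XContentType.JSON.xContent(), body);
        String version = path.evaluate("nodes.node-0.version");
        System.out.println(version);
    }
}
```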
We introduced a special response_body assertion to test our docs snippets. The match assertion does the same job though and can be reused and adapted where needed. ResponseBodyAssertion provides much better and more accurate errors though, which can now be utilized in MatchAssertion so that many more REST tests can benefit from readable error messages.
Each response body always gets stashed and can already be retrieved for later evaluations. Instead of providing the response body as strings that get parsed to json objects separately, then converted to maps as ResponseBodyAssertion did, we parse everything once; the json is part of the yaml test, which is supported. The only downside is that json comments cannot be used; yaml comments should be used instead (// C style vs #). There were only two docs tests that were using comments, in ingest-node.asciidoc, where I went ahead and removed the comments, which didn't seem that useful anyway.
As discussed at https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595, we know that the current `discovery-azure` plugin only works with Azure Classic VMs / Services (which are somewhat legacy now).
The proposal here is to rename `discovery-azure` to `discovery-azure-classic` in case some users are using it.
And deprecate it for 5.0.
Closes #19144.
This commit modifies TimeValue parsing to keep the input time unit. This
enables round-trip parsing from instances of String to instances of
TimeValue and vice-versa. With this, this commit removes support for the
unit "w" representing weeks, and also removes support for fractional
values of units (e.g., 0.5s).
Relates #19102
Instead of plugins calling `registerTokenizer` to extend the analyzer
they now instead have to implement `AnalysisPlugin` and override
`getTokenizer`. This lines up extending plugins with extending
scripts. This allows `AnalysisModule` to construct the `AnalysisRegistry`
immediately as part of its constructor which makes testing analysis
much simpler.
This also moves the default analysis configuration into `AnalysisModule`
which is how search is set up.
Like `ScriptModule`, `AnalysisModule` no longer extends `AbstractModule`.
Instead it is only responsible for building `AnalysisRegistry`. We still
bind `AnalysisRegistry` but we only do so in `Node`. This means it
is available at module construction time so we can slowly remove the need to
bind it in guice.
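A minimal sketch of what an analysis plugin looks like under this model (the getter is shown as `getTokenizers` returning a map of names to providers; the exact name and generics are simplified, and `MyTokenizerFactory` is a hypothetical factory):

```java
import java.util.Map;
import org.elasticsearch.index.analysis.TokenizerFactory;
import org.elasticsearch.indices.analysis.AnalysisModule.AnalysisProvider;
import org.elasticsearch.plugins.AnalysisPlugin;
import org.elasticsearch.plugins.Plugin;

import static java.util.Collections.singletonMap;

// Instead of calling registerTokenizer on a module, the plugin returns its
// tokenizers and AnalysisModule pulls them in while building the registry.
public class MyAnalysisPlugin extends Plugin implements AnalysisPlugin {
    @Override
    public Map<String, AnalysisProvider<TokenizerFactory>> getTokenizers() {
        // MyTokenizerFactory is a hypothetical TokenizerFactory implementation.
        return singletonMap("my_tokenizer",
                (indexSettings, environment, name, settings) ->
                        new MyTokenizerFactory(indexSettings, name, settings));
    }
}
```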
This commit reverts the slow tests heartbeat added in
b6fbd18e09. The heartbeat has not served
its stated purpose of drawing attention to slow tests, and the heartbeat
can kill builds with SELinux enforcing enabled. While the heartbeats can
be disabled via the environment variable PULSE_SERVER, and SELinux
policy files can be changed, since the heartbeats are not accomplishing
their intended purpose they should be removed.
Relates #19071
This commit upgrades JNA from version 4.1.0 to 4.2.2. Additionally, this
dependency is now non-optional as JNA is dual-licensed with Apache
License 2.0 since JNA 4.0.0.
Relates #19045
ES only sends a non-200 response when all shards fail, but we should
fail the tests generated by docs if any of them fail.
Depending on the outcome of #18978 this might be a temporary
workaround.
Today we only emit that the setting wasn't found unless we have
some did-you-mean (DYM) suggestions. Yet, if a setting is not found at all and there
are no suggestions due to typos it's likely a removed setting or the plugin
that is supposed to be configured is not installed.
This commit adds some info text to the exception to help the user debug
the problem before opening bug reports.
Instead of emitting:
`unknown setting [foo.bar]`
we now emit:
`unknown setting [foo.bar] please check the migration guide for removed settings and ensure that the plugin you are configuring is installed`
Relates to #18663
We pretended to be able to act like a node of a different version for so long that it's
time to be honest and remove this ability. It's just confusing, and where needed
and tested we should build dedicated extension points.
This change removes some unnecessary dependencies from ClusterService
and cleans up ClusterName creation. ClusterService is now not created
by guice anymore.
Projects that don't depend on elasticsearch-test fail otherwise because org.elasticsearch.test.EsIntegTestCase (default integ test class) is not in the classpath. They should provide their own integ test base class, but having integration tests should not be mandatory. One can simply set skipIntegTestsInDisguise to true to prevent loading of the integ test class.
Today we have a push model for registering basically anything. All our extension points
are defined on modules which we pass in to plugins. This is harder to maintain and adds
unnecessary dependencies on the modules themselves. This change moves towards a pull model
where the plugin offers a getter kind of method to get the extensions. This will also
help in the future if we need to pass dependencies to the extension points which can
easily be defined on the method as arguments if a pull model is used.
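Schematically, the difference between the two models looks like this (names here are entirely hypothetical, just to illustrate the shape):

```java
import java.util.Collections;
import java.util.List;

// Pull model: the module asks the plugin for its extensions via a getter.
// Any dependencies the extensions need can become method arguments here,
// instead of being baked into a module that gets pushed to the plugin.
interface ThingPlugin {
    default List<Thing> getThings() {
        return Collections.emptyList();
    }
}

interface Thing {}

// A plugin then simply overrides the getter instead of calling
// module.registerThing(...) from an onModule hook.
class MyPlugin implements ThingPlugin {
    @Override
    public List<Thing> getThings() {
        return Collections.singletonList(new Thing() {});
    }
}
```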
With this commit we add a benchmarks project that contains the necessary build
infrastructure and an example benchmark. It is added as a separate project to avoid
interfering with the regular build too much (especially sanity checks) and to keep
the microbenchmarks isolated.
Microbenchmarks are generated with `gradle :benchmarks:jmhJar` and can be run with
`gradle :benchmarks:jmh`.
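An example microbenchmark along these lines might look like the following (the class and what it measures are illustrative, not the actual benchmark added here):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringConcatBenchmark {
    private final String left = "foo";
    private final String right = "bar";

    @Benchmark
    public String concat() {
        // JMH handles warmup, iterations, and uses the return value to avoid dead-code elimination.
        return left + right;
    }
}
```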
We intentionally do not use the
[jmh-gradle-plugin](https://github.com/melix/jmh-gradle-plugin) as it causes all
sorts of problems (dependencies are not properly excluded, not all JMH parameters
can be set) and it adds another abstraction layer that is not needed.
Closes #18242
This will let things that don't depend on :test:framework like the
client use it.
Also skip initializing the classes we check because we don't care
about their initialization behavior, since we're not executing them.
This makes the naming conventions check pretty close to instant
from a "human eye" perspective.
Groovy does some crazy capturing when using closures inside a loop. In
this case, it somehow decided the local loop variable would be
modified, and so each closure was getting a wrapped value that would be
updated on each loop iteration, until all the closures pointed at the
last value. This change fixes the loop to extract the object to be used by
the closures.
Testability of ICSS is achieved by introducing interfaces for IndicesService, IndexService and IndexShard. These interfaces extract all relevant methods used by ICSS (which do not deal directly with store) and give the possibility to easily mock all the store behavior away in the tests (and cuts down on dependencies).
Folded grok processor into ingest-common module.
The rest tests have been moved to ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests, these tests were already covered in the rest tests
Added ingest test plugin in test infra so that each module testing integration with ingest doesn't need to write its own plugin
Moved reindex ingest tests to qa module
Closes #18490
This adds support for setting the refresh request parameter to
`wait_for` in the `index`, `delete`, `update`, and `bulk` APIs. When
`refresh=wait_for` is set those APIs will not return until their
results have been made visible to search by a refresh.
Also it adds a `forced_refresh` field to the response of `index`,
`delete`, `update`, and to each item in a bulk response. This will
be true for requests with `?refresh` or `?refresh=true` and will be
true for some requests (see below) with `refresh=wait_for` but ought
to otherwise always be false.
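As a usage sketch with the Java client (the setter follows the `setRefresh(RefreshPolicy)` signature mentioned in the commit log below; the enum constant name and import location are assumptions):

```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; // location assumed
import org.elasticsearch.client.Client;

public class WaitForRefreshExample {
    // Index a document and only return once it is visible to search,
    // i.e. the Java-client equivalent of ?refresh=wait_for.
    static IndexResponse indexAndWait(Client client) {
        return client.prepareIndex("test", "doc", "1")
                .setSource("{\"user\":\"kimchy\"}")
                .setRefresh(RefreshPolicy.WAIT_UNTIL) // constant name is an assumption
                .get();
    }
}
```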
`refresh=wait_for` is implemented as a list of
`Tuple<Translog.Location, Consumer<Boolean>>`s in the new `RefreshListeners`
class that is managed by `IndexShard`. The dynamic, index scoped
`index.max_refresh_listeners` setting controls a maximum number of
listeners allowed in any shard. If more than that many listeners
accumulate in the engine then a refresh will be forced, the thread that
adds the listener will be blocked until the refresh completes, and then the
listener will be called with a `forcedRefresh` flag so it knows that it was
the "straw that broke the camel's back". These listeners are only used by
`refresh=wait_for` and that flag manifests itself as `forced_refresh` being
`true` in the response.
About half of this change comes from piping async-ness down to the appropriate
layer in a way that is compatible with the ongoing work on sequence ids.
Closes #1063
You can look up the winding story of all the commits here:
https://github.com/elastic/elasticsearch/pull/17986
Here are the commit messages in case they are interesting to you:
commit 59a753b89109828d2b8f0de05cb104fc663cf95e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:18:23 2016 -0400
Replace a method reference with implementing an interface
Saves a single allocation and forces more commonality
between the WriteResults.
commit 31f7861a85b457fb7378a6f27fa0a0c171538f68
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:07:55 2016 -0400
Revert "Replace static method that takes consumer with delegate class that takes an interface"
This reverts commit 777e23a6592c75db0081a53458cc760f4db69507.
commit 777e23a6592c75db0081a53458cc760f4db69507
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 09:29:35 2016 -0400
Replace static method that takes consumer with delegate class that takes an interface
Same number of allocations, much less code duplication.
commit 9b49a480ca9587a0a16ebe941662849f38289644
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 08:25:38 2016 -0400
Patch from boaz
commit c2bc36524fda119fd0514415127e8901d94409c8
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:46:27 2016 -0400
Fix docs
After updating to master we are actually testing them.
commit 03975ac056e44954eb0a371149d410dcf303e212
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:20:11 2016 -0400
Cleanup after merge from master
commit 9c9a1deb002c5bebb2a997c89fa12b3d7978e02e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:09:14 2016 -0400
Breaking changes notes
commit 1c3e64ae06c07a85f7af80534fab88279adb30b4
Merge: 9e63ad6 f67e580
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:00:05 2016 -0400
Merge branch 'master' into block_until_refresh2
commit 9e63ad6de52d0b28f0b6d7203721baf1ebf6f56b
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 13:21:27 2016 -0400
Test for TransportWriteAction
commit 522ecb59d39b3c9e8df0d3b8df34b9e7aeaf0ce9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:30:18 2016 -0400
Document deprecation
commit 0cd67b947f58867e704a1f0e66928a6fb5a11f11
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:26:23 2016 -0400
Deprecate setRefresh(boolean)
Users should use `setRefresh(RefreshPolicy)` instead.
commit aeb1be3f2c501990b33fb1f8230d496035f498ef
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:12:27 2016 -0400
Remove checkstyle suppression
It is fixed
commit 00d09a9caa638b6f90f4896b5502dd98d8fad56e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:08:28 2016 -0400
Improve comment
commit 788164b898a6ee2878a273961230122b7386c3c9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:01:01 2016 -0400
S/ReplicatedWriteResponse/WriteResponse/
Now it lines up with WriteRequest.
commit b74cf3fe778352b140355afcaa08d3d4412d749d
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 18:27:52 2016 -0400
Preserve `?refresh` behavior
`?refresh` means the same things as `?refresh=true`.
commit 30f972bdaeaaa0de6fe67746cdb8628aa86f5a8c
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 17:39:05 2016 -0400
Handle hanging documents
If a document is added to the index during a refresh we weren't properly
firing its refresh listener. This happened because the way we detect
whether a refresh makes something visible or not is imperfect. It is
ok because it always errs on the side of thinking that something isn't
yet visible.
So when a document arrives during a refresh the refresh listeners
won't think it made it into a refresh when, often, it does. The way
we work around this is by telling Elasticsearch that it ought to
trigger a refresh if there are any pending refresh listeners even
if there aren't pending documents to update. Lucene short circuits
the refresh so it doesn't take that much effort, but the refresh
listeners still get the signal that a refresh has come in and they
still pick up the change and notify the listener.
This means that the time that a listener can wait is actually slightly
longer than the refresh interval.
commit d523b5702b60c7ba309fb0dcf3cd3a4798f11960
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:34:01 2016 -0400
Explain Integer.MAX_VALUE
commit 4ffb7c0e954343cc1c04b3d7be2ebad66d3a016b
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:27:39 2016 -0400
Fire all refresh listeners in a single thread
Rather than queueing a runnable each.
commit 19606ec3bbe612095df45eba734c5b7eb2709c01
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:09:52 2016 -0400
Assert translog ordering
commit 6bb4e5c75e850f4a42518f06fbc955f7ec76d245
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 13:17:44 2016 -0400
Support null RefreshListeners in InternalEngine
Just skip using it.
commit 74be1480d6e44af2b354ff9ea47c234d4870b6c2
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 18:02:03 2016 -0400
Move funny ShardInfo hack for bulk into bulk
This should make it easier to understand because it is closer to where it
matters....
commit 2b771f8dabd488e056cfdc9989608d18264ddfb0
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:39:46 2016 -0400
Pull listener out into an inner class with javadoc and stuff
commit 058481ad72019c0492b03a7a4ac32a48673697d3
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:33:42 2016 -0400
Fix javadoc links
commit d2123b1cabf29bce8ff561d4a4c1c1d5b42bccad
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:28:09 2016 -0400
Make more stuff final
commit 8453fc4f7850f6a02fb5971c17a942a3e3fd9f7b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:26:48 2016 -0400
Javadoc
commit fb16d2fc7016c1e8e1621d481e8781c7ef43326c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 16:14:48 2016 -0400
Rewrite refresh docs
commit 5797d1b1c4d233c0db918c0d08c21731ddccd05e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 15:02:34 2016 -0400
Fix forced_refresh flag
It wasn't being set.
commit 43ce50a1de250a9e073a2ca6cbf55c1b4c74b11b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 14:02:56 2016 -0400
Delay translog sync and flush until after refresh
The sync might have occurred for us during the refresh so we
have less work to do. Maybe.
commit bb2739202e084703baf02cfa58f09517598cf14e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 13:08:08 2016 -0400
Remove duplication in WritePrimaryResult and WriteReplicaResult
commit 2f579f89b4867a880396f2e7fcffc508449ff2de
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 12:19:05 2016 -0400
Clean up registration of RefreshListeners
commit 87ab6e60ca5ba945bf0fba84784b2bbe53506abf
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 11:28:30 2016 -0400
Shorten lock time in RefreshListeners
Also use null to represent no listeners rather than an empty list.
This saves allocating a new ArrayList every refresh cycle on every
index.
commit 0d49d9c5720dadfb67da3fa760397bf6d874601c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 10:46:18 2016 -0400
Flip relationship between RefreshListeners and Engine
Now RefreshListeners comes to Engine from EngineConfig.
commit b2704b8a39382953f8f91a9743e894ee289f7514
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:58 2016 -0400
Remove unused imports
Maybe I added them?
commit 04343a22647f19304d9dc716b3fac9b183227f63
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:52 2016 -0400
Javadoc
commit da1e765678890a02d61d8a29aa433274beb5e00c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:26:35 2016 -0400
Reply with non-null
Also move the fsync and flush to before the refresh listener stuff.
commit 5d8eecd0d904b497844b4c81c46477bd6178ed3a
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:58:47 2016 -0400
Remove funky synchronization in AsyncReplicaAction
commit 1ec71eea0f4e1228ae1497d982307be818ef4b65
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:01:14 2016 -0400
s/LinkedTransferQueue/ArrayList/
commit 7da36a4ceed2ccf7955138c3b005237fa41efcb4
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:46:38 2016 -0400
More cleanup for RefreshListeners
commit 957e9b77007c32ee75dde152c6622bab065d5993
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:34:13 2016 -0400
/Consumer<Runnable>/Executor/
commit 4d8bf5d4a70dcc56150c8d8d14165cd23d308b3c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:20:42 2016 -0400
explain
commit 15d948a348089bb2937eec5ac4e96f3ec67dbe32
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:17:59 2016 -0400
Better....
commit dc28951d02973fc03b4d51913b5f96de14b75607
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 21:09:20 2016 -0400
Javadocs and compromises
commit 8eebaa89c0a1ee74982fbe0d56d1485ca2ae09db
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 20:52:49 2016 -0400
Take boaz's changes to their logical conclusion and unbreak important stuff like bulk
commit 7056b96ea412f275005b93e3570bcff895859ed5
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:32 2016 -0400
Patch from boaz
commit 87be7eaed09a274cc6a99d1a3da81d2d7bf9dd64
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:13 2016 -0400
Revert "Move async parts of replica operation outside of the lock"
This reverts commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2.
commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 22:53:15 2016 -0400
Move async parts of replica operation outside of the lock
commit b8cadcef565908b276484f7f5f988fd58b38d8b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 16:17:20 2016 -0400
Docs
commit 91149e0580233bf79c2273b419fe9374ca746648
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:17:40 2016 -0400
Finally!
commit 1ff50c2faf56665d221f00a18d9ac88745904bf5
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:01:53 2016 -0400
Remove Translog#lastWriteLocation
I wasn't being careful enough with locks so it wasn't right anyway.
Instead this builds a synthetic Translog.Location when you call
getWriteLocation with much more relaxed equality guarantees. Rather
than being equal to the last Translog.Location returned it is
simply guaranteed to be greater than the last location returned
and less than the next.
commit 55596ea68b5484490c3637fbad0d95564236478b
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 14:40:06 2016 -0400
Remove listener from shardOperationOnPrimary
Create instead asyncShardOperationOnPrimary which is called after
all of the replica operations are started to handle any async
operations.
commit 3322e26211bf681b37132274ee158ae330afc28b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 17:20:02 2016 -0400
Increase default maximum number of listeners to 1000
commit 88171a8322a424e624d48960fb4c98dd43e4d671
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:40:57 2016 -0400
Rename test
commit 179c27c4f829f2c6ded65967652cf85adaf2ae52
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:35:27 2016 -0400
Move refresh listeners into their own class
They still live at the IndexShard level but they live on their
own in RefreshListeners which interacts with IndexShard using a
couple of callbacks and a registration method. This lets us test
the listeners without standing up an entire IndexShard. We still
test the listeners against an InternalEngine, because the interplay
between InternalEngine, Translog, and RefreshListeners is complex
and important to get right.
commit d8926d5fc1d24b4da8ccff7e0f0907b98c583c41
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 11:02:38 2016 -0400
Move refresh listeners into IndexShard
commit df91cde398eb720143a85a8c6fa19bdc3a74e07d
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 16:01:03 2016 -0400
unused import
commit 066da45b08148b266e4173166662fc1b3f66ed53
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 15:54:11 2016 -0400
Remove RefreshListener interface
Just pass a Translog.Location and a Consumer<Boolean> when registering.
commit b971d6d3301c7522b2e7eb90d5d8dd96a77fa625
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:41:06 2016 -0400
Docs for setForcedRefresh
commit 6c43be821eaf61141d3ec520f988aad3a96a3941
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:34:39 2016 -0400
Rename refresh setter and getter
commit e61b7391f91263a4c4d6107bfbc2a828bbcc805c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 22:48:09 2016 -0400
Trigger listeners even when there is no refresh
Each refresh gives us an opportunity to pick up any listeners we may
have left behind.
commit 0c9b0477085c021f503db775640d25668e02f635
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 20:30:06 2016 -0400
REST
commit 8250343240de7e63118c663a230a7a314807a754
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 19:34:22 2016 -0400
Switch to estimated count
We don't need a linear time count of the number of listeners - a volatile
variable is good enough to guess. It probably undercounts more than it
overcounts but it isn't a huge problem.
commit bd531167fe54f1bde6f6d4ddb0a8de5a7bcc18a2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:21:02 2016 -0400
Don't try and set forced refresh on bulk items without a response
NullPointerExceptions are bad. If the entire request fails then the user
has worse problems then "did these force a refresh".
commit bcfded11515af5e0b3c3e36f3c2f73f5cd26512e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:14:20 2016 -0400
Replace LinkedList and synchronized with LinkedTransferQueue
commit 8a80cc70a76375a7593745884cb987535b37ca80
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 17:38:24 2016 -0400
Support for update
commit 1f36966742f851b7328015151ef6fc8f95299af2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:46:06 2016 -0400
Cleanup translog tests
commit 8d121bf35eb265b8a0aee9710afeb1b054a113d4
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:40:53 2016 -0400
Cleanup listener implementation
Much more testing too!
commit 2058f4a808762c4588309f21b13b677245832f2c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:45:55 2016 -0400
Pass back information about whether we refreshed
commit e445cb0cb91ebdbcfdbf566696edb2bf1c84a882
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:03:31 2016 -0400
Javadoc
commit 611cbeeaeb458f4b428bfc43a1ee6652adf4baff
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:01:40 2016 -0400
Move ReplicationResponse
now it is in the same package as its request
commit 9919758b644fd73895fb88cd6a4909a8387eb2e2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:00:14 2016 -0400
Oh boy that wasn't working
commit 247cb483c4459dea8e95e0e3bd2e4bf8d452c598
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:29:37 2016 -0400
Basic block_until_refresh exposed to java client
and basic "is it plugged in" style tests.
commit 46c855c9971cb2b748206d2afa6a2d88724be3ba
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:11:10 2016 -0400
Move test to own class
commit a5ffd892d0a352ae7e9757f2640fc2a1fa656bf2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 07:44:25 2016 -0400
WIP
commit 213bebb6ece11b85d17e44af9a54fc2e5e332d39
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:35:52 2016 -0400
Add refresh listeners
commit a2bc7f30e6d4857a1224ef5a89909b36c8f33731
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:11:55 2016 -0400
Return last written location from refresh
commit 85033a87551da89f36a23d4dfd5016db218e08ee
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 20:28:21 2016 -0400
Never reply to replica actions while you have the operation lock
This last thing was causing periodic test failures because we were
replying while we had the operation lock. Now, we probably could get
away with that in most cases but the tests don't like it and it isn't
a good idea to do network io while you have a lock anyway. So this
prevents it.
commit 1f25cf35e796835b3827b8a4110e09e5de61784c
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:56:18 2016 -0400
Cleanup
commit 52c5f7c3f04710901f503334239a611c0e21c85a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:33:00 2016 -0400
Add a listener to shard operations
commit 5b142dc331214c8eef90587144f4b3f959f9eced
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 18:03:52 2016 -0400
Cleanup
commit 3d22b2d7ceb473db339259452a7c4f117ce86069
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:59:55 2016 -0400
Push the listener into shardOperationOnPrimary
commit 34b378943b8185451acf6350f661c0ad33b5836d
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:48:47 2016 -0400
Doc
commit b42b8da968d42cc7414020c7b199606a5dcce50a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:45:40 2016 -0400
Don't finish early if the primary finishes early
We use a "fake" pending shard that we resolve when the replicas have
all started.
commit 0fc045b56e1e02a48c30383ac50a281d5af7e0b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:30:06 2016 -0400
Make performOnPrimary async
Instead of returning Tuple<Response, ReplicaRequest> it returns
ReplicaRequest and takes an ActionListener<Response> as an argument.
We call the listener immediately to preserve backwards compatibility
for now.
commit 80119b9a26ede96a865af45904c3ac69d5b19b59
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:51:53 2016 -0400
Factor out common code in shardOperationOnPrimary
commit 0642083676702618f900fa842c08802a04c1a53e
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:32:29 2016 -0400
Factor out common code from shardOperationOnReplica
commit 8bdc415fedaaa9f2d0c555590a13ec4699a7c3f7
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:23:28 2016 -0400
Create ReplicatedMutationRequest
Superclass for index, delete, and bulkShard requests.
commit 0f8fa846a2822c4293df32fed18c9b99660b39ff
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:10:30 2016 -0400
Create TransportReplicatedMutationAction
It is the superclass of replication actions that mutate data: index, delete,
and shardBulk. shardFlush and shardRefresh are replication actions but they
do not extend TransportReplicatedMutationAction because they don't change
the data, only shuffle it around.
* [TEST] wait for yellow after setup doc tests
We have many places in the docs where we expect an index to be
yellow before we execute a query. Therefore we have to
always wait for yellow after setup.
We still have a wrapper called RestTestClient that is very specific to Rest tests, as well as RestTestResponse etc. but all the low level bits around http connections etc. are now handled by RestClient.
If this option is enabled on a processor it silently catches any processor related failure and continues executing the rest of the pipeline.
Closes #18493
There is no reason to read the entire marvel hero file to test the features;
it might take several seconds to do so, which is unnecessary.
This commit also splits SearchSuggestTests into core and modules/mustache
It also adds @Nightly to the forbidden APIs to make sure we don't use it, since nightly tests won't run in CI these days.
This change makes it possible to compile a separate project with e.g. targetCompatibility 1.7. Adds specific options (compact profile) only when compiling for >= 1.8.
It came out with improvements around IDEA integration and language levels. This will make it possible to have the upcoming java client as a new project compiled against java 7 and have IDEA working on the right language level.
We currently fail on any deprecations found during the build. However,
this includes things deprecated within ES, which adds a heavy burden in
order to deprecate APIs (requiring suppressions to be added to all internal
callers of the API).
This change adds `-deprecation` to xlint. We should consider in the
future having a task to "see" what deprecated apis we currently use for
analysis.
The syntax highlighter only supports [source,js].
Also adds a check to the rest test generator that runs during
the build that'll fail the build if it sees `[source,json]`.
Significant changes:
* AbstractQueryTestCase has moved to the test framework module, in order to allow query builder tests in modules and plugins
* Added support to AbstractQueryTestCase to register plugins
* Lift the restriction that only one percolator could be added per index. This validation existed in MapperService, but because the percolator moved to a module it could no longer exist there. Instead of bringing it back it was removed. This validation existed since the percolator cache only supported one percolator query per document, since the percolator cache has been removed this restriction could removed as well.
* While moving percolator tests to the new module, also removed a couple of tests for the deprecated percolate and mpercolate api. These APIs are now sugar APIs for bwc and redirect to the search and msearch APIs. Some tests were still testing as if the percolate and mpercolate APIs did the percolation, but this is no longer the case and these tests could be removed.
This now requires that system properties passed to Gradle must be in the form of "-Dtests.es.*" instead of
"-Des.*". It then chops off "tests.es." and passes that as a "-E" property to Elasticsearch.
Also changed system properties:
- `tests.logger.level` became `tests.es.logger.level`
- `node.mode` became `tests.es.node.mode`
- `node.local` became `tests.es.node.local`
This change makes ES compile with java9 again, build 118.
* There are a handful of changes due to failure to determine types during compile.
* The attachment plugins which use tika needed to have tika upgraded in order to pick up fixes there for java 9.
* azure discovery and s3 repository indirectly depend on jaxb, which is no longer in the default modules. They now add a jaxb dependency externally, and make JarHell allow for this package.
This commit passes the system property tests.logger.level down to the
external nodes launched in integration tests. Specific tests that want
to override the default logging level should push down a setting to
the nodes using cluster configuration instead of pushing down a system
property to the nodes.
Relates #18489
Today when parsing settings during bootstrap, we add a system property
for every Elasticsearch setting. Additionally, settings can be set via
system properties. This commit simplifies this situation.
- settings are no longer propagated to system properties
- system properties can not be used to set settings
- the "es." prefix on settings is no longer required (nor permitted)
- test logging has a dedicated system property (tests.logger.level)
Relates #18198
Two of the snippets in validate weren't working properly so they are
marked as skip and linked to this:
https://github.com/elastic/elasticsearch/issues/18254
We didn't properly handle empty parameter values. We were sending
them as the literal string "null". Now we do better and send them
as the empty string.
This commit refactors the JvmGcMonitorService so that it can be
tested. In particular, hooks are added to verify that the
JvmMonitorService correctly observes slow GC events, and that the
JvmGcMonitorService logs the correct messages.
Relates #18378
A dangling index is no longer imported if a tombstone for that index
(same name and UUID) exists in the cluster state. This resolves a
situation where if an index data folder was copied into a node's data
directory while the node is running and that index had a tombstone in
the cluster state, the index would still get imported.
Closes #18250
Closes #18249
This makes it much easier to apply to other projects.
Fixes to doc tests infrastructure:
* Fix comparing lists. Was totally broken.
* Fix order of actual vs expected parameters.
* Allow multiple `// TESTRESPONSE` lines with substitutions to join
into one big list of substitutions. This lets the docs look
tidier.
* Exclude build from snippet scanning
* Allow subclasses of ESRestTestCase access to the admin execution context
This is a follow up to #18173 and includes adding pom generation to the
fake build-tools project, which is really just buildSrc, but builds
during normal builds.
In preparation for a unified release process, we need to be able to
generate the pom files independently of trying to actually publish. This
change adds back the maven-publish plugin just for that purpose. The
nexus plugin still exists for now, so that we do not break snapshots,
but that can be removed at a later time once snapshots are happening
through the unified tools. Note I also changed the directory jars are written
into so that all our artifacts are under build/distributions.
This was broken recently as part of making the vagrant tasks extend
LoggedExec. This change fixes the progress logger to not be started
until we start seeing output from vagrant.
o/e/snapshots/Snapshot and o/e/snapshots/SnapshotInfo contain the same
fields and represent the same information. Snapshot was used to
maintain snapshot information in the snapshot repository, while
SnapshotInfo was used to represent the snapshot information as presented
through the REST layer. This removes the Snapshot class and combines
all uses into the SnapshotInfo class.
Closes #18167
Adds infrastructure so `gradle :docs:check` will extract tests from
snippets in the documentation and execute the tests. This is included
in `gradle check` so it should happen on CI and during a normal build.
By default each `// AUTOSENSE` snippet creates a unique REST test. These
tests are executed in a random order and the cluster is wiped between
each one. If multiple snippets chain together into a test you can annotate
all snippets after the first with `// TEST[continued]` to have the
generated tests for both snippets joined.
Snippets marked as `// TESTRESPONSE` are checked against the response
of the last action.
See docs/README.asciidoc for lots more.
Closes #12583. That issue is about catching bugs in the docs during build.
This catches *some* bugs in the docs during build which is a good start.
This change makes the vagrant tasks extend LoggedExec, so that the
entire vagrant output can be dumped on failure (and completely logged
when using --info). It should help for debugging issues like #18122.
This change fixes the generation of plugin properties files when the
version changes. Before, it would not regenerate, and running integTest
would fail with an incompatible version error.
Gradle has a "shortcut" which omits the target and source compatibility
we set when it thinks it is not necessary (eg gradle is running on the
same version as the target compat). However, the way we compile against
java 9 is to set javac to use java 9, while gradle still runs on java 8.
This change makes -source and -target explicit for now, until these
"optimizations" can be removed from gradle.
Closes #18039
Switch something from an explicit toString to Strings.toString which
is the same thing but with more code reuse.
Also renamed a constant to be CONSTANT_CASE.
* Add isSearchable and isAggregatable (collapsed to true if any of the instances of that field are searchable or aggregatable).
* Accept wildcards in field names.
* Add a section named conflicts for fields with the same name but with incompatible types (instead of throwing an exception).
`ip` fields currently fail range queries when either bound is exclusive. This
commit makes ranges also work in the exclusive case to be consistent with other
data types.
Replace readFrom with a constructor that takes StreamInput or a static method.
In one case (ValuesSourceType) we no longer need to serialize the data
at all!
Relates to #17085
Extracts all the replication logic that is done on the Primary to a separate class called ReplicationOperation. The goal
here is to make unit testing of this logic easier and in the future allow setting up tests that work directly on IndexShards
without the need for networking.
Closes #16492
The checkstyle configuration files were being accessed as resources within the project and
being converted from a URL to a File by gradle. This works when the build tools project is being
referenced as a local project. However, when using the published jar the URL points to a resource
in the jar file, that URL cannot be converted to a File object and causes the build to fail.
This change copies the files into a `checkstyle` directory in the project build folder and always uses
File objects pointing to the copied files.
This makes all numeric fields including `date`, `ip` and `token_count` use
points instead of the inverted index as a lookup structure. This is expected
to perform worse for exact queries, but faster for range queries. It also
requires less storage.
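At the Lucene level the new lookup structure boils down to point fields and their queries, roughly like this (a standalone Lucene sketch, not Elasticsearch mapper code):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.search.Query;

public class PointsSketch {
    public static void main(String[] args) {
        Document doc = new Document();
        // Values are indexed into a BKD tree instead of the inverted index.
        doc.add(new LongPoint("my_long", 42L));

        // Range queries walk the tree, which is why ranges get faster
        // while exact lookups may get a bit slower.
        Query range = LongPoint.newRangeQuery("my_long", 10L, 100L);
        Query exact = LongPoint.newExactQuery("my_long", 42L);
        System.out.println(range + " " + exact);
    }
}
```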
Notes about how the change works:
- Numeric mappers have been split into a legacy version that is essentially
the current mapper, and a new version that uses points, eg.
LegacyDateFieldMapper and DateFieldMapper.
- Since new and old fields have the same names, the decision about which one
to use is made based on the index creation version.
- If you try to force using a legacy field on a new index or a field that uses
points on an old index, you will get an exception.
- IP addresses now support IPv6 via Lucene's InetAddressPoint and store them
in SORTED_SET doc values using the same encoding (fixed length of 16 bytes
and sortable).
- The internal MappedFieldType that is stored by the new mappers does not have
any of the points-related properties set. Instead, it keeps setting the index
options when parsing the `index` property of mappings and does
`if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`
when parsing documents.
Known issues that won't fix:
- You can't use numeric fields in significant terms aggregations anymore since
this requires document frequencies, which points do not record.
- Term queries on numeric fields will now return constant scores instead of
giving better scores to the rare values.
Known issues that we could work around (in follow-up PRs, this one is too large
already):
- Range queries on `ip` addresses only work if both the lower and upper bounds
are inclusive (exclusive bounds are not exposed in Lucene). We could either
decide to implement it, or drop range support entirely and tell users to
query subnets using the CIDR notation instead.
- Since IP addresses now use a different representation for doc values,
aggregations will fail when running a terms aggregation on an ip field on a
list of indices that contains both pre-5.0 and 5.0 indices.
- The ip range aggregation does not work on the new ip field. We need to either
implement range aggs for SORTED_SET doc values or drop support for ip ranges
and tell users to use filters instead. #17700
Closes #16751
Closes #17007
Closes #11513
This commit contains the following improvements/fixes:
1. Renaming method names and variables to better reflect the purpose
of the method and the semantics of the variable.
2. For deleting indexes, replace the closed parameter passed to the
delete index/store methods with obtaining the index's state from the
IndexSettings that is already passed in.
3. Added tests to the IndexWithShadowReplicaIT suite, some of which
show issues in the shadow replica delete process that are captured in
Github issue 17695.
Closes #17638
This commit removes the last remaining uses of JAVA_OPTS. Now searching
the codebase for the regex '(?<!ES_)JAVA_OPTS' only shows the places that
warn of its removal and the note about it in the migration docs.
* Tokens in the same position are grouped into a SynonymQuery.
* The default operator is applied on tokens in different positions.
* The wildcard is applied to the terms in the last position only.
Fixes #2183
When an index is recovered from disk its metadata is imported first and the master reaches out to the nodes looking for shards of that index. Sometimes those requests reach other nodes before the cluster state is processed by them. At the moment, that situation disables the checking of the store, which requires the metadata (indices with a custom path need to know where the data is). When corruption hits this means we may assign a shard to a node with a corrupted store, which will be caught later on but causes confusion. Instead we can try loading the metadata from disk in those cases.
Relates to #17630
* upgrades numerics to new Point format
* updates geo api changes
* adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
* cuts over to ES GeoHashUtils
* Create one AllField field per field eligible for _all.
* Add a positionIncrementGap (with a size of 100, not configurable) between
each entry in order to distinguish fields when doing phrase query on _all.
This removes PROTOTYPEs from ScoreFunctionsBuilders. To do so we rework
registration so it doesn't need PROTOTYPEs and lines up with the recent
changes to query registration.
* [TEST] check registered queries one by one in SearchModuleTests
* Switch to using ParseField to parse query names
If we have a deprecated query name, at the moment we don't have a way to log any deprecation warning nor fail when we are in strict mode. With this change we use ParseField, which will take care of the camel casing that we currently do manually (so that one day we can remove it more easily). This also means, that each query will have a unique preferred name, and all the other names are deprecated.
Terms query "in" synonym is now formally deprecated, as well as fuzzy_match, match_fuzzy, match_phrase and match_phrase_prefix for match query, mlt for more_like_this and geo_bbox for geo_bounding_box. All these will be removed in 6.0.
Every QueryParser holds now a ParseField constant called QUERY_NAME_FIELD that holds the name for it. The first name is the preferred one, all the others are deprecated. The first name is taken from the NAME constant already present in each query builder object, so that we somehow keep the serialization constant separated from ParseField. This change also allowed us to remove the names method from the QueryParser interface.
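A sketch of what such a constant looks like, using the terms query's deprecated "in" synonym mentioned above as the example:

```java
import org.elasticsearch.common.ParseField;

public class TermsQueryNames {
    // The first argument is the preferred name; the remaining ones are deprecated
    // names that trigger a deprecation warning (or a failure in strict mode).
    public static final ParseField QUERY_NAME_FIELD = new ParseField("terms", "in");
}
```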
This commit introduces SearchOperationListeners which allow hooking
into the search operation lifecycle and executing operations like slow-logs
and statistics collection in a transparent way. SearchOperationListeners
can be registered on the IndexModule just like IndexingOperationListeners.
The main consumers (slow log) have already been moved out of IndexService
into IndexModule which reduces the dependency on IndexService as well as
IndexShard and makes slow logging transparent.
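Registration and a listener might look roughly like this (the listener method names are assumptions based on the IndexingOperationListener analogue; the threshold is just for illustration):

```java
import org.elasticsearch.index.IndexModule;
import org.elasticsearch.index.shard.SearchOperationListener;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.search.internal.SearchContext;

public class SlowQueryLoggingPlugin extends Plugin {
    @Override
    public void onIndexModule(IndexModule indexModule) {
        // Registered on the IndexModule, just like IndexingOperationListeners.
        indexModule.addSearchOperationListener(new SearchOperationListener() {
            @Override
            public void onQueryPhase(SearchContext searchContext, long tookInNanos) {
                if (tookInNanos > 500_000_000L) { // ~500ms, illustrative threshold
                    System.out.println("slow query took " + tookInNanos + "ns");
                }
            }
        });
    }
}
```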
Closes #17398
Today the basic node settings like `node.data` and `node.master` can't really be fully validated
since we allow specifying custom user attributes on the node level. To support that, we have to
add a wildcard setting for `node.*` to let these settings pass validation.
Instead we should require a more constrained prefix like `node.attr.` that defines a namespace
that is reserved for user attributes.
This commit adds a new namespace for attributes in `node.attr`.
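For example, a custom attribute now lives under the reserved prefix instead of directly under `node.*` (the `rack` attribute below is just an illustration):

```java
import org.elasticsearch.common.settings.Settings;

public class NodeAttrExample {
    public static void main(String[] args) {
        // Before: node.rack: rack1 (only passed validation thanks to a node.* wildcard)
        // After:  node.attr.rack: rack1 (user attributes live in their own namespace)
        Settings settings = Settings.builder()
                .put("node.attr.rack", "rack1")
                .build();
        System.out.println(settings.get("node.attr.rack"));
    }
}
```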
Closes #17280
When we test we add `-Djna.nosys=true` to the system properties but
we don't add it to system properties when running the naming conventions
test. This was causing the build to fail on a newly minted Ubuntu 15.10
machine, presumably because I made the mistake of installing maven using
the system package manager.
This adds a new `/_cluster/allocation/explain` API that explains why a
shard can or cannot be allocated to nodes in the cluster. Additionally,
it will show where the master *desires* to put the shard, according to
the `ShardsAllocator`.
It looks like this:
```
GET /_cluster/allocation/explain?pretty
{
"index": "only-foo",
"shard": 0,
"primary": false
}
```
Though, you can optionally send an empty body, which means "explain the
allocation for the first unassigned shard you find".
The output when a shard is unassigned looks like this:
```
{
"shard" : {
"index" : "only-foo",
"index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
"id" : 0,
"primary" : false
},
"assigned" : false,
"unassigned_info" : {
"reason" : "INDEX_CREATED",
"at" : "2016-03-22T20:04:23.620Z"
},
"nodes" : {
"V-Spi0AyRZ6ZvKbaI3691w" : {
"node_name" : "Susan Storm",
"node_attributes" : {
"bar" : "baz"
},
"final_decision" : "NO",
"weight" : 0.06666675,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
},
"Qc6VL8c5RWaw1qXZ0Rg57g" : {
"node_name" : "Slipstream",
"node_attributes" : {
"bar" : "baz",
"foo" : "bar"
},
"final_decision" : "NO",
"weight" : -1.3833332,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
} ]
},
"PzdyMZGXQdGhqTJHF_hGgA" : {
"node_name" : "The Symbiote",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 2.3166666,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
}
}
}
```
And when the shard *is* assigned, the output looks like:
```
{
"shard" : {
"index" : "only-foo",
"index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
"id" : 0,
"primary" : true
},
"assigned" : true,
"assigned_node_id" : "Qc6VL8c5RWaw1qXZ0Rg57g",
"nodes" : {
"V-Spi0AyRZ6ZvKbaI3691w" : {
"node_name" : "Susan Storm",
"node_attributes" : {
"bar" : "baz"
},
"final_decision" : "NO",
"weight" : 1.4499999,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
},
"Qc6VL8c5RWaw1qXZ0Rg57g" : {
"node_name" : "Slipstream",
"node_attributes" : {
"bar" : "baz",
"foo" : "bar"
},
"final_decision" : "CURRENTLY_ASSIGNED",
"weight" : 0.0,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
} ]
},
"PzdyMZGXQdGhqTJHF_hGgA" : {
"node_name" : "The Symbiote",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 3.6999998,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
}
}
}
```
Only "NO" decisions are returned by default, but all decisions can be
shown by specifying the `?include_yes_decisions=true` parameter in the
request.
Resolves #14593
Node roles are now serialized as well; they are not part of the node attributes anymore. DiscoveryNodeService takes care of dividing settings into attributes and roles. DiscoveryNode always requires attributes and roles to be passed in separately.
readFrom is confusing because it requires an instance of the type that it
is reading but it doesn't modify it. But we also have (deprecated) methods
named readFrom that *do* modify the instance. The "right" way to implement
the non-modifying readFrom is to delegate to a constructor that takes a
StreamInput so that the read object can be immutable. Now that we have
`@FunctionalInterface`s it is fairly easy to register things by referring
directly to the constructor.
This change modifies NamedWriteableRegistry so that it does that. It keeps
supporting `registerPrototype` which registers objects to be read by
readFrom but deprecates it and delegates it to a new `register` method
that allows passing a simple functional interface. It also cuts Task.Status
subclasses over to using that method.
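A sketch of the new registration style next to the deprecated prototype style (the exact parameter order of `register` and the chosen Task.Status class are assumptions):

```java
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.index.reindex.BulkByScrollTask;
import org.elasticsearch.tasks.Task;

public class RegistryWiring {
    static void wire(NamedWriteableRegistry registry) {
        // Deprecated style: register a prototype instance whose readFrom is called later.
        // registry.registerPrototype(Task.Status.class, BulkByScrollTask.Status.PROTOTYPE);

        // New style: register a reader, usually a constructor reference taking StreamInput,
        // so the object that gets read can be fully immutable.
        registry.register(Task.Status.class, BulkByScrollTask.Status.NAME, BulkByScrollTask.Status::new);
    }
}
```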
The start of #17085
This commit fixes a line-length checkstyle violation in
YamlSettingsLoaderTests.java and removes this file from the checkstyle
line-length suppressions.
This commit fixes a line-length checkstyle violation in
JsonSettingsLoaderTests.java and removes this file from the checkstyle
line-length suppressions.
This commit fixes a line-length checkstyle violation in
PropertiesSettingsLoader.java and removes this file from the checkstyle
line-length suppressions.
Change version, which required a minor fix in the RPM building.
In case of an alpha/beta version, the release will contain alpha/beta
as the RPM version cannot contain dashes/tildes.
We already archive index level settings if we find an unknown or invalid/broken
value for a setting on node startup. The same could potentially happen for persistent
cluster level settings if we remove a setting or if we add validation to a setting that
didn't exist in the past. To ensure that only valid settings are recovered into the cluster
state we archive them (prefix them with `archive.`) and log a warning. Tools that check the
cluster settings can then warn users that they have broken settings in their cluster state that
got archived.
This commit fixes string formatting issues in the error handling and
provides a better error message if malformed input is detected.
This commit also adds tests for both situations.
Relates to #17212
This is the last step to remove node-level services from IndexShard.
This means that tests can now more easily create an IndexShard instance
without starting a node and removes the dependency between IndexShard and Client/ScriptService
Today if something is wrong with the IndexMetaData we detect it very
late and most of the time if that happens we already allocated the index
and get endless loops and full log files on data-nodes. This change tries
to verify IndexService creation during initial state recovery on the master
and if the recovery fails the index is imported as `closed` and won't be allocated
at all.
Closes #17187
We currently have a ClusterService interface, implemented by InternalClusterService and a couple of test classes. Since the decoupling of the transport service and the cluster service, one can construct a ClusterService fairly easily, so we don't need this extra indirection.
Closes #17183
Also replaced the PercolatorQueryRegistry with the new PercolatorQueryCache.
The PercolatorFieldMapper stores the rewritten form of each percolator query's xcontent
in a binary doc values field. This makes sure that the query rewrite happens only during
indexing (some queries, for example, fetch shapes or terms in remote indices) and
speeds up the loading of the queries in the percolator query cache.
Because the percolator now works inside the search infrastructure a number of features
(sorting fields, pagination, fetch features) are available out of the box.
The following feature requests are automatically implemented via this refactoring:
Closes #10741
Closes #7297
Closes #13176
Closes #13978
Closes #11264
Closes #4317
This commit fixes the line-length checkstyle violations in
InstallPluginCommand.java and removes this from the list of files for
which the line-length check is suppressed.
Today we allow to set all kinds of index level settings on the node level which
is error prone and difficult to get right in a consistent manner.
For instance if some analyzers are set up in a yaml config file some nodes might
not have these analyzers and then index creation fails.
Nevertheless, this change allows some selected settings to be specified on a node level
for instance:
* `index.codec` which is used in a hot/cold node architecture and its value is really per node or per index
* `index.store.fs.fs_lock` which is also dependent on the filesystem a node uses
All other index level settings must be specified on the index level. For existing clusters the index must be closed
and all settings must be updated via the API on each of the indices.
Closes#16799
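For an existing index, the migration described above would look roughly like this (the index name and setting value are placeholders; `index.codec` can only be changed while the index is closed):
```
POST /my-index/_close

PUT /my-index/_settings
{
  "index.codec": "best_compression"
}

POST /my-index/_open
```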
Adds support for scheduling commands to run at a later time on another
thread pool in the current thread's context:
```java
Runnable someCommand = () -> {System.err.println("Demo");};
someCommand = threadPool.getThreadContext().preserveContext(someCommand);
threadPool.schedule(timeValueMinutes(1), Names.GENERAL, someCommand);
```
This happens automatically for calls to `threadPool.execute` but `schedule`
and `scheduleWithFixedDelay` don't do that, presumably because scheduled
tasks are usually context-less. Rather than preserve the current context
on all scheduled tasks this just makes it possible to preserve it using
the syntax above.
To make this all work it moves the Runnables that wrap the commands from
EsThreadPoolExecutor into ThreadContext.
This, or something like it, is required to support reindex throttling.
The build currently uses the old maven support in gradle. This commit
switches to use the newer maven-publish plugin. This will allow future
changes, for example, easily publishing to artifactory.
An additional part of this change makes publishing of build-tools part
of the normal publishing, instead of requiring a separate upload step
from within buildSrc. That also sets us up for a follow up to enable
precommit checks on the buildSrc code itself.
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to start up if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.
Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
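A hedged illustration of the split described above (the setting names are only placeholders):
```
# JVM-level system properties still go to the JVM, e.g. via ES_JAVA_OPTS:
ES_JAVA_OPTS="-Djava.io.tmpdir=/tmp/es" ./bin/elasticsearch

# Elasticsearch settings are now passed with -E instead of -D:
./bin/elasticsearch -Ecluster.name=prod -Epath.data=/var/data/elasticsearch
```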
This change adds the infrastructure to run the rest tests on a multi-node
cluster that uses two different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released to ensure the infrastructure works.
Relates to #14406
Closes#17072
Today we use hardcoded ports to form a cluster in the multi-node case.
The hardcoded URIs are passed to the unicast host list, which is error prone and
might cause problems if those ports are exhausted etc. This commit moves to a
less error prone way of forming the cluster where all nodes are started with port `0`
and all but the first node wait for the first node to write its ports file to form a
cluster. This seed node is enough to form a cluster.
We removed leniency from version parsing which caught problems with
-SNAPSHOT suffixes on plugin properties. This commit removes the -SNAPSHOT
from both es and the extension version and adds tests to ensure we can
parse older versions that allowed -SNAPSHOT in a BWC way.
Closes#16964
Squashed commit of the following:
commit a23f9d2d29220991aa498214530753d7a5a148c6
Merge: eec9c4e 0b0a251
Author: Robert Muir <rmuir@apache.org>
Date: Mon Mar 7 04:12:02 2016 -0500
Merge branch 'master' into lucene6
commit eec9c4e5cd11e9c3e0b426f04894bb2a6dae4f21
Merge: bc67205 675d940
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 13:45:00 2016 -0500
Merge branch 'master' into lucene6
commit bc67205bdfe1526eae277ab7856fc050ecbdb7b2
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 09:56:31 2016 -0500
fix test bug
commit a60723b007ff12d97b1810cef473bd7b553a0327
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:35:35 2016 +0100
Fix SimpleValidateQueryIT to put braces around boosted terms
commit ae3a49d7ba7ced448d2a5262e5d8ec98671a9090
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:27:25 2016 +0100
fix multimatchquery
commit ae23fdb88a8f6d3fb7ba60fd1aaf3fd72d899aa5
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:20:49 2016 +0100
Rewrite DecayFunctionScoreIT to be independent of the similarity used
This test relied a lot on the term scoring and compared scores
that are dependent on the similarity. This commit changes the base query
to be a predictable constant score query.
commit 366c2d518c35d31251033f1b6f6a93f6e2ae327d
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 14:06:14 2016 +0100
Fix scoring in tests due to changes to idf calculation.
Lucene 6 uses a different default similarity as well as a different
way to calculate IDF. In contrast to older version lucene 6 uses docCount per field
to calculate the IDF not the # of docs in the index to overcome the sparse field
cases.
commit dac99fd64ac2fa71b8d8d106fe68825e574c49f8
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 08:21:57 2016 -0500
don't hardcoded expected termquery score
commit 6e9f340ba49ab10eed512df86d52a121aa775b0f
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 08:04:45 2016 -0500
suppress deprecation warning until migrated to points
commit 3ac8908424b3fdad44a90a4f7bdb3eff7efd077d
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:21:43 2016 -0500
Remove invalid test: all commits have IDs, and its illegal to do this.
commit c12976288124ad1a26467e7e848fb810548e7eab
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:06:14 2016 -0500
don't test with unsupported back compat
commit 18bbfe76128570bc70883bf91ff4c44c82d27817
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:02:18 2016 -0500
remove now invalid lucene 4 backcompat test
commit 7e730e572886f0ef2d3faba712e4256216ff01ec
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:58:52 2016 -0500
remove now invalid lucene 4 backwards test
commit 244d2ab6868ba5ac9e0bcde3c2833743751a25ec
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:47:23 2016 -0500
use 6.0 codec
commit 5f64d4a431a6fdaa1234adca23f154c2a1de8284
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:43:08 2016 -0500
compile, javadocs, forbidden-apis, etc
commit 1f273cd62a7fe9ca8f8944acbbfc5cbdd3d81ccb
Merge: cd33921 29e3443
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 10:45:29 2016 +0100
Merge branch 'master' into lucene6
commit cd33921ac742ef9fb351012eff35f3c7dbda7264
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:58:37 2016 -0500
fix hunspell dictionary loading
commit c7fdbd837b01f7defe9cb1c24e2ec65604b0dc96
Merge: 4d4190f d8948ba
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:41:53 2016 -0500
Merge branch 'master' into lucene6
commit 4d4190fd82601aaafac6b8254ccb3edf218faa34
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:39:14 2016 -0500
remove nocommit
commit 77ca69e288b1a41aa9595c921ed166c272a00ea8
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:38:24 2016 -0500
clean up numericutils vs legacynumericutils
commit a466d696fbaad04b647ffbc0857a9439b583d0bf
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:32:43 2016 -0500
upgrade spatial4j
commit 5412c747a8cfe638bacedbc8233163cb75cc3dc5
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:19:28 2016 -0500
move to 6.0.0-snapshot-8eada27
commit b32bfe924626b87e540692375ece09e7c2edb189
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:30:09 2016 +0100
Fix some test compile errors.
commit 6ccde35e9840b03c68d1a2cd47c7923a06edf64a
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:25:51 2016 +0100
Current Lucene version is 6.0.0.
commit f62e1015d931b4cc04c778298a8fa1ba65e97ad9
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:20:48 2016 +0100
Fix compile errors in NGramTokenFilterFactory.
commit 6837c6eabf96075f743649da9b9b52dd39611c58
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:50:59 2016 +0100
Fix the edge ngram tokenizer/filter.
commit ccd7f070de5efcdfbeb34b9555c65c4990bf1ba6
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:42:44 2016 +0100
The missing value is now accessible through a getter.
commit bd3b77f9b28e5b05daa3d49683a9922a6baf2963
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:41:51 2016 +0100
Remove IndexCacheableQuery.
commit 05f3091c347aeae80eeb16349ac51d2b53cf86f7
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:39:43 2016 +0100
Fix compilation of function_score queries.
commit 81cda79a2431ac78f56b0cc5a5765387f662d801
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:35:02 2016 +0100
Fix compile errors in BlendedTermQuery.
commit 70994ce8dd1eca0b995870974a38e20f26f96a7b
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 23:33:03 2016 -0500
add bug ID
commit 29d4f1a71f36f646b5a6060bed3db019564a279d
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 21:02:32 2016 -0500
easy .store changes
commit 5e1a1e6fd665fa455e88d3a8987362fad5f44bb1
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 20:47:24 2016 -0500
cleanups mostly around boosting
commit 333a669ec6c305ada5645d13ed1da0e19ec1d053
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 20:27:56 2016 -0500
more simple fixes
commit bd5cd98a1e089c866b6b4a5e159400b110140ce6
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 19:49:38 2016 -0500
more easy fixes and removal of ancient cruft
commit a68f419ee47da5f9c9ce5b372f01d707e902474c
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 19:35:02 2016 -0500
cutover numerics
commit 4ca5dc1fa47dd5892db00899032133318fff3116
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:34:18 2016 -0500
fix some constants
commit 88710a17817086e477c6c021ec346d0534b7fb88
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:14:25 2016 -0500
Add spatial-extras jar as a core dependency
commit c8cd6726583e5ce3f546ed355d4eca037164a30d
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:03:33 2016 -0500
update to lucene 6 jars
The suffix TransportAction is misleading as it may make one think that it extends TransportAction, but it does not. This class makes accessible the different search operations exposed by SearchService through the transport layer. Also resolved a few compiler warnings in the class itself.
The big win here is catching tests that are incorrectly named and will
be skipped by gradle, providing a false sense of security.
The whole thing takes about 10 seconds on my Macbook Air, not counting
compiling the test classes, which seems worth it. Because this runs as
a gradle task with proper UP-TO-DATE handling it can be skipped if the
tests haven't been changed, which should save some time.
I chose to keep this in test:framework rather than a new subproject of
buildSrc because it relies on ESIntegTestCase and doesn't introduce any additional
dependencies.
TransportSearchTypeAction and subclasses are not actually transport actions, but just support classes useful for their inner async actions that can easily be extracted out so that we get rid of one too many level of abstraction.
Same pattern can be applied to TransportSearchScrollQueryAndFetchAction & TransportSearchScrollQueryThenFetchAction which we could remove in favour of keeping only their inner classes named SearchScrollQueryAndFetchAsyncAction and SearchScrollQueryThenFetchAsyncAction.
Remove org.elasticsearch.action.search.type package, collapsed remaining classes into existing org.elasticsearch.action.search package
Also make the ParsedScrollId, ScrollIdForNode and TransportSearchHelper classes and their methods package private.
Closes#11710
Removes all our logger wrappers except the wrapper for log4j1.2. If you
depend on Elasticsearch's jar in your application you'll need to declare
log4j 1.2 and/or some bridge to your favorite logger.
We did this to simplify our builds and code. No more commons-logging like
log implementation sniffing. No more optional dependency hacks in gradle.
We might one day want to use j.u.l instead of log4j. If we do want that
we can recover its wrapper by studying this commit. We didn't go directly
to j.u.l in this commit because that is a bigger change. Our logging
configuration is based on log4j1.2 and people are used to it. So it'd
be a much more fraught breaking change to do that conversion.
This commit works around an issue with hostname verification in HttpClient when using IPv6
addresses in URLs. When an IPv6 address is used in a URL it is typically wrapped with square
brackets. The hostname verifier for HttpClient does not recognize these as valid IPv6 addresses
and instead treats them as a DNS name. We wrap the strict hostname verifier for this version
of HttpClient and strip brackets if we need to.
The corresponding issue in HttpClient is https://issues.apache.org/jira/browse/HTTPCLIENT-1698
but the fix has not been released yet in a stable version.
The extra config copy task for integ tests was previously overwriting
the source and destination for each file to be copied, leaving only the
last. This fixes it to create sub copy specs (which is what was
originally intended) for each file.
Today we have the notion of a snapshot inside Version.java which makes
releasing complicated since to do a release Version.java must be changed.
This commit removes all notions of snapshot from the code and allows
switching between snapshot and release builds by specifying a system property on
the build. For instance running:
```
gradle run -Dbuild.snapshot=false
```
will build and package a release build while the default always
builds snapshots. Calls to the main rest action will still get the snapshot
information rendered out with the response.
The current logic for doing recovery from a source to a target shard is tightly coupled with the underlying network pipes. This change decouples the two, making it easier to add unit tests for shard recovery that don't involve the node and network environment.
On top of that, RecoveryTarget is renamed to RecoveryTargetService, leaving space to rename RecoveryStatus to RecoveryTarget (and thus avoid the confusion we have today with RecoveryState).
Correspondingly, RecoverySource is renamed to RecoverySourceService.
Closes#16605
This commit moves IndicesRequestCache into o.e.indices and makes all API in this
class package private. All references to SearchRequest, SearchContext etc. have been factored
out and relevant glue code has been added to IndicesService. The IndicesRequestCache is now a
simple class without any hard dependencies on ThreadPool, SearchService or IndexShard. This now
allows adding unit tests.
This commit also removes two settings, `indices.requests.cache.clean_interval` and `indices.fielddata.cache.clean_interval`,
in favor of `indices.cache.clean_interval` which cleans both caches.
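For reference, the consolidated setting mentioned above would be configured in `elasticsearch.yml` along these lines (the interval value is only illustrative):
```
indices.cache.clean_interval: 1m
```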
That is like some kind of cardinal sin or something, right?
We had two violations, though they weren't super likely to be keys in a hashmap
any time soon.
This commit forbids the usage of java.security.MessageDigest#clone() in
favor of getting a thread local instance using
org.elasticsearch.common.hash.MessageDigests. This is to prevent running
into java.lang.CloneNotSupportedExceptions for message digest providers
that do not support clone.
Closes#16543
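A minimal sketch of the preferred pattern; the exact factory method name on `MessageDigests` (here `sha256()`) is an assumption, not a confirmed signature:
```java
import java.security.MessageDigest;

import org.elasticsearch.common.hash.MessageDigests;

class DigestExample {
    static byte[] sha256(byte[] content) {
        // instead of cloning a shared MessageDigest (which can throw
        // CloneNotSupportedException for some providers), fetch the
        // thread-local instance from the helper and use it directly
        MessageDigest digest = MessageDigests.sha256(); // assumed factory method
        digest.reset();
        return digest.digest(content);
    }
}
```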
This is a simple port of the mapper attachment plugin to the ingest
functionality, no new features. The only option is to limit
the number of chars to prevent indexing of huge documents.
Fields can be selected in the processor as well.
Close#16303
For the ongoing search refactoring (#10217) the PhraseSuggestionBuilder
gets a way of parsing from xContent that will eventually replace the
current SuggestParseElement. This PR adds the fromXContent method
to the PhraseSuggestionBuilder and also adds parsing code for the
common suggestion parameters to SuggestionBuilder.
Also adding links from the Suggester implementations registered in the
Suggesters registry to the corresponding prototype that is going to
be used for parsing once the refactoring is done and we switch from
parsing on shard to parsing on coordinating node.
140 characters is our official line length from CONTRIBUTING.md. It is a
little longer than fits well in github but you can use a userstyle to fix
that. It is perfect for Eclipse and unified diffs but too wide for side by
side diffs. Regardless, it's the official limit we've had for years. We just
haven't enforced it automatically and we haven't enforced it using github
because line limits are hard to notice there.
This only hits about 2/3 of the java files - those that didn't have failures
already. If they did have a failure it suppresses them. We should pick files
off that list as time goes on.
For posterity I generated the suppressions by running checkstyle with
ignoreFailures = true, piping the output to a file, and then running it
through this:
perl -ne 'print if s{.*\[ant:checkstyle\] /.+/elasticsearch/}{ <suppress files="} && s{\.java.+}{\.java" checks="LineLength" />}' | uniq
Early versions of JDK 8 have a compiler bug that prevents assignment to
a definitely unassigned final variable inside the body of a lambda. This
commit adds an early-out to the build process which also gives a useful
error message.
Closes#16418
This commit removes and forbids the use of lowercase ells ('l') in long
literals because they are often hard to distinguish from the digit
representing one ('1').
Closes#16329
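The readability problem in a nutshell:
```java
class LongLiterals {
    static final long HARD_TO_READ = 2147483648l; // now forbidden: the trailing 'l' reads like a '1'
    static final long CLEAR = 2147483648L;        // uppercase 'L' is unambiguous
}
```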
This adds a small pile of checkstyle checks that all pass without any changes.
It moves some checks from ForbiddenPatterns that were java only into
checkstyle.
Don't duplicate forbiddenPatterns in checkstyle
* `test.cluster.node.seed` was only used in one place, where it wasn't adding value, and has now been replaced with a constant
* `tests.portsfile` is now an official setting and has been renamed to `node.portsfile`
This commit also converts `search.default_keep_alive` and `search.keep_alive_interval` to the new settings infrastructure.
This is consistent with what happens in elasticsearch.sh/.bat, and it means
tests will work even if there is a crazy "system" JNA installed on the machine.
The rest test framework, because it used to be tightly integrated with
ESIntegTestCase, currently expects the addresses for the test cluster to
be passed using the transport protocol port. However, it only uses this
to then find the http address.
This change makes ESRestTestCase extend from ESTestCase instead of
ESIntegTestCase, and changes the sysprop used to tests.rest.cluster,
which now takes the http address.
closes#15459
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
With this commit we do not check only if an endpoint is up but
we also check that the cluster status is green. Previously,
builds sporadically failed to pass this condition.
1. Uses forbidden patterns to prevent things from referencing
java.io.Serializable or from mentioning serialVersionUID.
2. Uses -Xlint:-serial so we don't have to hear from javac that we aren't
declaring serialVersionUID on any classes that we make that happen to extend
Serializable.
3. Remove Serializable and serialVersionUID declarations.
I didn't use forbidden apis because it doesn't look like it has a way to ban
explicitly implementing Serializable. If you try to ban Serializable with
forbidden apis you end up banning all Exceptions and all Strings.
Closes#15847
This commit removes and now forbids use of
org.apache.lucene.index.IndexWriter#isLocked as this method was
deprecated in LUCENE-6508. The deprecation is due to the fact that
checking if a lock is held before acquiring that lock is subject to a
time-of-check-to-time-of-use race condition. There were three uses of
IndexWriter#isLocked in the code base:
- a logging statement in o.e.i.e.InternalEngine where we are already in
an exceptional condition that the lock was held; in this case,
logging whether or not the directory is locked is superfluous
- in o.e.c.l.u.VersionsTests where we were verifying that a write lock
is released upon closing an IndexWriter; in this case, the check is
not needed as successfully closing an IndexWriter releases its
write lock
- in o.e.t.s.MockFSDirectoryService where we were verifying that a
directory is not write-locked before (implicitly) trying to obtain
such a write lock in org.apache.lucene.index.CheckIndex#<init> (this
is the exact type of a situation that is subject to a race
condition); in this case we can proceed by just (implicitly) trying
to obtain the write lock and failing if we encounter a
LockObtainFailedException
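A sketch of the replacement pattern for that last case, assuming Lucene's `Directory#obtainLock`, which throws a `LockObtainFailedException` if the lock is already held:
```java
import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.Lock;
import org.apache.lucene.store.LockObtainFailedException;

class WriteLockCheck {
    static void failIfWriteLocked(Directory directory) throws IOException {
        // no isLocked() check first: just try to obtain the write lock and
        // fail if somebody else already holds it
        try (Lock ignored = directory.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
            // lock obtained and released on close; the directory was free
        } catch (LockObtainFailedException e) {
            throw new IllegalStateException("directory is write-locked", e);
        }
    }
}
```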
This commit removes and now forbids all uses of
java.util.concurrent.ThreadLocalRandom across the codebase. The
underlying issue with ThreadLocalRandom is that it can not be
seeded. This means that if ThreadLocalRandom is used in production code,
then tests that cover any code path containing ThreadLocalRandom will be
prevented from being reproducible by use of ThreadLocalRandom. Instead,
using org.elasticsearch.common.random.Randomness#get will give
reproducible sources of randomness when running under tests and otherwise
still give an instance of ThreadLocalRandom when running as production
code.
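A minimal sketch of the replacement, assuming the class and package named above:
```java
import java.util.Random;

import org.elasticsearch.common.random.Randomness;

class Jitter {
    static long randomDelayMillis(long maxMillis) {
        // reproducible under tests, ThreadLocalRandom-backed in production
        Random random = Randomness.get();
        return (long) (random.nextDouble() * maxMillis);
    }
}
```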
This was originally intended to be general purpose in #15555, but
that still had problems. Instead, this change fixes the issue explicitly
for slf4j-api, since that is the problematic dep that is not actually
included in the distributions.
This change adds a Fixture class for use by gradle. A Fixture is an
external process that integration tests will use. It can be added as a
dependsOn for integTest, and will automatically be shutdown upon success
or failure, as well as relevant information dumped on failure. There is
also an example fixture in this change.
This new task allows setting code, similar to a doLast or doFirst,
except it is specifically geared at running ant (and thus called doAnt).
It adjusts the ant logging while running the ant so that the log
level/behavior can be tweaked, and automatically buffers based on gradle
logging level, and dumps the ant output upon failure.
This fixes the `lenient` parameter to be `missingClasses`. I will remove this boolean and we can handle them via the normal whitelist.
It also adds a check for sheisty classes (jar hell with the jdk).
This is inspired by the lucene "sheisty" classes check, but it has false positives. This check is more evil: it validates every class file against the extension classloader as a resource, to see if it exists there. If so: jar hell.
This jar hell is a problem for several reasons:
1. causes insanely-hard-to-debug problems (like bugs in forbidden-apis)
2. hides problems (like internal api access)
3. the code you think is executing, is not really executing
4. security permissions are not what you think they are
5. brings in unnecessary dependencies
6. it's jar hell
The more difficult problems are stuff like jython, where these classes are simply 'uberjarred' directly in, so you can't just fix them by removing a bogus dependency. And there is a legit reason for them to do that: they want to support java 1.4.
This change removes hardcoded ports from cluster formation. It passes
port 0 for http and transport, and then uses a special property to have
the node log the ports used for http and transport (just for tests).
This does not yet work for multi node tests. This brings us one step
closer to working with --parallel.
This commit removes and now forbids all uses of
Collections#shuffle(List) and Random#<init>() across the codebase. The
rationale for removing and forbidding these methods is to increase test
reproducibility. As these methods use non-reproducible seeds, production
code and tests that rely on these methods contribute to
non-reproducibility of tests.
Instead of Collections#shuffle(List) the method
Collections#shuffle(List, Random) can be used. All that is required then
is a reproducible source of randomness. Consequently, the utility class
Randomness has been added to assist in creating reproducible sources of
randomness.
Instead of Random#<init>(), Random#<init>(long) with a reproducible seed
or the aforementioned Randomness class can be used.
Closes#15287
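A small sketch of the replacement pattern described above (the seed handling here is illustrative; under the test framework the Randomness utility would supply the reproducible source):
```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

class Shuffling {
    static <T> void shuffle(List<T> items, long seed) {
        // forbidden: Collections.shuffle(items) and new Random(), both unseeded
        Random random = new Random(seed); // or obtain one from the Randomness utility
        Collections.shuffle(items, random);
    }
}
```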
Wildcard imports are terrible: they cause ambiguity in the code and
in many cases make it not compile with future versions of java.
We should simply fail the build on this; it is messiness caused by
messy Intellij configuration.
We have some tests which have crazy dependencies, like on other plugins.
This change adds a "messy-test" gradle plugin which can be used for qa
projects that these types of tests can run in. What this adds over
regular standalone tests is the plugin properties and metadata on the
classpath, so that the plugins are properly initialized.
We have eclipse settings added to all projects when running gradle
eclipse, but buildSrc is its own special project that is not
encapsulated by allprojects blocks. This adds eclipse settings to
buildSrc.
This is a relic from shading, where it was trickier to implement.
Third party signatures are already in e.g. the test list; there
is no reason to separate them out.
Instead, we could have a third party signatures file that does
something different... like keep tabs on third party libraries.
This commit removes and now forbids all uses of the type-unsafe empty
Collections fields Collections#EMPTY_LIST, Collections#EMPTY_MAP, and
Collections#EMPTY_SET. The type-safe methods Collections#emptyList,
Collections#emptyMap, and Collections#emptySet should be used instead.
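The type-safe replacements in a nutshell:
```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

class EmptyCollections {
    static final List<String> NO_NAMES = Collections.emptyList();      // not Collections.EMPTY_LIST
    static final Map<String, Long> NO_COUNTS = Collections.emptyMap(); // not Collections.EMPTY_MAP
    static final Set<Integer> NO_IDS = Collections.emptySet();         // not Collections.EMPTY_SET
}
```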
Typical failure:
```
:test-framework:dependencyLicenses (Thread[main,5,main]) started.
:test-framework:dependencyLicenses
Executing task ':test-framework:dependencyLicenses' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.
:test-framework:dependencyLicenses FAILED
:test-framework:dependencyLicenses (Thread[main,5,main]) completed. Took 0.023 secs.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':test-framework:dependencyLicenses'.
> Licences dir /mnt/jenkins/workspace/es_core_master_strong/test-framework/licenses does not exist, but there are dependencies
```
Related to #15168
This change attempts to simplify the gradle tasks for precommit. One
major part of that is using a "less groovy style", as well as being more
consistent about how tasks are created and where they are configured. It
also allows the things creating the tasks to set up inter task
dependencies, instead of assuming them (ie decoupling from tasks
elsewhere in the build).
We had increased this in maven, but it was lost in the transition to
gradle. This change adds it as a configurable setting in the logger for
randomized testing and bumps it to 25.
The current delay waits until later after normal configuration, but
still just after resolution (ie when paths would be known for
dependencies but not actual execution). This delays the checks further
to be done right before we actually execute the copy task.
closes#15068
Currently if running with --info, the command line for ES, along with
env vars, are logged before they may be amended to add debug options.
This moves the adding of JAVA_OPTS to before we print the command.
This adds the standalone tests so they will compile (and thus can be
modified with import completion) within IntelliJ. It also explicitly
sets up buildSrc as a module.
Note that this does *not* mean eg evil-tests can be run from intellij.
These are special tests that require special settings (eg disabling
security manager). They need to be run from the command line.
closes#15075
The Exec task outputs stdout/stderr to the standard streams by default.
However, to keep output short, we currently capture this, and only
output if the task failed. This change makes a small wrapper around Exec
to facilitate this behavior anywhere we use Exec.
This change delays the lookup for whatever is passed to extra config as
the source file to happen at execution time. This allows using eg a task
which generates a file, but maintains the checks that the file is not a
dir and that it exists at runtime.
This change allows copying extra files into the integ test cluster before
it runs. However, it explicitly forbids overwriting elasticsearch.yml,
since that is generated.
The current mechanism for adding plugins to the integTest cluster is to
have a FileCollection. This works well for the integTests for a single
plugin, which automatically adds itself to be installed. However, for qa
tests where many plugins may be installed, and from other projects, it
is cumbersome to add configurations, dependencies and dependsOn
statements over and over. This simplifies installing a plugin from
another project by moving this common setup into the cluster
configuration code.
The current wait condition for an integ test cluster being up is a
simple http get on the root path for elasticsearch. However, it is
useful to allow having arbitrary wait conditions. This change reworks
the wait task to first check that each node process started successfully
and has a socket up, followed by an arbitrary wait condition which
defaults to the current http get.
Also, cluster settings are allowed to be added, and overridden. Finally,
custom setup commands are made relative to the elasticsearch home dir
for each node.
This change removes the subdirectory support for extra-plugins, and
replaces it with an iteration of sibling directories with the prefix
"extra-plugin-".
This change adds back the last of the missing test options to the junit4
wrapper. leaveTemporary is important in that setting it to true (which
is how we had it set in maven) removes the warnings we currently get
about a leftover file that cannot be deleted (from jna).
This change adds back the multi node smoke test, as well as making the
cluster formation for any test allow multiple nodes. The main changes in
cluster formation are abstracting out the node specific configuration to
a helper struct, as well as making a single wait task that waits for all
nodes after their start tasks have run. The output on failure was also
improved to log which node's info is being printed.
* Forbid System.setProperties & co in forbidden APIs.
* Ban property write access at runtime with security manager.
Plugins that need to modify system properties will need to request permission in their plugin-security.policy
As part of the refactoring to allow --debug-jvm with gradle run, the way
java options are passed for integ tests was changed. However, we need to
make sure the jvm argline passed goes to ES_GC_OPTS because this
allows overriding things like which garbage collector we run, which we
do for testing from jenkins. This change adds back ES_GC_OPTS.
Because jar hell checks run during static initialization of tests, a
failure will result in all tests failing. However, only the first test
within each jvm shows the jarhell failure, and later tests get a class
not found exception for the class that failed to load when static init
failed.
This change adds a task to run as part of precommit, which checks the
test runtime classpath for jarhell.
closes#14721
If there is a failure in the elasticsearch start script, we currently
completely lose the failure. This is due to how spawning works with ant.
This change avoids the issue by introducing an intermediate script,
built dynamically before running ant exec, which runs elasticsearch and
redirects the output to a log file. This essentially causes us to run
elasticsearch in the foreground and capture the output, but at the same
time keep a running script which ant can pump streams from (which will
always be empty).
There were a number of subtle issues with the existing logging that
wraps events from Junit4 and ant. This change:
* Tweaks at what level certain events are logged
* Fixes -Dtests.output=always to force everything to be logged
* Makes -Dtests.class imply -Dtests.output=always
* Captures ant logging from junit4, so that direct jvm output will be
logged on failure when not using gradle info logging
When the build tries to start an elasticsearch instance but the start fails,
and it then fails to find the log file, we log a line about how we can't find
the file.
Sometimes when running elasticsearch, it is useful to attach a remote
debugger. This change adds a --debug-jvm option (the same name gradle
uses for its tests debug option), which adds java agent config for a
remote debugger. The configuration is set to have java suspend until the
remote debugger is attached.
closes#14772
If you build elasticsearch without a git repository it was creating a null
shortHash which was causing Elasticsearch not to be able to form transport
connections.
Closes#14748
This makes the rest tests **tons** more responsive.
Also stop test progress output from jumping by using formatting. The progress
now looks like:
Suites [004/549], Tests [0019|0|0], in 1.58s J2 completed UpdateNumberOfReplicasTests
The changes included are:
1. The suites, total tests, and JVM id are now padded based on their maximum
size. The maximum number of tests is just a guess because that data isn't
easily available when the suite starts. JVM id rarely matters because only
the most crazy individuals use more than 10 JVMs.
2. The suite information is reordered. Now it's runtime, jvm id, suite name,
and, optionally, method name. This reordering is useful because the things
that vary in length, the suite and method name, are on the right hand
side. This means that nothing jumps around during the test run.
We recently got a run command with gradle, but it is sometimes useful to
run ES with a specific plugin. This is a start, by making each esplugin
have a run command which installs the plugin and runs elasticsearch in
the foreground.
This makes forbidden patterns a little smarter, so it does not need to
run on every build. It works because the marker file timestamp will be
compared against the source files (which are the inputs to forbidden
patterns).
closes#14788
We currently enforce JAVA_HOME is set, but use gradle's java version to
compile, and JAVA_HOME version to test. This change makes the compile
and test both use JAVA_HOME, and clarifies which java version is used by
gradle and that used for the compile/test.
With gradle, deploying to maven means first generating poms. These are
filled in based on dependencies of the project. Recently, we started
disallowing transitive dependencies. However, this configuration does
not translate to maven poms because maven has no concept of excluding
all transitive dependencies.
This change adds exclusions for each of the transitive deps of each
dependency being added to the maven pom. It does so by creating dummy
configurations for each direct dependency (which does not have
transitive deps excluded), so that we can iterate the transitive deps
when building the pom.
Note, this should be simpler (just modifying maven's pom model), but
gradle tries to hide that from their api, causing us to need to
manipulate the xml directly.
https://discuss.gradle.org/t/modifying-maven-pom-generation-to-add-excludes/12744
Currently elasticsearch in integ tests is started using an ant task on
windows, or gradle exec on everything else. However, gradle exec has
some flaws, one being Ctrl-C does not run finalizedBy tasks, which means
interrupting integ tests will leak a jvm. This change makes all systems
use ant exec. One caveat is, if there is any output by the jvm, we lose
it in ant bit heaven. But this is no different than what we had with
gradle. In the future, we should look at using a separate thread to
pump streams from the elasticsearch process.
closes#14701
closes#14726
Squashed commit of the following:
commit 5b591e98570e3fa481b2816a44063b98bff36ddf
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:54:08 2015 -0500
add assumption for self-signing in PluginManagerTests
commit ed11e5371b6f71591dc41c6f60d033502cfcf029
Author: Robert Muir <rmuir@apache.org>
Date: Fri Nov 13 00:20:59 2015 -0500
show error output from integ test startup
commit d8b187a10e95d89a0e775333dcbe1aaa903fb376
Author: Robert Muir <rmuir@apache.org>
Date: Thu Nov 12 22:14:11 2015 -0500
fix gradle check under jigsaw
If we use JAVA_HOME consistently for tests, we can run tests with a
different version of java than gradle runs with. For example, this
enables running tests with jigsaw, but building with java 8. The only
caveat is intellij does not set JAVA_HOME. This change enforces
JAVA_HOME is set, but ignores for intellij.
This moves the min java version used by elasticsearch to one place, a
constant in BuildPlugin. For me on java 9, this fixed my jar to have the
correct target/source versions.
closes#14702
Transitive dependencies can be confusing and hard to deal with when
conflicts arise between them. This change removes transitive
dependencies from elasticsearch, and forces any dependency conflicts to
be resolved manually, instead of automatically by gradle.
closes#14627
In gradle 2.7 (or groovy 2.3.10, not sure which), there appears to be a
bug on linux where using a fully qualified class (without an import
statement) does not work. This change forces gradle 2.8 or above. It
also moves the logic around a little for the version check so the build
info is printed before checks against that info.
Some dependencies must be specified in a couple places in the build.
e.g. randomized runner is specified both in buildSrc (for the gradle
wrapper plugin), as well as in the test-framework.
This change creates buildSrc/versions.properties which acts similar to
the set of shared version properties we used to have in the maven parent
pom.
The esplugin gradle plugin automatically adds the plugin being built to
the integTest cluster. However, there were two issues with this. First
was a bug in the name, which should have been the configured
esplugin.name instead of the project name. Second, the files
configuration was overcomplicated (trying to use the groovy spreader
operator after delaying calls to singleFile). Instead, we can just pass
the file collections (which will just be a single file at execution
time).
This commit addresses an issue in getting a path to the jps bin. The
solution is to get the path to the JDK relative to the JVM running
Gradle.
Closes#14614
The latest version of Lucene deprecated Query#setBoost and Query#getBoost, which makes queries effectively immutable. Those methods need to be replaced with a `BoostQuery` that wraps any query that needs boosting.
This commit replaces usages of setBoost with BoostQuery and adds it to forbidden-apis for prod code.
Usages of `getBoost` are only partially removed, as some will have to stay for backwards compatibility.
Closes#14264
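A sketch of the migration in plain Lucene terms (the term query here is just an example):
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class Boosting {
    static Query boostedTermQuery(String field, String value, float boost) {
        Query query = new TermQuery(new Term(field, value));
        // previously: query.setBoost(boost) -- deprecated and mutates the query
        return new BoostQuery(query, boost);
    }
}
```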
run.sh and run.bat were calling out to the old maven build system.
This is no longer in place, so we've created new gradle tasks to
start an elasticsearch node from the current codebase.
fixed#14423
The plugin name currently defaults to the gradle project name. But the
gradle project name for standalone repo (like an external plugin would
be) defaults to the directory name of the repo. This is trappy, since it
depends on how the repo was checked out.
This change enforces the plugin name is always set.
closes#14603
This makes it a groovy project that works in eclipse.
You will have to install a plugin for groovy language support
(I used a snapshot build from https://github.com/groovy/groovy-eclipse/wiki)
Many other improvements:
* Use spaces in ES path
* Use space in path for plugin file installation
* Use a different cwd than ES home
* Use jps to ensure process being stopped is actually elasticsearch
* Stop ES if pid file already exists
* Delete pid file when successfully killed
Also, refactored the cluster formation code to be a little more organized.
closes#14464
This gets the tar and tar_plugins tests working in gradle. It does so by
adding a subproject, qa/vagrant, which adds the following tasks:
Verification
------------
checkPackages - Check the packages against a representative sample of the
linux distributions we have in our Vagrantfile
checkPackagesAllDistros - Check the packages against all the linux
distributions we have in our Vagrantfile
Package Verification
--------------------
checkCentos6 - Run packaging tests against centos-6
checkCentos7 - Run packaging tests against centos-7
checkDebian8 - Run packaging tests against debian-8
checkFedora22 - Run packaging tests against fedora-22
checkOel7 - Run packaging tests against oel-7
checkOpensuse13 - Run packaging tests against opensuse-13
checkSles12 - Run packaging tests against sles-12
checkUbuntu1204 - Run packaging tests against ubuntu-1204
checkUbuntu1404 - Run packaging tests against ubuntu-1404
checkUbuntu1504 - Run packaging tests against ubuntu-1504
Vagrant
-------
smokeTestCentos6 - Smoke test the centos-6 VM
smokeTestCentos7 - Smoke test the centos-7 VM
smokeTestDebian8 - Smoke test the debian-8 VM
smokeTestFedora22 - Smoke test the fedora-22 VM
smokeTestOel7 - Smoke test the oel-7 VM
smokeTestOpensuse13 - Smoke test the opensuse-13 VM
smokeTestSles12 - Smoke test the sles-12 VM
smokeTestUbuntu1204 - Smoke test the ubuntu-1204 VM
smokeTestUbuntu1404 - Smoke test the ubuntu-1404 VM
smokeTestUbuntu1504 - Smoke test the ubuntu-1504 VM
vagrantHaltCentos6 - Shutdown the vagrant VM running centos-6
vagrantHaltCentos7 - Shutdown the vagrant VM running centos-7
vagrantHaltDebian8 - Shutdown the vagrant VM running debian-8
vagrantHaltFedora22 - Shutdown the vagrant VM running fedora-22
vagrantHaltOel7 - Shutdown the vagrant VM running oel-7
vagrantHaltOpensuse13 - Shutdown the vagrant VM running opensuse-13
vagrantHaltSles12 - Shutdown the vagrant VM running sles-12
vagrantHaltUbuntu1204 - Shutdown the vagrant VM running ubuntu-1204
vagrantHaltUbuntu1404 - Shutdown the vagrant VM running ubuntu-1404
vagrantHaltUbuntu1504 - Shutdown the vagrant VM running ubuntu-1504
vagrantSmokeTest - Smoke test some representative distros from the Vagrantfile
vagrantSmokeTestAllDistros - Smoke test all distros from the Vagrantfile
vagrantUpCentos6 - Startup a vagrant VM running centos-6
vagrantUpCentos7 - Startup a vagrant VM running centos-7
vagrantUpDebian8 - Startup a vagrant VM running debian-8
vagrantUpFedora22 - Startup a vagrant VM running fedora-22
vagrantUpOel7 - Startup a vagrant VM running oel-7
vagrantUpOpensuse13 - Startup a vagrant VM running opensuse-13
vagrantUpSles12 - Startup a vagrant VM running sles-12
vagrantUpUbuntu1204 - Startup a vagrant VM running ubuntu-1204
vagrantUpUbuntu1404 - Startup a vagrant VM running ubuntu-1404
vagrantUpUbuntu1504 - Startup a vagrant VM running ubuntu-1504
It does not make the "check" task depend on "checkPackages" so running the
vagrant tests is still optional. They are slow and depend on vagrant and
virtualbox.
The Package Verification tasks are useful for testing individual distros.
The Vagrant tasks are listed in `gradle tasks` primarily for discoverability.
Gradle defaults to tgz extension when tar is compressed. This changes
the tar distribution back to tar.gz. Note that this also means the maven
packaging type is now tar.gz.
The gradle task to generate plugin properties files now is a simple copy
with expansion (ie maven filtering). It also no longer depends on
compiling.
closes#14450
This adds a generated-resources dir that the plugin properties are
generated into. This must be outside of the build dir, since intellij
has build as "excluded".
closes#14392
When generating a try-catch block, the eclipse default is something like this:
```
try {
something();
} catch (Exception e) {
// TODO: auto-generated stub
e.printStackTrace();
}
```
which is terrible, so the ES eclipse settings change this to rethrow a RuntimeException instead:
```
try {
something();
} catch (Exception e) {
throw new RuntimeException();
}
```
Unfortunately, this loses the original exception entirely; instead it should be:
```
try {
something();
} catch (Exception e) {
throw new RuntimeException(e);
}
```
The RR gradle plugin is at
https://github.com/randomizedtesting/gradle-randomized-testing-plugin.
However, we currently have a copy of this, since the plugin is still in
heavy development. This change moves the files around so they can be
copied directly from the elasticsearch fork to that repo, for ease of
syncing.