EsExecutorsTests had a test that was failing spuriously due to threadpools
being threadpools. This weakens the assertions that the test makes to what
should always be true.
It might turn out to be useful to have the actual commit hash of the version we are
looking for if our download manager can just redirect to the right staging repository.
This is purely for maintenance reasons, since it is easier to see whether we can drop
certain staging URLs if we have the version next to the hash.
I also removed the gpg passphrase from the example URL, since it's better to get prompted for it.
This is caused by sending the same file to the chunk handler with offset
`0`, which in turn opens a new output stream and waits for bytes. But the next round
will send 0 bytes again with offset 0. This commit adds some checks / validators that those
settings are positive byte values and fixes the RecoveryStatus to throw an IAE if the same file
is opened twice.
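A minimal sketch of that second guard, with invented names (the real RecoveryStatus tracks Lucene index outputs, not plain streams):
```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: remember which files already have an open output so
// a second open of the same file fails fast instead of leaving a fresh
// stream waiting for bytes that will never arrive.
class RecoveryOutputs {
    private final Map<String, OutputStream> openOutputs = new ConcurrentHashMap<>();

    OutputStream open(String fileName, OutputStream out) {
        if (openOutputs.putIfAbsent(fileName, out) != null) {
            throw new IllegalArgumentException("output for file [" + fileName + "] was already opened");
        }
        return out;
    }

    void close(String fileName) throws IOException {
        OutputStream out = openOutputs.remove(fileName);
        if (out != null) {
            out.close();
        }
    }
}
```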
What is the problem we are trying to solve?
===========================================
When we are doing aggregations against a field name as shown in
https://github.com/HarishAtGitHub/elasticsearch-tester/blob/master/12135.py#L37-L46
```
search = {
    "aggs": {
        "NAME": {
            "terms": {
                "field": "ip_str",
                "size": 10
            }
        }
    }
}
```
and when the field "ip_str" has values of different types in different indices,
say one is of StringTerms type and the other is IP (LongTerms type), then
the aggregation fails as the types do not match (incompatible).
The failure throws a class cast exception as follows:
```
{
    "error": {
        "root_cause": [],
        "type": "reduce_search_phase_exception",
        "reason": "[reduce] ",
        "phase": "query",
        "grouped": true,
        "failed_shards": [],
        "caused_by": {
            "type": "class_cast_exception",
            "reason": "org.elasticsearch.search.aggregations.bucket.terms.LongTerms$Bucket cannot be cast to org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket"
        }
    },
    "status": 503
}
```
which is hard to understand. The user cannot infer anything about the cause of the problem, or what to do about it, from seeing the
class cast exception.
What can be the possible solution?
===================================
Make the exception more readable by showing the user the root cause of the problem, so that they can
understand which area actually caused the problem and can take the necessary steps.
Code Analysis
=============
Debugging code shows that:
The query /{indices}/_search?search_type=count involves two phases:
1) search phase
***************
searchService.sendExecuteQuery(...) [Ref: TransportSearchCountAction]
What happens here?
Phase 1, the search phase, goes through without error.
In this phase the shards for the given indices are collected, the search is executed on all of them asynchronously,
and the results are finally collected in the variable "firstResults" and handed to the merge phase.
[Flow: .... -> TransportSearchTypeAction -> method performFirstPhase]
2) merge phase
**************
searchPhaseController.merge(...firstResults...) [Ref: TransportSearchCountAction]
What happens here?
The "firstResults" QuerySearchResults are now to be aggregated and combined.
[Flow: SearchPhaseController.merge(...) -> ..... -> InternalTerms.doReduce(...)]
Phase 1, the search phase, completes without error.
The problem comes in phase 2, the merge phase.
Now the individual term buckets are available.
As per the test case, there are two indices, cast and cast2, so by default 10 shards.
cast has ip_str of type StringTerms
cast2 has ip_str of type ip, which is actually LongTerms
so here two types of Buckets exist: StringTerms$Bucket and LongTerms$Bucket.
Now the aggregation is to be put inside the BucketPriorityQueue (size 2: as out of the 10 shards, 2 have hits) finally.
(docs of PriorityQueue: https://lucene.apache.org/core/4_4_0/core/org/apache/lucene/util/PriorityQueue.html#insertWithOverflow(T))
Now first the LongTerms$Bucket is put inside.
Then the StringTerms$Bucket is to be put in.
This is the area where the exception is thrown. When adding the StringTerms$Bucket it has to
go through the code "lessThan(element, heap[1])"
which finally calls
------------------------------------------------------------------------
| StringTerms$Bucket.compareTerms(other) <---------- Area of exception |
------------------------------------------------------------------------
where, when comparing one to the other, a type cast is done and it fails as StringTerms$Bucket and LongTerms$Bucket are
incompatible.
Approach to solve:
==================
The best way is to make the user understand that the problem happens when reducing/merging/aggregating the buckets which came as a result of
querying different shards, so that they can infer that the problem is because the values of the fields are of different types.
Such a message is user friendly and much better than the indecipherable ClassCastException.
The only place to infer correctly that the aggregation has failed is the place where the aggregations take place:
at InternalTerms.java -> (BucketPriorityQueue)ordered.insertWithOverflow(b);
Here I can throw an AggregationExecutionException saying the failure is because the buckets are of different
types.
But when can I infer at this point that the failure is due to a mismatch of bucket types?
That is possible only if, at this point, it is informed that the problem which occurred deep inside
is due to buckets that were incomparable.
From just a ClassCastException we cannot make such a pointed, exact inference, because
a class cast exception can be due to a number of scenarios and arise at a number of places.
So unless we inform InternalTerms of the exact problem, it will not be able to infer properly.
Therefore, translate the ClassCastException at the compareTerms function itself into an IncomparableTermBucketsTypeException.
This is the best place to interpret the ClassCastException, as this is the place which generated the exception;
the best inference about an exception can be made at its source/origin.
Propagating the IncomparableTermBucketsTypeException to InternalTerms lets it infer and conclude why the
aggregation failed and give the best information to the user.
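A sketch of that translation with illustrative stand-in classes (the real code lives in the Terms bucket implementations and InternalTerms.doReduce):
```java
// Stand-in classes, not the actual Elasticsearch types.
class IncomparableTermBucketsTypeException extends RuntimeException {
    IncomparableTermBucketsTypeException(String message, Throwable cause) {
        super(message, cause);
    }
}

abstract class TermBucket implements Comparable<TermBucket> { }

class StringTermBucket extends TermBucket {
    final String term;

    StringTermBucket(String term) {
        this.term = term;
    }

    @Override
    public int compareTo(TermBucket other) {
        try {
            // this cast is exactly where mixed bucket types blow up
            return term.compareTo(((StringTermBucket) other).term);
        } catch (ClassCastException e) {
            // translate at the origin, where the cause is unambiguous
            throw new IncomparableTermBucketsTypeException(
                "term buckets of different types cannot be compared", e);
        }
    }
}
```
InternalTerms.doReduce can then catch this typed exception around ordered.insertWithOverflow(b) and rethrow an AggregationExecutionException with a user-readable message.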
Closes #12821
The IndicesModule was made up of two submodules, one which
handled registering queries, and the other for registering
hunspell dictionaries. This change moves those into
IndicesModule. It also adds a new extension point type,
InstanceMap. This is simply a Map<K,V>, where K and V are
actual objects, not classes like most other extension points.
I also added a test method to help test instance map extensions.
This was particularly painful because of how guice binds the key
and value as separate bindings, and then reconstitutes them
into a Map at injection time. In order to gain access to the
object which links the key and value, I had to tweak our
guice copy to not use an anonymous inner class for the Provider.
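For illustration only, the shape of an instance-map extension point from a consumer's point of view reduces to something like this (names invented; the real binding goes through guice as described above):
```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Unlike class-based extension points, both the key and the value here
// are pre-built objects; nothing is instantiated by the injector later.
class InstanceMapSketch<K, V> {
    private final Map<K, V> instances = new HashMap<>();

    void registerExtension(K key, V value) {
        if (instances.putIfAbsent(key, value) != null) {
            throw new IllegalArgumentException("extension for key [" + key + "] is already registered");
        }
    }

    Map<K, V> asMap() {
        return Collections.unmodifiableMap(instances);
    }
}
```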
Note that I also renamed the existing extension point types, since
they were very redundant. For example, ExtensionPoint.MapExtensionPoint
is now ExtensionPoint.ClassMap.
See #12783.
Removed the shard variable dependency from processFirstPhaseResults, as shard is no longer
needed here. It only deals with the results obtained from the synchronous search on each shard.
The ClusterModule contained a couple of submodules. This moves the
functionality from those modules into ClusterModule. Two of those
had to do with DynamicSettings. This change also cleans up
how DynamicSettings are built, and enforces that they are added, with
validators, in ClusterModule.
See #12783.
Previously settings specified in index templates were not validated upon
template creation. Creating an index from an index template with invalid
settings could lead to cluster stability issues because creation of such
indexes would bypass index settings validation.
This commit adds validation of settings specified in index templates at
template creation time. This works by routing the index template
settings through the index settings validation mechanism.
Closes #12865
Users with IPv6 preferred over IPv4 may have `localhost` resolve to
`::1` instead of `127.0.0.1`, so we should be explicit so they don't run
into issues.
Today on a failure the reproduce line printed out by the test framework
will build all projects and might fail if the test class is not present.
This commit adds a reactor filter to the reproduction line to ensure
unrelated projects are skipped.
Closes #12838
In order to match the paths of official plugins, we need to fix
the broken test by removing the elasticsearch prefix from the official
plugin names before testing.
This PR adds a timezone field to ValueParser.DateMath that is
set to UTC by default but can be set using the existing constructors.
This makes it possible for the extended bounds setting in DateHistogram
to also use date math expressions that e.g. round by day and apply
this rounding in the time zone specified in the date histogram
aggregation request.
Closes #12278
In order to create releases without actually changing the version
as part of a commit, we also need to reflect the path of the potentially
changing S3 repo.
This method has multiple modes of resolving config files by
first looking in the config directory, then on the classpath,
and finally by prefixing with "config/" on the classpath.
Most of the places taking advantage of this were tests, so they
did not have to set up a real home dir with config. The only place
that was really relying on it was the code which loads names.txt
to randomly choose a node name.
This change fixes tests to set up fake home dirs with their config
files. It also makes the logic for finding names.txt explicit:
look in config dir, and if it doesn't exist, load /config/names.txt
from the classpath.
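A sketch of that explicit lookup, assuming a hypothetical helper name:
```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

class NodeNames {
    // Look in the config dir first; if names.txt doesn't exist there,
    // fall back to the copy bundled on the classpath.
    static InputStream openNamesFile(Path configDir) throws IOException {
        Path namesFile = configDir.resolve("names.txt");
        if (Files.exists(namesFile)) {
            return Files.newInputStream(namesFile);
        }
        return NodeNames.class.getResourceAsStream("/config/names.txt");
    }
}
```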
TermsQueryParser still parses those values although they are deprecated. These need to be present in the Java API as well to get ready for the query refactoring, where the builders are the intermediate query format that we parse our JSON queries into. Whatever the parser supports needs to be supported by the builder as well.
Closes #12870
TermsQueryParser doesn't support the cache field anymore, so if it gets set through the Java API, the subsequent parsing of that query will throw an error.
Relates to #12870
Refactored a part out of the release script, so the user can
change the version locally, as well as move the documentation
and change Version.java.
The background of this change is to have a very simple release
process that puts stuff into a staging environment, so the beta
release can be tested, before it is officially released.
This means the build_release script can be removed soon.
Settings currently has a classloader member, which any user (plugin
or core ES code) can access to load classes/resources. This is extremely
error prone as setting the classloader on the Settings instance is a
public method. Furthermore, it is not really necessary. Classes that
need resources should load resources using normal means
(getClass().getResourceAsStream). Those that need classes
should use Class.forName, which will load the class with the
same classloader as the calling class. This means, in the few
places where classes are loaded by string name, they will use
the appropriate loader: either the default classloader which loads
core ES code, or a child classloader for each plugin.
This change removes the classloader member from Settings, as
well as other classloader related uses (except for a handful
of cases which must use a classloader, at least for now).
We previously used something like Class.forName to load mock classes,
where tests would set a setting that was *supposed* to only be used by
tests. This change makes these impls package private so that only tests
can change out these implementations, through test plugins.
closes #12784
No need to cache this query since it's cheap and not reused.
If we cache it, it can cause assertions to be tripped since this
method is executed during postRecovery phase and might still run while
nodes are shutdown in tests.
This commit tries to add some infrastructure to streamline how extension
points should be structured. It's a simple approach with 4 implementations
for `highlighter`, `suggester`, `allocation_decider` and `shards_allocator`.
It simplifies adding new extension points and forces registering classes instead
of strings.
Build fails with maven 3.3.1 and 3.3.3. To reproduce, install one of the 3.3.x versions of maven and run `mvn clean verify` in the root directory of the project. The build will fail in the QA: Smoke Test Shaded Jar module with the following error:
```
Started J0 PID(99979@flea.local).
Suite: org.elasticsearch.shaded.test.ShadedIT
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testJodaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.06s | ShadedIT.testJodaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:3A9404F1F69FD80]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testGuavaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testGuavaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:C2502FD54D83433D]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testjsr166eIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testjsr166eIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:35593286F4269392]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /Users/Shared/Jenkins/Home/workspace/elasticsearch-master/qa/smoke-test-shaded/target/J0/temp/org.elasticsearch.shaded.test.ShadedIT_2F4D23A7462CF921-001
2> NOTE: test params are: codec=CheapBastard, sim=DefaultSimilarity, locale=, timezone=Asia/Baku
2> NOTE: Mac OS X 10.10.4 x86_64/Oracle Corporation 1.8.0_25 (64-bit)/cpus=8,threads=1,free=482137936,total=514850816
2> NOTE: All tests run in this JVM: [ShadedIT]
Completed [1/1] in 6.61s, 5 tests, 3 failures <<< FAILURES!
Tests with failures:
- org.elasticsearch.shaded.test.ShadedIT.testJodaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testGuavaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testjsr166eIsNotOnTheCP
```
Please note that the build doesn't fail with maven 3.2.x, and it doesn't fail if the mvn command is executed inside the qa/smoke-test-shaded directory. The error above can only be observed when the build is started from the root directory.
The reason is the shaded version, which depends on elasticsearch core.
When Maven builds only this module, elasticsearch core is not added to the dependency tree:
```sh
mvn dependency:tree -pl :smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
But if the shaded plugin is involved during the build, it modifies the `projectArtifactMap`:
```sh
mvn dependency:tree -pl org.elasticsearch.distribution.shaded:elasticsearch,:smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | \- org.elasticsearch:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] | +- com.google.guava:guava:jar:18.0:compile
[INFO] | +- com.carrotsearch:hppc:jar:0.7.1:compile
[INFO] | +- joda-time:joda-time:jar:2.8:compile
[INFO] | +- org.joda:joda-convert:jar:1.2:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.5.3:compile
[INFO] | | \- org.yaml:snakeyaml:jar:1.12:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.5.3:compile
[INFO] | +- io.netty:netty:jar:3.10.3.Final:compile
[INFO] | +- com.ning:compress-lzf:jar:1.0.2:compile
[INFO] | +- com.tdunning:t-digest:jar:3.0:compile
[INFO] | +- org.hdrhistogram:HdrHistogram:jar:2.1.6:compile
[INFO] | +- org.apache.commons:commons-lang3:jar:3.3.2:compile
[INFO] | +- commons-cli:commons-cli:jar:1.3.1:compile
[INFO] | \- com.twitter:jsr166e:jar:1.1.0:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
A fix could consist of fixing something on the Maven side. Probably something changed in a recent version and introduced this "issue", but it might not really be an issue, more of a fix.
There are two workarounds:
1) manually exclude elasticsearch core from the shaded version in the smoke-test-shaded module and manually add each lucene lib needed by elasticsearch
2) add a new `elasticsearch-lucene` (lucene) POM module which simply declares all needed lucene libs in subprojects (such as the smoke tester one).
I chose the latter.
Closes #12791.
This allows `path.shared_data` to be added to the security manager while
still allowing a custom `data_path` for indices using shadow replicas.
For example, configuring `path.shared_data: /tmp/foo`, then creating an
index with:
```
POST /myindex
{
"index": {
"number_of_shards": 1,
"number_of_replicas": 1,
"data_path": "/tmp/foo/bar/baz",
"shadow_replicas": true
}
}
```
The index will then reside in `/tmp/foo/bar/baz`.
`path.shared_data` defaults to `null` if not specified.
Resolves #12714
Relates to #11065
The upper bound must be 0-based since we are corrupting an offset into
the file; otherwise it can be the 1-based length of the file, which results in an
uncorrupted file.
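In test terms the distinction is an exclusive, 0-based bound when picking the position to corrupt (a sketch; the helper name is illustrative):
```java
import java.util.Random;

class CorruptionUtils {
    // Offsets into a file are 0-based, so the last valid position is
    // fileLength - 1. nextInt's bound is exclusive, which yields exactly
    // [0, fileLength - 1]; an inclusive bound of fileLength could pick a
    // position past the end and corrupt nothing.
    static long pickCorruptionOffset(Random random, int fileLength) {
        return random.nextInt(fileLength);
    }
}
```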
There were two submodules of AllocationModule. This combines them into a
single module, adds a base test case for module testing, and adds back
the ability for plugins to provide custom ShardsAllocators.
closes #12781
Instead of logging the entire `_source` in the indexing slowlog we log by
default just the first 1000 characters - this is controlled by the
`index.indexing.slowlog.source` settings and can be set to `true` to log the
whole `_source`, `false` to log none of it, and a number to log at most that
many characters.
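The setting's semantics boil down to this sketch (not the actual implementation):
```java
class SlowLogSource {
    // "true" -> log the whole _source, "false" -> log none of it,
    // a number -> log at most that many characters (default 1000).
    static String formatSource(String source, String setting) {
        if ("false".equals(setting)) {
            return "";
        }
        if ("true".equals(setting)) {
            return source;
        }
        int maxChars = Integer.parseInt(setting);
        return source.length() <= maxChars ? source : source.substring(0, maxChars);
    }
}
```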
Closes #4485
Removed the attempted parsing of `field` rather than `fields`, and the attempted support of the following syntax:
```
{
"simple_query_string": {
"body" : {
"query": "foo bar"
}
}
}
```
Both of these syntaxes were undocumented, untested and not working.
Added a test for the case when `fields` is not specified; the default field is then queried.
Closes #12794
Closes #12798
This commit includes the stacktrace in the structured exception rendering
to ensure we can find the reason / cause for certain things more quickly. This
is enabled by default and is very verbose. Users can disable it via `rest.exception.stacktrace.skip = true|false`.
Closes #12239
We have a way to allow a plugin to specify additional settings. These
settings should only be applied if they do not already exist in the
node settings.
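Conceptually the precedence rule is just this (a sketch over plain maps, not the actual Settings API):
```java
import java.util.HashMap;
import java.util.Map;

class PluginSettings {
    // Plugin-provided settings act as defaults: anything already present
    // in the node settings wins over what the plugin supplies.
    static Map<String, String> apply(Map<String, String> nodeSettings,
                                     Map<String, String> pluginSettings) {
        Map<String, String> merged = new HashMap<>(pluginSettings);
        merged.putAll(nodeSettings);
        return merged;
    }
}
```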
The equalTo logic of ShardRouting doesn't take version and unassignedInfo into account when comparing shard routings. Since the cluster state diff relies on equals to detect the changes that need to be sent to the rest of the cluster, this omission might lead to changes not being properly propagated to other nodes in the cluster.
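The gist of the fix, with illustrative fields:
```java
import java.util.Objects;

// Sketch: equality used for cluster state diffing has to cover every
// field that can change, otherwise a change is silently never shipped.
class ShardRoutingSketch {
    String shardId;
    long version;          // previously ignored by equals
    Object unassignedInfo; // previously ignored by equals

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ShardRoutingSketch)) return false;
        ShardRoutingSketch that = (ShardRoutingSketch) o;
        return version == that.version
            && Objects.equals(unassignedInfo, that.unassignedInfo)
            && Objects.equals(shardId, that.shardId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(shardId, version, unassignedInfo);
    }
}
```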
Closes #12387
In order to test the way plugins with configuration are installed and removed
we need a plugin with configuration in the repository. The simplest way to
get one is to make an "example" plugin.
In the process of making this example it became apparent that cat actions were
difficult to create outside of the org.elasticsearch.rest.action.cat package
because key methods in AbstractCatAction were package private. This makes them
protected and uses them to create the example configured plugin.
Relates to #12717 but is only one step of many to close it.
There was a method that loaded classes by string name using
the default classloader. It had all kinds of leniency in how the
classname was found, and it simply cannot work with plugins having isolated
classloaders.
This change removes that method. Some of the uses of it were for custom
extension points, like custom repository or discovery types. A lot were
just there to plugin mock implementations for tests. For the settings
that were legitimate, all now support plugins adding the given setting
via onModule. For those that were specific to tests for mocks, they now
use Classes.loadClass (a helper around Class.forName). This is a
temporary measure until (in a future PR) tests can change the
implementation via package private statics.
I also removed a number of unnecessary intermediate modules, added a
"jvm-example" plugin that can be filled in in the future as a smoke test
for breaking plugins, and gave some documentation to the "spawn" modules
interface.
closes #12643
closes #12656
This was a straight up bug found in #12753. If only one type existed,
the compatibility check for a new type was not strict, so changes to
an updateable setting like search_analyzer got through (but only
partially). This change fixes the check and adds tests (which were
previously a TODO).
This also fixes a bug in dynamic field creation which wouldn't copy
fielddata settings when duplicating a pre-existing field with the
same name.
closes #12753
This PR prevents setting the timezone on ValueFormatter.DateTime. Instead
the timezone information needed when printing buckets' key-as-string
information is provided at construction time of the ValueFormatter, making
sure we don't overwrite any constants. This, however, made it necessary to
be able to access the timezone information when resolving the format
in ValueSourceParser, so the `time_zone` parameter is now parsed alongside
the `format` parameter in ValueSourceParser rather than in DateHistogramParser.
Closes #12531
Today we accept store types like `nio_fs`, `nioFs`, `niofs` etc.
This commit removes the leniency and only accepts plain values without underscores
(see the sketch after the list below). Yet, this commit also has a BWC layer that upgrades existing indices to the new version.
Affected enums are:
* `nio_fs`
* `mmap_fs`
* `simple_fs`
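A sketch of the intended behavior (the real upgrade happens in the index settings BWC path; names here are illustrative):
```java
class StoreTypeCompat {
    // New indices must use the canonical spelling; values written by
    // older indices are silently upgraded instead of rejected.
    static String canonicalStoreType(String storeType, boolean legacyIndex) {
        String canonical;
        switch (storeType) {
            case "nio_fs": case "nioFs": canonical = "niofs"; break;
            case "mmap_fs": case "mmapFs": canonical = "mmapfs"; break;
            case "simple_fs": case "simpleFs": canonical = "simplefs"; break;
            default: return storeType; // already canonical, validated elsewhere
        }
        if (!legacyIndex) {
            throw new IllegalArgumentException(
                "store type [" + storeType + "] is no longer supported, use [" + canonical + "]");
        }
        return canonical;
    }
}
```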
If a URL specified with --url on the command line cannot be reached,
the plugin manager tries URLs at download.elastic.co automatically.
This can lead to downloading the wrong plugins and also potentially exposes
the name of an internal plugin to the download service.
This fix ensures that the plugin manager simply aborts if the specified
URL cannot be downloaded.
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes #12752
Closes #12754
This commit adds basic support to track the number of times scripts are
compiled and compiled scripts are evicted from the script cache. These
statistics are tracked at the node level.
Closes #12673
ClusterState has 3 different methods to access RoutingNodes:
* #routingNodes() - mutable version
* #getRoutingNodes() - delegates to #getReadOnlyRoutingNodes()
* #getReadOnlyRoutingNodes() - its docs say `NOTE, the routing nodes are mutable, use them just for read operations`
The latter also reuses the instance that it creates. This has several problems besides the obvious:
* creating RoutingNodes is costly and should be done only if really needed, i.e. use the cached version as much as possible
* the common case is read-only, but all kinds of code paths end up with the mutable version
* the mutable version is only needed in one place and should only be used in the AllocationService
* RoutingNodes can freeze its ShardRoutings but doesn't
* RoutingNodes should check whether it's read-only or not
This commit fixes all these problems and special-cases the mutable use, such that all access via ClusterState#getRoutingNodes()
is read-only and RoutingNodes enforces this.
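The enforcement amounts to a read-only flag checked by every mutating method (a sketch, not the actual class):
```java
class RoutingNodesSketch {
    private final boolean readOnly;

    RoutingNodesSketch(boolean readOnly) {
        this.readOnly = readOnly;
    }

    private void ensureMutable() {
        if (readOnly) {
            throw new IllegalStateException("can't modify RoutingNodes - it is read-only");
        }
    }

    // every mutating operation funnels through ensureMutable()
    void assign(String shardId, String nodeId) {
        ensureMutable();
        // ... the actual mutation would happen here
    }
}
```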
This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
The `multi_match` query groups terms that have the same analyzer together and
then applies the boost of the first query in each group. This is not necessary
given that boosts for each term are already applied another way.
Versions can be tricky with linux vendors and such. To help debug any possible issues, we should output a better version.
Today:
```
[elasticsearch] java.lang.RuntimeException: Java version: 1.7.0_55 suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
[elasticsearch] Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
[elasticsearch] If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
[elasticsearch] Upgrading is preferred, this workaround will result in degraded performance.
[elasticsearch] at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
[elasticsearch] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
[elasticsearch] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
With patch:
```
java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_40 [Java HotSpot(TM) 64-Bit Server VM 24.0-b56] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes use of it correctly;
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster as we'll now be able to pass needsScores=false to Query.createWeight.
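In outline (a sketch; the actual interface details may differ):
```java
// Scripts advertise whether they read the score. Only the expression
// engine can answer this precisely by analyzing the expression; other
// engines conservatively return true.
interface ScriptSketch {
    double run(float score);

    default boolean needsScores() {
        return true; // safe default when the engine cannot analyze the script
    }
}
```
The answer is then threaded through to Lucene as the needsScores argument of Query.createWeight, letting it skip scoring work entirely.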
Adding a named query that is null can lead to a NullPointerException
when copying the named queries. This is due to an implementation detail
in QueryParseContext.copyNamedQueries. In particular, this method uses
com.google.common.collect.ImmutableMap.copyOf. A documented requirement
of ImmutableMap is that none of the entries have a null key nor null
value. Therefore, we should not add such queries to the namedQueries
map. This will not change any behavior since Map.get returns null if no
entry with the given key exists anyway.
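The guard itself is tiny (a sketch):
```java
import java.util.Map;

class NamedQueries {
    // ImmutableMap.copyOf rejects null values, so never record a null
    // query; Map#get returns null for absent keys anyway, so lookups
    // behave exactly as before.
    static void addNamedQuery(Map<String, Object> namedQueries, String name, Object query) {
        if (query != null) {
            namedQueries.put(name, query);
        }
    }
}
```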
Closes #12683
This change improves the `function_score` query to not compute scores at all
when they are not needed, and to not compute scores on the underlying query
when the combine function is to replace the score with the scores of the
functions.
This commit makes it possible to serialize arbitrary objects by having them extend Writeable. When reading them though, we need to be able to identify which object we have to create, based on its name. This is useful for queries once we move to parsing on the coordinating node, as well as with aggregations and so on.
Introduced a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through writeNamedWriteable and readNamedWriteable methods. A new NamedWriteableRegistry is introduced also where named writeable prototypes need to be registered so that we are able to retrieve the proper instance of the writeable given its name and then de-serialize it calling readFrom against it.
Closes #12393
In order to avoid extra reroutes, `RoutingService` should avoid
scheduling a reroute of any shards where the delay is negative. To make
sure that we don't encounter a race condition between the
GatewayAllocator thinking a shard is delayed and RoutingService thinking
it is not, the GatewayAllocator will update the RoutingService with the
last time it checked in order to use a consistent "view" of the delay.
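The scheduling rule reduces to (a sketch):
```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class DelayedReroute {
    // Only schedule when there is time left on the clock; a non-positive
    // delay means the shard is already eligible, so a delayed reroute
    // would be redundant work.
    static void maybeSchedule(ScheduledExecutorService scheduler,
                              Runnable reroute, long delayMillis) {
        if (delayMillis > 0) {
            scheduler.schedule(reroute, delayMillis, TimeUnit.MILLISECONDS);
        }
    }
}
```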
Resolves #12456
Relates to #12515 and #12456
When postFilter generates a token that is identical to the input term
DirectCandidateGenerator should not preFilter this token. If postFilter
and preFilter are the same analyzer instance it would fail with:
"TokenStream contract violation: close() call missing"
This is a forward port of @nomoa's #12670
Today we try to detect if there is an existing index in the directory
and if not we create one. This can be tricky and error prone since we
rely on the filesystem without taking the context into account when the
engine gets created. We know in all situations whether the index should be created,
so we can just use this information and rely on the lucene index writer to barf
if we hit a situation where we can't append to an index while we should.
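With Lucene this is a matter of passing the intent down instead of probing the filesystem (a sketch using Lucene's IndexWriterConfig, not the actual engine code):
```java
import org.apache.lucene.index.IndexWriterConfig;

class EngineConfigSketch {
    // The caller always knows whether this engine should create a brand
    // new index; APPEND then fails loudly if the index is missing,
    // rather than us guessing from the directory contents.
    static IndexWriterConfig withOpenMode(IndexWriterConfig config, boolean create) {
        config.setOpenMode(create
            ? IndexWriterConfig.OpenMode.CREATE
            : IndexWriterConfig.OpenMode.APPEND);
        return config;
    }
}
```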
Today we fail to throw / rethrow a recovery exception if it happens during
the finalization of phase 1 when the source files are not affected. Even worse,
this can cause some data loss if the reason for this exception is a failure to
delete a corruption marker or a similar pre-existing corruption, since we continue
with the recovery and mark the target shard as started, which will in turn open
an engine with an empty index.
This makes the output of EsThreadPoolExecutor#toString less pretty but
we no longer have funky hacks that rely on the specific format of the
toString produced by ThreadPoolExecutor, which isn't part of its API and
could change with any JVM version and break the output.
Improving the toString allows for nicer error reporting. Also cleaned up
the way that EsRejectedExecutionException notices that it was rejected
from a shutdown thread pool. I left javadocs about how it's not 100% correct
but good enough for most uses.
The improved toString on EsThreadPoolExecutor means every one of them needs
a name. In most cases the name to use is obvious. In tests I use the name
of the test method and in real thread pools I use the name of the thread
pool. In non-ThreadPool executors I use the thread's name.
Closes #9732
In order to support the URL notation including a user/pass combination
(like http://user:pass@host/plugin.zip) the auth info needs to be added
manually.
Today this is "unofficial" as conf/scripts, but some people
want to share scripts across different nodes and so on. Because
they cannot configure it, they are forced to use dirty hacks
like symbolic links, which isn't going to work: we aren't going
to recursively scan conf/ and add permissions to all link targets
underneath it, that's crazy.
I really hate adding yet another configuration knob here, but
users resorting to using symlinks are going to be frustrated,
and do things in a more insecure way.
The URL to download the main elasticsearch plugins did not match
what the S3 wagon is supposed to write to.
In addition this PR adds support for snapshots to access the
snapshot S3 bucket, so we can possibly download snapshot versions
of plugins.
The format of the URLs stems from #12270.
Closes #12632
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, as well as preventing such indices from being restored from a snapshot into an already running cluster.
Closes #11857
Currently when we delete files belonging to deleted snapshots, we issue one delete command to the underlying snapshot store at a time. Some repositories can benefit from bulk deletes of multiple files.
Closes #12533
This method syncs the translog unless it's already synced. If the engine
is already closed we are guaranteed to be synced already, so we can just
ignore this exception.
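A sketch of the tolerated race (assuming Lucene's AlreadyClosedException, which the engine throws once closed; the Syncable stand-in replaces the real Translog):
```java
import java.io.IOException;
import org.apache.lucene.store.AlreadyClosedException;

class TranslogSync {
    interface Syncable {
        void sync() throws IOException;
    }

    // If close() won the race, closing already synced the translog, so
    // the sync this caller asked for has effectively happened.
    static void syncIgnoringClosed(Syncable translog) throws IOException {
        try {
            translog.sync();
        } catch (AlreadyClosedException e) {
            // already closed implies already synced; nothing to do
        }
    }
}
```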
Closes #12603
This change was added recently; it uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimezone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
The Streams.copyTo(String|Bytes)FromClasspath() methods resolve resources using the org.elasticsearch.io.Streams classloader. This is fine in elasticsearch core and when running tests, but if used in a plugin this can lead to FileNotFoundExceptions at runtime because plugins are loaded in a dedicated classloader.
Most of the abstract base test classes we have were previously @Ignored.
However, there were also some other tests ignored. Having two ways to
quiet tests is confusing, and clearly it has caused some tests
to get lost in the fold.
This change moves all base test classes to use the "TestCase" suffix,
which is not picked up by the test class name pattern. It also removes
@Ignore from (almost) all tests, and adds it to forbidden apis.
And since we were renaming, I shortened the base test class names to use
"ES" instead of "Elasticsearch". I type this a lot of times a day,
and I have heard others express a similar desire for a shorter name.
closes #10659
Now that integ tests are moved into `mvn verify`, we don't really have
a need for @Slow, and especially not @Integration. This removes
uses of the former, and completely removes uses of the latter.
Tasks can be registered with a timeout, which runs as a task in a separate
threadpool. The idea is that the timeout runner cancels the main task when
the time is out, and the timeout runner is cancelled when the main task
starts executing. However, the following statement:
```java
timeoutFuture = timer.schedule(new Runnable() {
@Override
public void run() {
if (remove(TieBreakingPrioritizedRunnable.this)) {
runAndClean(timeoutCallback);
}
}
}, timeValue.nanos(), TimeUnit.NANOSECONDS);
```
is not atomic: the removal task is first started, and then the (volatile)
variable is assigned. As a consequence, there is a short window that allows
a timeout task to wait until the time is out even if the task is already
completed.
See http://build-us-00.elastic.co/job/es_core_17_centos/496/ for an example of
such a failure.
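One hedged sketch of closing the window (not necessarily the committed fix): have the main task publish that it started, and re-check after the future is assigned, so neither side can miss the other:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

class TimeoutCoordination {
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean mainStarted = new AtomicBoolean();
    private volatile ScheduledFuture<?> timeoutFuture;

    void scheduleTimeout(Runnable onTimeout, long nanos) {
        timeoutFuture = timer.schedule(onTimeout, nanos, TimeUnit.NANOSECONDS);
        // re-check after publishing: if the main task began before the
        // assignment above, it could not cancel the future, so do it here
        if (mainStarted.get()) {
            timeoutFuture.cancel(false);
        }
    }

    void onMainTaskStart() {
        mainStarted.set(true);
        ScheduledFuture<?> future = timeoutFuture;
        if (future != null) {
            future.cancel(false); // normal path: cancel the pending timeout
        }
    }
}
```
Because both the flag and the future are published with volatile semantics, at least one of the two cancel attempts always observes the other side's write.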
Previously only the first aggregation in a buckets_path was checked to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path.
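A sketch of the full walk (stand-in types, not the actual pipeline aggregation classes):
```java
import java.util.List;
import java.util.Map;

class BucketsPathValidator {
    // Minimal stand-in for an aggregation definition.
    interface AggDef {
        Map<String, AggDef> subAggregations();
    }

    // Resolve every element of the path, descending one level of
    // sub-aggregations per element, instead of checking only the first.
    static void validate(Map<String, AggDef> rootAggs, List<String> path) {
        Map<String, AggDef> current = rootAggs;
        for (String element : path) {
            AggDef agg = current.get(element);
            if (agg == null) {
                throw new IllegalArgumentException(
                    "No aggregation found for path element [" + element + "]");
            }
            current = agg.subAggregations();
        }
    }
}
```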
Closes #12360
It's not going to work: it's blocked by security policy
and will just add a confusing SecurityException to the mix, and
bogusly give an exit status of 0 when in fact something bad happened.
Finally, if ES can't start up, it is a serious problem; there is
no sense in hiding the reason why: deliver the full stack trace.
The repository verification process should create a subdirectory to make sure we check permissions of newly created directories, in case elasticsearch processes on different nodes are running using different uids and creating blobs with incompatible permissions.
Closes #11611
Previously we issued a reroute when a node went over the high watermark
in order to move shards away from the node. This change tracks nodes
that have previously been over the high or low watermarks and issues a
reroute when the node goes back underneath the watermark.
This allows shards that may be unassigned to be assigned back to a node
that was previously over the low watermark but no longer is.
Resolves #12422