It might turn out to be useful to have the actual commit hash of the version we are
looking for if our download manager can just redirect to the right staging repository.
This is purely for maintenance reasons, since it is easier to see whether we can drop
certain staging URLs if we have the version next to the hash.
I also removed the gpg passphrase from the example URL, since it's better to be prompted for it.
This is caused by sending the same file to the chunk handler with offset
`0`, which in turn opens a new output stream and waits for bytes. But the next round
will send 0 bytes again with offset 0. This commit adds checks / validators that those
settings are positive byte values, and fixes the RecoveryStatus to throw an IAE if the same file
is opened twice.
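A minimal sketch of the kind of guard added, with illustrative names (the real RecoveryStatus code differs):
```java
// Illustrative only: reject negative offsets/lengths before handing a chunk on.
class ChunkValidation {
    static void validate(long position, long length) {
        if (position < 0 || length < 0) {
            throw new IllegalArgumentException(
                "position [" + position + "] and length [" + length + "] must be positive byte values");
        }
    }
}
```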
What is the problem we are trying to solve?
===========================================
When we are doing aggregations against a field name as shown in
https://github.com/HarishAtGitHub/elasticsearch-tester/blob/master/12135.py#L37-L46
search = {
    "aggs": {
        "NAME": {
            "terms": {
                "field": "ip_str",
                "size": 10
            }
        }
    }
}
and when the field "ip_str" has values of different types in different indices,
say one is of StringTerms type and the other is of IP (LongTerms) type, then
the aggregation fails as the types do not match (they are incompatible).
The failure throws a class cast exception as follows:
{
  "error": {
    "root_cause": [],
    "type": "reduce_search_phase_exception",
    "reason": "[reduce] ",
    "phase": "query",
    "grouped": true,
    "failed_shards": [],
    "caused_by": {
      "type": "class_cast_exception",
      "reason": "org.elasticsearch.search.aggregations.bucket.terms.LongTerms$Bucket cannot be cast to org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket"
    }
  },
  "status": 503
}
which is hard to understand. The user cannot infer anything about the cause of the problem, or what to do about it,
from this class cast exception.
What can be the possible solution?
===================================
Make the exception more readable by showing the user the root cause of the problem, so that they can
understand which area actually caused the problem and take the necessary steps.
Code Analysis
=============
Debugging code shows that:
the query /{indices}/_search?search_type=count involves two phases
1) search phase
***************
searchService.sendExecuteQuery(...) [Ref: TransportSearchCountAction]
What happens here?
Phase 1, the search phase, completes without error.
In this phase the shards for the given indices are collected, the search is executed on all of them asynchronously,
and the results are finally collected in the variable "firstResults" and handed to the merge phase.
[Flow: .... -> TransportSearchTypeAction -> method performFirstPhase]
2) merge phase
**************
searchPhaseController.merge(...firstResults...) [Ref: TransportSearchCountAction]
What happens here?
The "firstResults" QuerySearchResults are now to be aggregated and combined.
[Flow: SearchPhaseController.merge(...) -> ..... -> InternalTerms.doReduce(...)]
The problem arises in phase 2, the merge phase.
Now the individual term buckets are available.
As per the test case, there are two indices, cast and cast2, so by default 10 shards.
cast has ip_str of type StringTerms
cast2 has ip_str of type ip, which is actually LongTerms
so here two types of buckets exist: StringTerms$Bucket and LongTerms$Bucket.
Now the aggregated buckets are finally to be put inside the BucketPriorityQueue (size 2: as out of 10 shards, 2 have hits).
(docs of PriorityQueue: https://lucene.apache.org/core/4_4_0/core/org/apache/lucene/util/PriorityQueue.html#insertWithOverflow(T))
First the LongTerms$Bucket is put inside,
then the StringTerms$Bucket is to be put in.
This is the area where the exception is thrown. When adding the StringTerms$Bucket, it has to
go through the code "lessThan(element, heap[1])"
which finally calls
-----------------------------------------------------------------
| StringTerms$Bucket.compareTerms(other) <-- area of exception  |
-----------------------------------------------------------------
where, when comparing one to the other, a type cast is done, and it fails because StringTerms$Bucket and LongTerms$Bucket are
incompatible.
Approach to solve:
==================
The best way is to make the user understand that the problem occurs when reducing/merging/aggregating the buckets which came as a result of
querying different shards, so that they can infer that the problem is caused by the values of the field being of different types.
Such a message is user friendly and much better than the indecipherable ClassCastException.
The only place to correctly infer that the aggregation has failed is where the aggregation takes place:
at InternalTerms.java -> (BucketPriorityQueue)ordered.insertWithOverflow(b);
so here I can throw an AggregationExecutionException saying it is because the buckets are of different
types.
But when can I infer at this point that the failure is due to a mismatch of bucket types?
It is possible only if this point is informed that the problem which occurred deep inside
is due to buckets that were incomparable.
From just a ClassCastException we cannot make such a pointed, exact inference, because
a ClassCastException can arise from a number of scenarios and in a number of places.
So unless we signal the exact problem to InternalTerms, it will not be able to infer properly.
Therefore, translate the ClassCastException at the compareTerms function itself into an
IncomparableTermBucktesTypeException. This is the best place to make the inference, as it is
the place that generated the exception; the best inference of exceptions can be done only at the source/origin of the exception.
Passing IncomparableTermBucktesTypeException to InternalTerms will let it conclude why the
aggregation failed and give the best information to the user.
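A self-contained model of the proposed inference (not the actual Elasticsearch classes): translate the ClassCastException at its origin into an exception that names the real problem.
```java
import java.util.Comparator;

// Toy comparator standing in for compareTerms: the cast fails when buckets hold
// different term types (e.g. Long vs. String), and we re-throw with a clear cause.
class TermComparator implements Comparator<Object> {
    @Override
    @SuppressWarnings("unchecked")
    public int compare(Object a, Object b) {
        try {
            return ((Comparable<Object>) a).compareTo(b);
        } catch (ClassCastException e) {
            throw new IllegalStateException("cannot compare terms of type ["
                + a.getClass().getSimpleName() + "] and [" + b.getClass().getSimpleName()
                + "]: the aggregated field is probably mapped to different types in different indices", e);
        }
    }
}
```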
Closes#12821
The IndicesModule was made up of two submodules, one which
handled registering queries, and the other for registering
hunspell dictionaries. This change moves those into
IndicesModule. It also adds a new extension point type,
InstanceMap. This is simply a Map<K,V>, where K and V are
actual objects, not classes like most other extension points.
I also added a test method to help testing instance map extensions.
This was particularly painful because of how guice binds the key
and value as separate bindings, and then reconstitutes them
into a Map at injection time. In order to gain access to the
object which links the key and value, I had to tweak our
guice copy to not use an anonymous inner class for the Provider.
Note that I also renamed the existing extension point types, since
they were very redundant. For example, ExtensionPoint.MapExtensionPoint
is now ExtensionPoint.ClassMap.
See #12783.
Removed the shard variable dependency from processFirstPhaseResults, as the shard is no longer
needed here. It only deals with the results obtained from the search on each shard.
The ClusterModule contained a couple of submodules. This moves the
functionality from those modules into ClusterModule. Two of those
had to do with DynamicSettings. This change also cleans up
how DynamicSettings are built, and enforces that they are added, with
validators, in ClusterModule.
See #12783.
Before:
PluginInfo{name='cloud-aws', description='The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.', site=false, jvm=true, classname=org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin, isolated=true, version='2.1.0-SNAPSHOT'}
After:
- Plugin information:
Name: cloud-aws
Description: The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.
Site: false
Version: 2.1.0-SNAPSHOT
JVM: true
* Classname: org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin
* Isolated: true
Previously settings specified in index templates were not validated upon
template creation. Creating an index from an index template with invalid
settings could lead to cluster stability issues because creation of such
indexes would bypass index settings validation.
This commit adds validation of settings specified in index templates at
template creation time. This works by routing the index template
settings through the index settings validation mechanism.
Closes#12865
Users with IPv6 preferred over IPv4 may have `localhost` resolve to
`::1` instead of `127.0.0.1`, so we should be explicit so they don't run
into issues.
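A quick way to check what `localhost` resolves to on a given machine (plain JDK, nothing assumed beyond that):
```java
import java.net.InetAddress;

public class LocalhostResolution {
    public static void main(String[] args) throws Exception {
        // On hosts preferring IPv6 this may print ::1 before (or instead of) 127.0.0.1
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```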
Today on a failure the reproduce line printed out by the test framework
will build all projects and might fail if the test class is not present.
This commit adds a reactor filter to the reproduction line to ensure
unrelated projects are skipped.
Closes#12838
In order to match the paths of official plugins, we need to fix
the broken test by removing the elasticsearch prefix from the official
plugin names before testing.
This PR adds a timezone field to ValueParser.DateMath that is
set to UTC by default but can be set using the existing constructors.
This makes it possible for extended bounds setting in DateHistogram
to also use date math expressions that e.g. round by day and apply
this rounding in the time zone specified in the date histogram
aggregation request.
Closes#12278
In order to create releases without actually changing the version
as part of a commit, we also need to reflect the path of the potentially
changing S3 repo.
This method has multiple modes of resolving config files by
first looking in the config directory, then on the classpath,
and finally by prefixing with "config/" on the classpath.
Most of the places taking advantage of this were tests, so they
did not have to set up a real home dir with config. The only place
that was really relying on it was the code which loads names.txt
to randomly choose a node name.
This change fixes tests to set up fake home dirs with their config
files. It also makes the logic for finding names.txt explicit:
look in config dir, and if it doesn't exist, load /config/names.txt
from the classpath.
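A sketch of the now-explicit lookup, with illustrative names:
```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: look in the config dir first, fall back to the classpath.
class NamesFileLoader {
    static InputStream openNamesFile(Path configDir) throws IOException {
        Path file = configDir.resolve("names.txt");
        if (Files.exists(file)) {
            return Files.newInputStream(file);
        }
        return NamesFileLoader.class.getResourceAsStream("/config/names.txt");
    }
}
```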
TermsQueryParser still parses those values although they are deprecated. These need to be present in the java api as well to get ready for the query refactoring, where the builders are the intermediate query format that we parse our json queries into. Whatever the parser supports needs to be supported by the builder as well.
Closes#12870
TermsQueryParser doesn't support the cache field anymore, so if it gets set through the java api, the subsequent parsing of that query will throw an error
Relates to #12870
Refactored a part out of the release script, so the user can
change the version locally as well as move the documentation
and change the Version.java
The background of this change is to have a very simple release
process that puts stuff into a staging environment, so the beta
release can be tested, before it is officially released.
This means the build_release script can be removed soon.
Settings currently has a classloader member, which any user (plugin
or core ES code) can access to load classes/resources. This is extremely
error prone as setting the classloader on the Settings instance is a
public method. Furthermore, it is not really necessary. Classes that
need resources should load resources using normal means
(getClass().getResourceAsStream). Those that need classes
should use Class.forName, which will load the class with the
same classloader as the calling class. This means, in the few
places where classes are loaded by string name, they will use
the appropriate loader: either the default classloader which loads
core ES code, or a child classloader for each plugin.
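The two idioms the commit points to, sketched on a hypothetical component class:
```java
// Hypothetical example class; the point is which classloader each idiom uses.
class ComponentExample {
    Object loadByName(String className) throws Exception {
        // resources resolve via the calling class's own classloader
        java.io.InputStream resource = getClass().getResourceAsStream("some-resource.txt");
        if (resource != null) {
            resource.close();
        }
        // Class.forName uses the caller's classloader, so a plugin's classes
        // are looked up in that plugin's isolated loader
        return Class.forName(className).getConstructor().newInstance();
    }
}
```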
This change removes the classloader member from Settings, as
well as other classloader related uses (except for a handful
of cases which must use a classloader, at least for now).
We previously used something like Class.forName to load mock classes,
where tests would set a setting that was *supposed* to only be used by
tests. This change makes these impls package private so that only tests
can change out these implementations, through test plugins.
closes#12784
No need to cache this query since it's cheap and not reused.
If we cache it, it can cause assertions to be tripped since this
method is executed during the postRecovery phase and might still run while
nodes are shut down in tests.
Leftover after adding the query builder type to QueryParser. getBuilderPrototype is better typed now and doesn't require an unchecked cast anymore. Also fixed some Tuple usage without types.
This commit tries to add some infrastructure to streamline how extension
points should be structured. It's a simple approach with 4 implementations
for `highlighter`, `suggester`, `allocation_decider` and `shards_allocator`.
It simplifies adding new extension points and forces registering classes instead
of strings.
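A toy version of a class-registering extension point (illustrative, not the actual API): registration is typed, so mistakes surface at registration time rather than as string lookups deep inside guice.
```java
import java.util.HashMap;
import java.util.Map;

class ClassMapExtensionPoint<T> {
    private final Map<String, Class<? extends T>> extensions = new HashMap<>();

    void registerExtension(String name, Class<? extends T> implementation) {
        if (extensions.putIfAbsent(name, implementation) != null) {
            throw new IllegalArgumentException("extension [" + name + "] is already registered");
        }
    }
}
```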
When parsing the inner array of filters, AndQueryParser seems to
check for the correct "filters" field name but then does the same
kind of operation in the `else` branch of the statement. This
seems like it can be removed.
Build fails with maven 3.3.1 and 3.3.3. To reproduce, install one of the 3.3.x versions of maven and run `mvn clean verify` in the root directory of the project. The build will fail in the QA: Smoke Test Shaded Jar module with the following error:
```
Started J0 PID(99979@flea.local).
Suite: org.elasticsearch.shaded.test.ShadedIT
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testJodaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.06s | ShadedIT.testJodaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:3A9404F1F69FD80]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testGuavaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testGuavaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:C2502FD54D83433D]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testjsr166eIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testjsr166eIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:35593286F4269392]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /Users/Shared/Jenkins/Home/workspace/elasticsearch-master/qa/smoke-test-shaded/target/J0/temp/org.elasticsearch.shaded.test.ShadedIT_2F4D23A7462CF921-001
2> NOTE: test params are: codec=CheapBastard, sim=DefaultSimilarity, locale=, timezone=Asia/Baku
2> NOTE: Mac OS X 10.10.4 x86_64/Oracle Corporation 1.8.0_25 (64-bit)/cpus=8,threads=1,free=482137936,total=514850816
2> NOTE: All tests run in this JVM: [ShadedIT]
Completed [1/1] in 6.61s, 5 tests, 3 failures <<< FAILURES!
Tests with failures:
- org.elasticsearch.shaded.test.ShadedIT.testJodaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testGuavaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testjsr166eIsNotOnTheCP
```
Please note that the build doesn't fail with maven 3.2.x, and it doesn't fail if the mvn command is executed inside the qa/smoke-test-shaded directory. Only when the build is started from the root directory can the error above be observed.
The reason is the shaded version, which depends on elasticsearch core.
When Maven builds only that module, elasticsearch core is not added to the dependency tree.
```sh
mvn dependency:tree -pl :smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO]    \- org.apache.ant:ant:jar:1.8.2:test
```
But if shaded plugin is involved during the build, it modifies the `projectArtifactMap`:
```sh
mvn dependency:tree -pl org.elasticsearch.distribution.shaded:elasticsearch,:smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | \- org.elasticsearch:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] | +- com.google.guava:guava:jar:18.0:compile
[INFO] | +- com.carrotsearch:hppc:jar:0.7.1:compile
[INFO] | +- joda-time:joda-time:jar:2.8:compile
[INFO] | +- org.joda:joda-convert:jar:1.2:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.5.3:compile
[INFO] | | \- org.yaml:snakeyaml:jar:1.12:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.5.3:compile
[INFO] | +- io.netty:netty:jar:3.10.3.Final:compile
[INFO] | +- com.ning:compress-lzf:jar:1.0.2:compile
[INFO] | +- com.tdunning:t-digest:jar:3.0:compile
[INFO] | +- org.hdrhistogram:HdrHistogram:jar:2.1.6:compile
[INFO] | +- org.apache.commons:commons-lang3:jar:3.3.2:compile
[INFO] | +- commons-cli:commons-cli:jar:1.3.1:compile
[INFO] | \- com.twitter:jsr166e:jar:1.1.0:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO]    \- org.apache.ant:ant:jar:1.8.2:test
```
A fix could consist of fixing something on the Maven side. Probably something changed in a recent version and introduced this "issue", though it might not really be an issue but rather a fix.
There are two workarounds:
1) manually exclude elasticsearch core from the shaded version in the smoke-test-shaded module and manually add each lucene lib needed by elasticsearch
2) add a new `elasticsearch-lucene` (lucene) POM module which simply declares all needed lucene libs in subprojects (such as the smoke tester one)
I chose the latter.
Closes#12791.
This allows `path.shared_data` to be added to the security manager while
still allowing a custom `data_path` for indices using shadow replicas.
For example, configuring `path.shared_data: /tmp/foo` and then creating an
index with:
```
POST /myindex
{
"index": {
"number_of_shards": 1,
"number_of_replicas": 1,
"data_path": "/tmp/foo/bar/baz",
"shadow_replicas": true
}
}
```
The index will then reside in `/tmp/foo/bar/baz`.
`path.shared_data` defaults to `null` if not specified.
Resolves#12714
Relates to #11065
The upper bound must be 0-based since we are corrupting an offset into
the file; otherwise it can be the 1-based length of the file, which results in an
uncorrupted file.
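In other words (a hedged sketch, not the test's actual helper):
```java
import java.util.concurrent.ThreadLocalRandom;

class CorruptionOffsets {
    // Valid 0-based offsets into a file of length L are [0, L-1]; an upper bound
    // of L (the 1-based length) can select an offset past the last byte,
    // leaving the file uncorrupted.
    static long pickOffset(long fileLength) {
        return ThreadLocalRandom.current().nextLong(fileLength); // uniform in [0, fileLength)
    }
}
```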
There were two submodules of AllocationModule. This combines them into a
single module, adds a base test case for module testing, and adds back
the ability for plugins to provide custom ShardsAllocators.
closes#12781
Instead of logging the entire `_source` in the indexing slowlog we log by
default just the first 1000 characters - this is controlled by the
`index.indexing.slowlog.source` settings and can be set to `true` to log the
whole `_source`, `false` to log none of it, and a number to log at most that
many characters.
Closes#4485
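A minimal sketch of the truncation rule described above (names illustrative):
```java
class SlowLogSourceFormatter {
    // maxChars defaults to 1000; "true" maps to the whole source, "false" to none.
    static String format(String source, int maxChars) {
        return source.length() <= maxChars ? source : source.substring(0, maxChars);
    }
}
```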
In the query refactoring branch we've been introducing getter methods for every bit that you can set on each query. The naming is not very consistent at the moment. The applied naming conventions are the following (illustrated in the sketch after this list):
- `innerQuery()` for any inner query, when there's only one of them
- when there's more than one inner query, use a prefix that identifies which query it is, and the `query` suffix (e.g. `positiveQuery` or `littleQuery`)
- `fieldName()` for the name of the field to be queried
- `value()` for the actual query
These changes don't break bw comp given that these getters were all introduced with the query refactoring which hasn't been released yet. Also we are modifying getters that don't have a corresponding setter, as the fields are final, hence we are not breaking consistency between getter and setter.
Closes#12800
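What the convention looks like on a hypothetical builder:
```java
// Hypothetical builder, for illustration of the getter naming only.
class ExampleQueryBuilder {
    private final String fieldName;
    private final Object value;

    ExampleQueryBuilder(String fieldName, Object value) {
        this.fieldName = fieldName;
        this.value = value;
    }

    String fieldName() { return fieldName; } // name of the field to be queried
    Object value() { return value; }         // the actual query value
}
```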
Removed attempt of parsing of `field` rather than `fields` and attempted support of the following syntax:
```
{
"simple_query_string": {
"body" : {
"query": "foo bar"
}
}
}
```
Both of these syntaxes were undocumented, untested and not working.
Added a test for the case when `fields` is not specified; the default field is then queried.
Closes#12794
Closes#12798
This commit includes the stacktrace in the structured exception rendering
to ensure we can find the reason / cause for certain things quicker. This
is enabled by default and is very verbose. Users can disable it via `rest.exception.stacktrace.skip = true|false`
Closes#12239
We have a way to allow a plugin to specify additional settings. These
settings should only be applied if they are not already present in the
node settings.
The equals logic of ShardRouting doesn't take version and unassignedInfo into account when comparing shard routings. Since the cluster state diff relies on equals to detect the changes that need to be sent to the rest of the cluster, this omission might lead to changes not being properly propagated to other nodes in the cluster.
Closes#12387
In order to test the way plugins with configuration are installed and removed
we need a plugin with configuration in the repository. The simplest way to
get one is to make an "example" plugin.
In the process of making this example it became apparent that cat actions were
difficult to create outside of the org.elasticsearch.rest.action.cat package
because key methods in AbstractCatAction were package private. This makes them
protected and uses them to create the example configured plugin.
Relates to #12717 but is only one step of many to close it.
the default classloader. It had all kinds of leniency in how the
classname was found, and simply cannot work with plugins having isolated
classloaders.
This change removes that method. Some of the uses of it were for custom
extension points, like custom repository or discovery types. A lot were
just there to plug in mock implementations for tests. For the settings
that were legitimate, all now support plugins adding the given setting
via onModule. For those that were specific to tests for mocks, they now
use Classes.loadClass (a helper around Class.forName). This is a
temporary measure until (in a future PR) tests can change the
implementation via package private statics.
I also removed a number of unnecessary intermediate modules, added a
"jvm-example" plugin that can be filled in in the future as a smoke test
for breaking plugins, and added some documentation for the "spawn" modules
interface.
closes#12643
closes#12656
This was a straight up bug found in #12753. If only one type existed,
the compatibility check for a new type was not strict, so changes to
an updateable setting like search_analyzer got through (but only
partially). This change fixes the check and adds tests (which were
previously a TODO).
This also fixes a bug in dynamic field creation which wouldn't copy
fielddata settings when duplicating a pre-existing field with the
same name.
closes#12753
This PR prevents setting the timezone on ValueFormatter.DateTime. Instead
the timezone information needed when printing the buckets' key-as-string
information is provided at construction time of the ValueFormatter, making
sure we don't overwrite any constants. This, however, made it necessary to
be able to access the timezone information when resolving the format
in ValueSourceParser, so the `time_zone` parameter is now parsed alongside
the `format` parameter in ValueSourceParser rather than in DateHistogramParser.
Closes#12531
Today we accept store types like `nio_fs`, `nioFs`, `niofs` etc.
This commit removes the leniency and only accepts plain values without underscores.
Yet, this commit also has a BWC layer that upgrades existing indices to the new version.
Affected enums are:
* `nio_fs`
* `mmap_fs`
* `simple_fs`
If a URL specified with --url on the command line cannot be reached,
the plugin manager tries URLs at download.elastic.co automatically.
This can lead to downloading the wrong plugins, and also potentially exposes
the name of an internal plugin to the download service.
This fix ensures that the plugin manager simply aborts if the specified
URL cannot be downloaded.
Currently there is a flag in the QueryParseContext that keeps track
of whether an inner query sits inside a filter and should therefore
produce an unscored lucene query. This is done in the
parseInnerFilter...() methods that are called in the fromXContent()
methods or the parse() methods we haven't refactored yet. This needs
to move to the toQuery() method in the refactored builders, since the
query builders themselves have no information about the parent query
they might be nested in.
This PR moves the isFilter flag from the QueryParseContext to the
recently introduced QueryShardContext. The parseInnerFilter... methods
need to stay in the QueryParseContext for now, but they already delegate
to the flag that is kept in QueryShardContext. For refactored queries
(like BoolQueryBuilder), references to isFilter() are moved from the
parser to the corresponding builder. Builders where the inner query
was previously parsed using parseInnerFilter...() now use a newly
introduced toFilter(shardContext) method that produces the nested lucene
query with the filter context flag switched on.
Closes#12731
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes#12752
Closes#12754
This commit adds basic support to track the number of times scripts are
compiled and compiled scripts are evicted from the script cache. These
statistics are tracked at the node level.
Closes#12673
ClusterState has 3 different methods to access RoutingNodes:
* #routingNodes() - mutable version
* #getRoutingNodes() - delegates to #getReadOnlyRoutingNodes()
* #getReadOnlyRoutingNodes() - it's docs say `NOTE, the routing nodes are mutable, use them just for read operations`
The latter also reuses the instance that it creates. This has several problems besides the obvious:
* creating RoutingNodes is costly and should be done only if really needed, i.e. use the cached version as much as possible
* the common case is read-only, but all kinds of things are called on it
* the mutable version is only needed in one place and should only be used in the AllocationService
* RoutingNodes can freeze its ShardRoutings but doesn't
* RoutingNodes should check whether it's read-only or not
This commit fixes all these problems and special-cases the mutable case, such that all access via ClusterState#getRoutingNodes()
is read-only and RoutingNodes enforces this.
This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
The `multi_match` query groups terms that have the same analyzer together and
then applies the boost of the first query in each group. This is not necessary
given that boosts for each term are already applied another way.
Versions can be tricky with linux vendors and such. To help debug any possible issues, we should output a better version.
Today:
```
[elasticsearch] java.lang.RuntimeException: Java version: 1.7.0_55 suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
[elasticsearch] Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
[elasticsearch] If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
[elasticsearch] Upgrading is preferred, this workaround will result in degraded performance.
[elasticsearch] at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
[elasticsearch] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
[elasticsearch] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
With patch:
```
java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_40 [Java HotSpot(TM) 64-Bit Server VM 24.0-b56] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes use of it correctly,
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster as we'll now be able to pass needsScores=false to Query.createWeight.
Adding a named query that is null can lead to a NullPointerException
when copying the named queries. This is due to an implementation detail
in QueryParseContent.copyNamedQueries. In particular, this method uses
com.google.common.collect.ImmutableMap.copyOf. A documented requirement
of ImmutableMap is that none of the entries have a null key nor null
value. Therefore, we should not add such queries to the namedQueries
map. This will not change any behavior since Map.get returns null if no
entry with the given key exists anyway.
Closes#12683
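The constraint is easy to reproduce with Guava directly, which is what makes guarding against null necessary here:
```java
import com.google.common.collect.ImmutableMap;
import java.util.HashMap;
import java.util.Map;

class CopyOfNullValueDemo {
    public static void main(String[] args) {
        Map<String, Object> namedQueries = new HashMap<>();
        namedQueries.put("my_query", null);   // a named query with a null value
        ImmutableMap.copyOf(namedQueries);    // throws NullPointerException
    }
}
```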
This change improves the `function_score` query to not compute scores at all
when they are not needed, and to not compute scores on the underlying query
when the combine function is to replace the score with the scores of the
functions.
This commit makes it possible to serialize arbitrary objects by having them extend Writeable. When reading them though, we need to be able to identify which object we have to create, based on its name. This is useful for queries once we move to parsing on the coordinating node, as well as with aggregations and so on.
Introduced a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through writeNamedWriteable and readNamedWriteable methods. A new NamedWriteableRegistry is introduced also where named writeable prototypes need to be registered so that we are able to retrieve the proper instance of the writeable given its name and then de-serialize it calling readFrom against it.
Closes#12393
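A toy model of the mechanism (not the actual StreamInput/StreamOutput API): the name travels with the payload, and a registry maps names back to readers.
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class NamedReaderRegistry {
    private final Map<String, Function<byte[], Object>> readers = new HashMap<>();

    void register(String name, Function<byte[], Object> reader) {
        if (readers.putIfAbsent(name, reader) != null) {
            throw new IllegalArgumentException("reader [" + name + "] already registered");
        }
    }

    Object read(String name, byte[] payload) {
        Function<byte[], Object> reader = readers.get(name);
        if (reader == null) {
            throw new IllegalArgumentException("unknown named writeable [" + name + "]");
        }
        return reader.apply(payload);
    }
}
```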
In order to avoid extra reroutes, `RoutingService` should avoid
scheduling a reroute of any shards where the delay is negative. To make
sure that we don't encounter a race condition between the
GatewayAllocator thinking a shard is delayed and RoutingService thinking
it is not, the GatewayAllocator will update the RoutingService with the
last time it checked in order to use a consistent "view" of the delay.
Resolves#12456
Relates to #12515 and #12456
When postFilter generates a token that is identical to the input term,
DirectCandidateGenerator should not preFilter this token. If postFilter
and preFilter are the same analyzer instance, it would fail with:
"TokenStream contract violation: close() call missing"
This is a forward port of @nomoa's #12670
Today we try to detect if there is an index existing in the directory
and if not we create one. This can be tricky and error prone since we
rely on the filesystem without taking the context into account when the
engine gets created. We know in all situations whether the index should be created,
so we can just use this information and rely on the lucene index writer to barf
if we hit a situation where we can't append to an index although we should.
Today we fail to throw / rethrow a recovery exception if it happens during
the finalization of phase 1 when the source files are not affected. Even worse,
this can cause some data loss if the reason for this exception is a failure to
delete a corruption marker or a similar pre-existing corruption, since we continue
with the recovery and mark the target shard as started, which will in turn open
an engine with an empty index.
This makes the output of EsThreadPoolExecutor#toString less pretty, but
we no longer have funky hacks that rely on the specific format of the
toString produced by ThreadPoolExecutor, which isn't part of its API and
could change with any JVM version and break the output.
Improving the toString allows for nicer error reporting. Also cleaned up
the way that EsRejectedExecutionException notices that it was rejected
from a shutdown thread pool. I left javadocs about how it's not 100% correct
but good enough for most uses.
The improved toString on EsThreadPoolExecutor means every one of them needs
a name. In most cases the name to use is obvious. In tests I use the name
of the test method, and in real thread pools I use the name of the thread
pool. In non-ThreadPool executors I use the thread's name.
Closes#9732
In order to support the URL notation including a user/pass combination
(like http://user:pass@host/plugin.zip) the auth info needs to be added
manually.
We are currently splitting the parse() method in the query parsers into a
query parsing part (fromXContent(QueryParseContext)) and a new method that
generates the lucene query (toQuery(QueryParseContext)). At some point we
would like to be able to execute these two steps in different phases, one
on the coordinating node and one on the shards.
This PR tries to anticipate this by renaming the existing QueryParseContext
to QueryShardContext to make it clearer that this context object will provide
the information needed to generate the lucene queries on the shard. A new
QueryParseContext is introduced and all the functionality needed for parsing
the XContent in the request is moved there (parser, helper methods for parsing
inner queries).
While the refactoring is not finished, the new QueryShardContext will expose the
QueryParseContext via the parseContext() method. Also, for the time being the
QueryParseContext will contain a link back to the QueryShardContext it wraps, which
will eventually be deleted once the refactoring is done.
Closes#12527
Relates to #10217
Today this is "unofficial" as conf/scripts, but some people
want to share scripts across different nodes and so on. Because
they cannot configure it, they are forced to use dirty hacks
like symbolic links, which isnt going to work: we aren't going
to recursively scan conf/ and add permissions to all link targets
underneath it, thats crazy.
I really hate adding yet another configuration knob here, but
users resorting to using symlinks are going to be frustrated,
and do things in a more insecure way.
The URL to download the main elasticsearch plugins did not match
what the S3 wagon is supposed to write to.
In addition this PR adds support for snapshots to access the
snapshot S3 bucket, so we can possibly download snapshot versions
of plugins.
The format of the URLs stems from #12270
Closes#12632
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices as well as restoring such indices from a snapshot into already running cluster.
Closes#11857
Currently when we delete files belonging to deleted snapshots we issue one delete command to the underlying snapshot store at a time. Some repositories can benefit from bulk deletes of multiple files.
Closes#12533
This method syncs the translog unless it's already synced. If the engine
is already closed we are guaranteed to be synced already, such that we can just
ignore this exception.
Closes#12603
A change was added recently that uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimezone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
The Streams.copyTo(String|Bytes)FromClasspath() methods resolve resources using the org.elasticsearch.io.Streams classloader. This is fine in elasticsearch core and when running tests, but if used in a plugin this can lead to FileNotFoundExceptions at runtime because plugins are loaded in a dedicated classloader.
Most of the abstract base test classes we have were previously @Ignored.
However, there were also some other tests ignored. Having two ways to
quiet tests is confusing, and clearly it has caused some tests
to get lost in the fold.
This change moves all base test classes to use the "TestCase" suffix,
which is not picked up by the test class name pattern. It also removes
@Ignore from (almost) all tests, and adds it to forbidden apis.
And since we were renaming, I shortened base test class names to use
"ES" instead of "Elasticsearch". I type this a lot of times a day,
and I have heard others express a similar desire for a shorter name.
closes#10659
Now that integ tests are moved into `mvn verify`, we don't really have
a need for @Slow, and especially not @Integration. This removes
uses of the first, and completely removes uses of the latter.
Tasks can be registered with a timeout, which runs as a task in a separate
threadpool. The idea is that the timeout runner cancels the main task when
the time is out, and the timeout runner is cancelled when the main task
starts executing. However, the following statement:
```java
timeoutFuture = timer.schedule(new Runnable() {
@Override
public void run() {
if (remove(TieBreakingPrioritizedRunnable.this)) {
runAndClean(timeoutCallback);
}
}
}, timeValue.nanos(), TimeUnit.NANOSECONDS);
```
is not atomic: the removal task is first started, and then the (volatile)
variable is assigned. As a consequence, there is a short window that allows
a timeout task to wait until the time is out even if the task is already
completed.
See http://build-us-00.elastic.co/job/es_core_17_centos/496/ for an example of
such a failure.
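One hedged way to close the window is to re-check after the assignment, so whichever side loses the race still cancels the timeout (a sketch, not the actual fix):
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

class TimeoutGuard {
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean mainTaskStarted = new AtomicBoolean();
    private volatile ScheduledFuture<?> timeoutFuture;

    void scheduleTimeout(Runnable onTimeout, long nanos) {
        timeoutFuture = timer.schedule(onTimeout, nanos, TimeUnit.NANOSECONDS);
        if (mainTaskStarted.get()) {
            timeoutFuture.cancel(false); // the main task won the race before the assignment
        }
    }

    void onMainTaskStart() {
        mainTaskStarted.set(true);
        ScheduledFuture<?> future = timeoutFuture; // may still be null during the race
        if (future != null) {
            future.cancel(false);
        }
    }
}
```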
Previously only the first aggregation in a buckets_path was checked to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path.
Closes#12360
It's not going to work: it's blocked by the security policy
and will just add a confusing SecurityException to the mix, and
bogusly give an exit status of 0 when in fact something bad happened.
Finally, if ES can't start up, it is a serious problem, and there is
no sense in hiding the reason why: deliver the full stack trace.
The repository verification process should create a subdirectory to make sure we check permission of newly created directories in case elasticsearch processes on different nodes are running using different uids and creating blobs with incompatible permissions.
Closes#11611
Previously we issued a reroute when a node went over the high watermark
in order to move shards away from the node. This change tracks nodes
that have previously been over the high or low watermarks and issues a
reroute when the node goes back underneath the watermark.
This allows shards that may be unassigned to be assigned back to a node
that was previously over the low watermark but no longer is.
Resolves#12422
As windows has different line endings and this has
already been fixed by another test, the method has been
moved into CliToolTestCase.
In addition one test has been removed, as it was redundant.
In order to ensure we have the same experience across operating systems
and shells, this commit uses the java CLI parser instead of the shell
getopt parsing to parse arguments.
This also allows support for paths which contain spaces.
Also the commons-cli dependency was upgraded to 1.3.1 and tests have been added.
Changes
* new exit code, OK_AND_EXIT, allowing a command to tell the caller to exit, as everything
went as expected (e.g. when displaying version output)
BWC breaking:
* execute() returns an ExitStatus instead of an integer, as otherwise there is no
possibility for a command to signal whether the JVM should be exited after a run.
This affects plugins that have command line tools
* -v used to be version, but is a verbose flag by default in the current CLI infra,
must be -V or --version now
* -X has been removed - the current implementation was useless anyway, as
it prefixed those properties with "es.". You should use
ES_JAVA_OPTS/JAVA_OPTS for JVM configuration
Date math index name resolution enables you to search a range of time-series indices, rather than searching all of your time-series indices and filtering the results or maintaining aliases. Limiting the number of indices that are searched reduces the load on the cluster and improves execution performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.
This adds an `ExpressionResolver` implementation that is responsible for resolving date math expressions in index names. This resolver is evaluated before wildcard expressions are evaluated.
The supported format is `<static_name{date_math_expr{date_format|timezone_id}}>`; date math expressions must be enclosed within angle brackets. The `date_format` is optional and defaults to `YYYY.MM.dd`. The `timezone_id` is optional too and defaults to `utc`.
The `{` character can be escaped by placing `\\` before it.
Closes#12059
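A few example expressions of that form (resolution is against the current time; the comments show plausible results):
```java
class DateMathIndexNames {
    // date_format defaults to YYYY.MM.dd and the timezone to utc
    String today      = "<logstash-{now/d}>";           // e.g. logstash-2015.08.12
    String twoDaysAgo = "<logstash-{now/d-2d}>";        // the daily index from two days back
    String monthly    = "<logstash-{now/M{YYYY.MM}}>";  // custom date_format in the inner braces
}
```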
When we rewrite to a MatchNoTermsQuery we were throwing out the boost, which
could lead to funky changes when the query against _all was in a
bool query.
Previously we would write the entire ByteBuffer to the stream to serialise the HDRHistogram even if it was not all needed. Now we only write the bytes that are actually written to in the ByteBuffer.
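The general ByteBuffer pattern, sketched outside of the HDRHistogram specifics:
```java
import java.nio.ByteBuffer;

class UsedBytesOnly {
    // After writing, position() is the number of bytes produced; copy just those
    // instead of the buffer's full capacity.
    static byte[] usedBytes(ByteBuffer buffer) {
        byte[] out = new byte[buffer.position()];
        buffer.flip();      // limit = old position, position = 0
        buffer.get(out);    // read exactly the written bytes
        return out;
    }
}
```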
We currently test the toQuery method by comparing its return value with the output of createExpectedQuery. This seemed at first like a good deep test for the lucene generated queries, but ends up causing a lot of code duplication between tests and prod code, which smells like this test is not that useful after all.
This commit removes the requirement for createExpectedQuery, we test a bit less the lucene generated queries, but we do still have a testToQuery method in BaseQueryTestCase. This test acts as smoke test to verify that there are no issues with our toQuery method implementation for each query: it verifies that two equal queries should result in the same lucene query. It also verifies that changing the boost value of a query affects the result of the toQuery.
Part of the previous createExpectedQuery can be moved to assertLuceneQuery using normal assertions, though we only do it where it makes sense.
This commit also adds support for boost and queryName to SpanMultiTermQueryParser and SpanMultiTermQueryBuilder, plus it removes no-op setters for queryName and boost from QueryFilterBuilder. Instead we simply declare in our test infra that the query filter doesn't support boost and queryName, and we disable tests around these fields in that specific case. Boost and queryName support is also moved down to the QueryBuilder interface.
Closes#12473
I suggest updating the BulkProcessor so that it prevents users from building a BulkProcessor with a null client. If the client is null, then the class should throw an exception with a message that describes the cause of the exception. Earlier today, when I passed a null client to the builder() method, I later received an exception, and when I attempted to get the exception's message in my afterBulk() listener (e.g. e.getMessage()), the message was null. It took me a while to pinpoint the cause of the problem.
This can happen in two ways:
1. The _all field is disabled.
2. There are documents in the index, the _all field is enabled, but there are
no fields in any of the documents.
In both of these cases we now rewrite the query to a MatchNoDocsQuery which
should be safe because there isn't anything to match.
Closes#12439
When a node discovers shard content on disk which isn't used, we reach out to all other nodes that are supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies, *and* no cluster state change has happened in the meanwhile, do we go and delete the shard folder.
Currently, after removing a shard, the IndicesStore checks with the indices services whether the index has no more active shards, and if so, it tries to delete the entire index folder (unless on the master node, where we keep the index metadata around). This is wrong, as both the check and the protections in IndicesServices.deleteIndexStore only make sure that there isn't any shard *in use* from that index. However, it may be that we erroneously delete other unused shard copies on disk, without the proper safety guards described above.
Normally, this is not a problem, as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., the shard is red), there are race conditions which can cause all shard copies to be deleted.
Instead, we should change the decision to clean up an index folder to be based on checking the index directory for being empty and containing no shards.
This change creates a proper `distribution` module in which we today have packaging for
all of our four current packages:
* zip
* tar.gz
* rpm
* deb
Licenses have moved into the distribution project as well. So have the config/ and the bin/ directory
from the core/ project.
The RPM package is now built, if rpmbuild exists.
The bats tests have been moved as well.
Also the zip distribution now executes the REST integration tests.
As Robert pointed out on #12465, it has the undesirable property of relying on
the operating system. So it would be better to use a simple rule such as
checking whether the file name starts with a dot.
When an index is opened it will not be assigned to a node, but it will also no longer have a closed state.
Before, we only checked whether an index is either closed or assigned to the data node,
and therefore the change from close->open was not written.
Some of the tests for meta data are redundant. Also, since they
somewhat test service disruptions (starting the master with an empty
data folder) we might move them to DiscoveryWithServiceDisruptionsTests.
Also, this commit adds a test for
https://github.com/elastic/elasticsearch/issues/11665
If we cache the type filter and we e.g. set its boost which is now settable on all queries, the boost will change for all subsequent queries. We should rather create a new query every time.
Major changes:
* Changed MetaData to hold alias and index lookup information in a single TreeMap instead of two separate maps.
* Moved the building of the alias / index lookup to the metadata builder.
Settings are currently parsed by looping over the tokens until an END_OBJECT token is reached. However, this does not mean that the end of
the settings stream was reached. This can occur, for example, when parsing a YAML settings file with inconsistent indentation. Currently
in this case, some settings will be silently ignored. This commit forces a check that we have in fact reached the end of the settings
stream.
Closes#12382
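A minimal sketch of the guard, using Jackson directly (Elasticsearch wraps this behind its XContentParser abstraction):
```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.IOException;

class SettingsEndOfStreamCheck {
    // Call after consuming the top-level END_OBJECT: any further token means the
    // settings source is malformed (e.g. inconsistent YAML indentation).
    static void requireEndOfStream(JsonParser parser) throws IOException {
        JsonToken token = parser.nextToken();
        if (token != null) {
            throw new IllegalStateException("unexpected token [" + token + "] after the settings object");
        }
    }
}
```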
HDRHistogram has been added as an option in the percentiles and percentile_ranks aggregations. It has one option, `number_significant_digits`, which controls the accuracy and memory size of the algorithm.
Closes#8324
If we cache the type filter and we e.g. set its boost which is now settable on all queries, the boost will change for all subsequent queries. We should rather create a new query every time.
Following up on #12427, this PR does the same changes, moving null-checks from constructors
and setters in query builders to the validate() method.
PR against the query-refactoring branch
The ThrottlingAllocationDecider is responsible to limit the number of incoming/local recoveries on a node. It therefore shouldn't count shards marked as relocating which represent the source of the recovery.
Closes#12409
If there are multiple levels of filtered leaf readers, then the unwrap() method immediately returns the innermost leaf reader, thus skipping over any other filtered leaf reader that may be an instance of ElasticsearchLeafReader. By using the #getDelegate() method we can check each filter reader layer for being an instance of ElasticsearchLeafReader, so that we never skip over any wrapped filtered leaf reader and lose the shard id.
Also, a set of utility methods was introduced on ShardRouting to do various types of matching with other shard routings, giving control over what exactly should be matched (same shard id, same allocation id, all but version and shard info, etc.). This is useful here, but also prepares the ground for the change needed in #12387 (making ShardRouting.equals strict and perform exact equality).
Closes#12397
When a replica is initializing from the primary, and we find a better node that has full sync id match, it is better to cancel the existing replica allocation and allocate it to the new node with sync id match (eventually)
Today, when a user provides settings and specifies a classloader to be used, the classloader gets
dropped when we copy the settings to check for prompt entries. This change copies the classloader
when replacing the prompt placeholders and adds a test to ensure the InternalSettingsPreparer
always retains the classloader.
Closes#12340
JarHell has a low level check, but it's more of a best-effort one,
only checking if X-Compile-Target-JDK is set in the manifest. This
is the case for all lucene- and elasticsearch-generated jars, but
let's just be explicit for plugins.
The TransportSingleCustomOperationAction `prefer_local` option has been removed as it isn't worth the effort.
The TransportSingleShardAction will execute the operation on the receiving node if a concrete list doesn't provide a list of candidate shard routings to perform the operation on.
Change #12371 broke fielddata on the `_parent` field for indices created before
2.0. This pull request adds back caching of the `_parent` fielddata for indices
created before 2.0 and cleans up some related code. For instance,
DocumentTypeListener doesn't need to take care of removed mappings anymore, since
mappings can't be removed in 2.0.
The rangeQuery() method in MappedFieldType and some overwriting subtypes takes
a nullable QueryParseContext argument which is unused and can be deleted.
This is also useful for the current query parsing refactoring, since
we want to avoid passing the context object around unnecessarily.
IndicesStoreIntegrationTests.indexCleanup() tests if the shard files on disk are actually removed
after relocation. In particular it tests the following:
Whenever a node deletes a shard because it was relocated somewhere else, it first
checks if enough other copies are started somewhere else. The node sends a
ShardActiveRequest to the nodes that should have a copy.
The nodes that receive this request check if the shard is in state STARTED in which case
they respond with true. If they have the shard in POST_RECOVERY they register a cluster state
observer that checks at each update if the shard has moved to STARTED and respond with true when
this happens.
To test that the cluster state observer mechanism actually works, the latter can be triggered by
blocking the cluster state processing when a recovery starts and only unblocking it shortly
after the node receives the ShardActiveRequest.
This is more reliable than using random cluster state processing delays
because the random delays make it hard to reason about different timeouts
that can be reached.
closes#11989
Closes#12367
Squashed commit of the following:
commit 9453c411798121aa5439c52e95301f60a022ba5f
Merge: 3511a9c 828d8c7
Author: Robert Muir <rmuir@apache.org>
Date: Wed Jul 22 08:22:41 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit 3511a9c616503c447de9f0df9b4e9db3e22abd58
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:50:15 2015 -0700
Remove duplicated constant
commit 4a9b5b4621b0ef2e74c1e017d9c8cf624dd27713
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:01:57 2015 -0700
Add check that plugin must specify at least site or jvm
commit 19aef2f0596153a549ef4b7f4483694de41e101b
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:52:58 2015 -0700
Change plugin "plugin" property to "classname"
commit 07ae396f30ed592b7499a086adca72d3f327fe4c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:36:05 2015 -0400
remove test with no methods
commit 550e73bf3d0f94562f4dde95239409dc5a24ce25
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:31:58 2015 -0400
fix loading to use classname
commit 04463aed12046da0da5cac2a24c3ace51a79f799
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:24:19 2015 -0400
rename to classname
commit 9f3afadd1caf89448c2eb913757036da48758b2d
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:18:46 2015 -0700
moved PluginInfo and refactored parsing from properties file
commit df63ccc1b8b7cc64d3e59d23f6c8e827825eba87
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:08:26 2015 -0400
fix test
commit c7febd844be358707823186a8c7a2d21e37540c9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:03:44 2015 -0400
remove test
commit 017b3410cf9d2b7fca1b8653e6f1ebe2f2519257
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:58:31 2015 -0400
fix test
commit c9922938df48041ad43bbb3ed6746f71bc846629
Merge: ad59af4 01ea89a
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:37:28 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit ad59af465e1f1ac58897e63e0c25fcce641148a7
Author: Areek Zillur <areek.zillur@elasticsearch.com>
Date: Tue Jul 21 19:30:26 2015 -0400
[TEST] Verify expected number of nodes in cluster before issuing shardStores request
commit f0f5a1e087255215b93656550fbc6bd89b8b3205
Author: Lee Hinman <lee@writequit.org>
Date: Tue Jul 21 11:27:28 2015 -0600
Ignore EngineClosedException during translog fsync
When performing an operation on a primary, the state is captured and the
operation is performed on the primary shard. The original request is
then modified to increment the version of the operation as preparation
for it to be sent to the replicas.
If the request first fails on the primary during the translog sync
(because the Engine is already closed due to shadow primaries closing
the engine on relocation), then the operation is retried on the new primary
after being modified for the replica shards. It will then fail due to the
version being incorrect (the document does not yet exist but the request
expects a version of "1").
Order of operations:
- Request is executed against primary
- Request is modified (version incremented) so it can be sent to replicas
- Engine's translog is fsync'd if necessary (failing, and throwing an exception)
- Modified request is retried against new primary
This change ignores the exception where the engine is already closed
when syncing the translog (similar to how we ignore exceptions when
refreshing the shard if the ?refresh=true flag is used).
commit 4ac68bb1658688550ced0c4f479dee6d8b617777
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 22:37:29 2015 +0200
Replica allocator unit tests
First batch of unit tests to verify the behavior of replica allocator
commit 94609fc5943c8d85adc751b553847ab4cebe58a3
Author: Jason Tedor <jason@tedor.me>
Date: Tue Jul 21 14:04:46 2015 -0400
Correctly list blobs in Azure storage to prevent snapshot corruption and do not unnecessarily duplicate Lucene segments in Azure Storage
This commit addresses an issue that was leading to snapshot corruption for snapshots stored as blobs in Azure Storage.
The underlying issue is that in cases when multiple snapshots of an index were taken and persisted into Azure Storage, snapshots subsequent
to the first would repeatedly overwrite the snapshot files, effectively rendering all snapshots except the final one useless.
The root cause is String concatenation involving null. In particular, to list all of the blobs in a snapshot directory in
Azure the code would use the method listBlobsByPrefix where the prefix is null. In the listBlobsByPrefix method, the path keyPath + prefix
is constructed. However, per 5.1.11, 5.4 and 15.18.1 of the Java Language Specification, the reference null is first converted to the string
"null" before performing the concatenation. This leads to no blobs being returned and therefore the snapshot mechanism would operate as if
it were writing the first snapshot of the index. The fix is simply to check if prefix is null and handle the concatenation accordingly.
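To make the concatenation pitfall concrete, here is a minimal sketch (the keyPath value is hypothetical, not taken from the plugin):
```java
// Demonstrates why listing with a null prefix returned no blobs:
// concatenating null yields the string "null" per JLS 15.18.1.
public class NullPrefix {
    public static void main(String[] args) {
        String keyPath = "snapshots/my-index/"; // hypothetical base path
        String prefix = null;
        String broken = keyPath + prefix;       // "snapshots/my-index/null"
        String fixed = keyPath + (prefix == null ? "" : prefix);
        System.out.println(broken);
        System.out.println(fixed);              // "snapshots/my-index/"
    }
}
```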
Upon fixing this issue so that subsequent snapshots would no longer overwrite earlier snapshots, it was discovered that the snapshot metadata
returned by the listBlobsByPrefix method was not sufficient for the snapshot layer to detect whether or not the Lucene segments had already
been copied to the Azure storage layer in an earlier snapshot. This led the snapshot layer to unnecessarily duplicate these Lucene segments
in Azure Storage.
The root cause is known behavior in the CloudBlobContainer.getBlockBlobReference method of the Azure API. Namely, this method
does not fetch blob attributes from Azure. As such, the lengths of all the blobs appeared to the snapshot layer to be of length zero and
therefore they would compare as not equal to any new blobs that the snapshot layer is going to persist. To remediate this, the method
CloudBlockBlob.downloadAttributes must be invoked. This will fetch the attributes from Azure Storage so that a proper comparison of the
blobs can be performed.
Closes elastic/elasticsearch-cloud-azure#51, closes elastic/elasticsearch-cloud-azure#99
commit cf1d481ce5dda0a45805e42f3b2e0e1e5d028b9e
Author: Lee Hinman <lee@writequit.org>
Date: Mon Jul 20 08:41:55 2015 -0600
Unit tests for `nodesAndVersions` on shared filesystems
With the `recover_on_any_node` setting, these unit tests check that the
correct node list and versions are returned.
commit 3c27cc32395c3624f7c794904d9ea4faf2eccbfb
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 14:15:59 2015 -0400
don't fail junit4 integration tests if there are no tests.
instead fail the failsafe plugin, which means the external cluster will still get shut down
commit 95d2756c5a8c21a157fa844273fc83dfa3c00aea
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 17:16:53 2015 +0200
Testing: Fix help displaying tests under windows
The help files are using a unix based file separator, whereas
the test relies on the help being based on the file system separator.
This commit fixes the test to remove all `\r` characters before
comparing strings.
The test has also been moved into its own CliToolTestCase, as it does
not need to be an integration test.
commit 944f06ea36bd836f007f8eaade8f571d6140aad9
Author: Clinton Gormley <clint@traveljury.com>
Date: Tue Jul 21 18:04:52 2015 +0200
Refactored check_license_and_sha.pl to accept a license dir and package path
In preparation for the move to building the core zip, tar.gz, rpm, and deb as separate modules, refactored check_license_and_sha.pl to:
* accept a license dir and path to the package to check on the command line
* to be able to extract zip, tar.gz, deb, and rpm
* all packages except rpm will work on Windows
commit 2585431e8dfa5c82a2cc5b304cd03eee9bed7a4c
Author: Chris Earle <pickypg@users.noreply.github.com>
Date: Tue Jul 21 08:35:28 2015 -0700
Updating breaking changes
- field names cannot be mapped with `.` in them
- fixed asciidoc issue where the list was not recognized as a list
commit de299b9d3f4615b12e2226a1e2eff5a38ecaf15f
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 13:27:52 2015 +0200
Replace primaryPostAllocated flag and use UnassignedInfo
There is no need to maintain additional state as to whether a primary was allocated post api creation on the index routing table; we hold all this information already in the UnassignedInfo class.
closes#12374
commit 43080bff40f60bedce5bdbc92df302f73aeb9cae
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:45:05 2015 +0200
PluginManager: Fix bin/plugin calls in scripts/bats test
The release and smoke test python scripts used to install
plugins in the old fashion.
Also the BATS testing suite installed/removed plugins in that
way. Here the marvel tests have been removed, as marvel currently
does not work with the master branch.
In addition documentation has been updated as well, where it was
still missing.
commit b81ccba48993bc13c7678e6d979fd96998499233
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 11:37:50 2015 +0200
Discovery: make sure NodeJoinController.ElectionCallback is always called from the update cluster state thread
This is important for correct handling of the joining thread. This causes assertions to trip in our test runs. See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11653/ as an example
Closes#12372
commit 331853790bf29e34fb248ebc4c1ba585b44f5cab
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 15:54:36 2015 +0200
Remove left over no commit from TransportReplicationAction
It asks to double check thread pool rejection. I did and don't see problems with it.
commit e5724931bbc1603e37faa977af4235507f4811f5
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:31:57 2015 +0200
CliTool: Various PluginManager fixes
The new plugin manager parser was not called correctly in the scripts.
In addition the plugin manager now creates a plugins/ directory in case
it does not exist.
Also the integration tests called the plugin manager in the deprecated way.
commit 7a815a370f83ff12ffb12717ac2fe62571311279
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 13:54:18 2015 +0200
CLITool: Port PluginManager to use CLITool
In order to unify the handling and reuse the CLITool infrastructure
the plugin manager should make use of this as well.
This obsoletes the -i and --install options but requires the user
to use `install` as the first argument of the CLI.
This is basically just a port of the existing functionality, which
is also the reason why this is not a refactoring of the plugin manager,
which will come in a separate commit.
commit 7f171eba7b71ac5682a355684b6da703ffbfccc7
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Tue Jul 21 10:44:21 2015 +0200
Remove custom execute local logic in TransportSingleShardAction and TransportInstanceSingleOperationAction and rely on transport service to execute locally. (forking thread etc.)
Change TransportInstanceSingleOperationAction to have shardActionHandler to, so we can execute locally without endless spinning.
commit 0f38e3eca6b570f74b552e70b4673f47934442e1
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 17:36:12 2015 -0700
More readMetadata tests and pickiness
commit 880b47281bd69bd37807e8252934321b089c9f8e
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 14:42:09 2015 -0700
Started unit tests for plugin service
commit cd7c8ddd7b8c4f3457824b493bffb19c156c7899
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 07:21:07 2015 -0400
fix tests
commit 673454f0b14f072f66ed70e32110fae4f7aad642
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 06:58:25 2015 -0400
refactor pluginservice
Moving the query building functionality from the parser to the builders
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Moving the query building functionality from the parser to the builders
new doToQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#12365
Moving the query building functionality from the parser to the builders
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#12182
This dependency was used in order for mapping updates that change the fielddata
format to take effect immediately. And the way it worked was by clearing the
cache of fielddata instances that were already loaded. However, we do not need
to cache the already loaded (logical) fielddata instances, they are cheap to
regenerate. Note that the fielddata _caches_ are still kept around so that we
don't keep on rebuilding costly (physical) fielddata values.
The `_index` field is now a completely virtual field thanks
to #12027. It is no longer necessary to index the actual value
of the index name.
closes#12329
The index name was passed along through many levels of mapping parsing,
just so that it could be used for _index. However, the index name
is really metadata that should exist alongside things like type and
id in SourceToParse.
This change moves index name to SourceToParse, and eliminates it from the
DocumentMapperParser.
Today we grant read+write+delete access to any files underneath the home.
But we have to remove this if we want to have improved security of the files
underneath elasticsearch.
Fold ignored unassigned shards into an UnassignedShards class and have simpler handling of them. Also remove the trappy way of adding ignored unassigned shards directly to the list today, and have dedicated methods for it.
This change also removes the useless moving of unassigned shards to the end, since anyhow we first sort those unassigned shards, and second, we now have persistent "store exceptions" that should not cause "dead letter" shard allocation.
Break it into more manageable code by separating allocating primaries and allocating replicas. Start adding basic unit tests for the primary shard allocator.
Our thread pools have support for timeout on a task. To support this, a special background task is scheduled to run at the timeout. That background task fires, checks if the main task is still in the executor queue and then cancels it if needed. Currently we schedule this background task before adding the main task to the queue. If the timeout is very small (in tests we often use numbers like 2 ms) the background task can fire before the main one is added to the queue, causing the timeout to be missed.
See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11780/testReport/junit/org.elasticsearch.cluster/ClusterServiceTests/testTimeoutUpdateTask/
Closes#12319
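A minimal sketch of the fixed ordering (class and variable names are assumptions, not the actual ES thread pool code):
```java
import java.util.concurrent.*;

// Enqueue the main task first, then arm the timeout check; arming the
// check first lets a very small timeout fire before the task is queued.
public class TimeoutOrder {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        Runnable task = () -> System.out.println("task ran");

        queue.add(task);            // step 1: main task is visible in the queue
        scheduler.schedule(() -> {  // step 2: timeout check can now find it
            if (queue.remove(task)) {
                System.out.println("timed out, task cancelled");
            }
        }, 2, TimeUnit.MILLISECONDS);
        scheduler.shutdown();
    }
}
```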
On top of that:
1) A relocation target shard's allocation id is changed to include the allocation id of the source shard under relocatingId (similar to shard routing semantics)
2) The logic around state change for finalizing shard relocation is simplified - one simply starts the target shard (we previously had unused logic around relocating state)
Closes#12299
While the GeoJSON spec does say a polygon is represented as an array of LinearRings (where a LinearRing is defined as a 'closed' array of points), the coerce parameter provides users with flexibility to have ES automatically close polygons. This addresses situations like those integrated with twitter (where GeoJSON polygons are not closed) such that our users do not have to write extra code to close the polygon. This code change adds the optional coerce parameter to the GeoShapeFieldMapper.
closes#11131
Currently this target is "yet another way" to run elasticsearch,
which we can't maintain. It also has the problem that it doesn't
ensure it's running on the latest source code, doesn't configure
any scratch space properly, won't work with securitymanager, the list
goes on.
Even if we made it work, it would break every day, since it's untested.
Instead, `mvn package -Drun -DskipTests` will run packaging, and then
start up bin/elasticsearch (like integration tests, but in foreground).
It also enables a debugger socket on port 8000, for people that like
IDE debuggers and not System.out.println.
It's a little slower to get started because of all the shading/RPM/DEB
building going on in `package`, but that is just what it is right now
until that stuff is moved out.
failsafe uses surefire, which sucks. It also means integ tests act alien right now.
I would rather have the consistency, e.g. things formatted the same way, running integ tests under security manager, etc.
Store information reports on which nodes shard copies exist, the shard
copy version, indicating how recent they are, and any exceptions
encountered while opening the shard index or from earlier engine failure.
closes#10952
When adding a script to the Groovy classloader, the script name is used
as the class identifier in the classloader. This means that in order not
to break JVM Classloader convention, that script must always be
available by that name. As a result, modifying a script with the same
content over and over causes it to be loaded with a different name (due
to the incrementing integer).
This is particularly bad when something like chef or puppet replaces the
on-disk script file with the same content over and over every time a
machine is converged.
This change makes the script name the SHA1 hash of the script itself,
meaning that replacing a script with the same text will use the same
script name.
Resolves#12212
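A hedged sketch of deriving such a name with plain JDK classes (this is the general idea, not the exact ES implementation):
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// The same script source always hashes to the same name, so replacing a
// file with identical content no longer creates a new classloader entry.
public class ScriptName {
    static String nameOf(String source) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(source.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(nameOf("doc['price'].value * 2"));
    }
}
```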
Just like specifying `?preference=_primary`, this adds the ability to
specify `?preference=_replica` or `?preference=_replica_first` on
requests that support it.
Resolves#12222
This action is a liveness test added in #8763. It should be excluded, just like the fault detection logic, or things become overly chatty.
Closes#12291
Most of our tests call assertSearchResponse which checks whether all shards
were successful. However this is usually not necessary given that all shards
that received documents should be available given the way our indexing works.
The only shards that might not be available are those that did not index
any documents. So removing this assertion would allow us to remove most
ensureGreen/ensureYellow calls while still being able to assert on the content
of the search response since all data have been taken into account.
For now I only removed the ensureYellow/Green calls from SearchQueryTests in
order to not de-stabilize the build, but eventually we should remove most of
them.
Today, the unicast zen test configuration will try to find an open port starting at the internal test cluster's
base port and continuing for 1000 ports. The internal test cluster class assigns a port range of 100 ports
to each JVM. This means that the unicast zen test configuration will try ports in the range for another JVM
and can lead to port conflicts. This change uses the same value for both so that the unicast configuration
does not go into another JVM's port range.
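Illustration of the arithmetic (the constants are assumptions for the sketch, not the real test cluster values):
```java
// Each JVM owns a block of 100 ports; unicast candidates must stay inside it.
public class PortRange {
    static final int PORTS_PER_JVM = 100;      // assumed block size

    public static void main(String[] args) {
        int jvmOrdinal = 2;                    // hypothetical JVM id
        int first = 9300 + jvmOrdinal * PORTS_PER_JVM;
        int last = first + PORTS_PER_JVM - 1;  // scanning past this hits another JVM
        System.out.println("unicast candidates: " + first + "-" + last);
    }
}
```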
Add a unique allocation id for a shard, helping to uniquely identify a specific allocation taking place to a node.
A special case is relocation, where a transient relocationId is kept around to make sure the target initializing shard (when using RoutingNodes) uses it for its id; when relocation is done, the transient relocationId becomes its actual id.
closes#12242
If an index name is reused but a leftover shard still exists on any node
we fail repeatedly to allocate the shard since we now check the index UUID
before reusing data. This commit allows the shard to recover even if such a
leftover shard exists, by deleting the leftover shard.
Closes#10677
During master election each node pings in order to discover other nodes and validate the liveness of existing nodes. Based on this information the node either discovers an existing master or, if enough nodes are found (based on `discovery.zen.minimum_master_nodes`), a new master will be elected.
Currently, the node that is elected as master updates the cluster state to indicate the result of the election. Other nodes will submit a join request to the newly elected master node. Instead of immediately processing the election result, the elected master
node should wait for the incoming joins from other nodes, thus validating that the election result is properly applied. As soon as enough nodes have sent their join requests (based on the `minimum_master_nodes` setting) the cluster state is modified.
Note that if `minimum_master_nodes` is not set, this change has no effect.
Closes#12161
Require urls for the URL repository to be listed in the repositories.url.allowed_urls setting. This change ensures that only authorized URLs can be accessed by elasticsearch.
The previous strategy (target/xxx + .m2/repository) is obviously broken for multi-module
builds.
Includes hack for crazy jython, which "finds its own jar" then looks for a Lib/ beside it
Lucene deprecated this in 4.0 and we only try best effort to support it.
Folks should only use edit distance rather than some length based
similarity. Yet the formula is simple enough such that users can
still do it in the client if they really need to.
Closes#10638
The old script syntax has been removed from the Java API but the metrics aggregations were missed. This change removes the old script API from the ValuesSourceMetricsAggregationBuilder and removes the relevant test methods for the metrics aggregations.
Today we have an intermediate hierarchy for shard and index exceptions
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`
Change the default delayed allocation timeout from 0 (no delayed allocation) to 1m. The value came from a test of having a node with 50 shards being indexed into (so beefy translog requiring flush on shutdown), then shutting it down and starting it back up and waiting for it to join the cluster. This took, on a slow machine, about 30s.
The value is conservatively low and does not try to address a virtual machine / OS restart for now, in order not to have the effect of a node going away and users being concerned that shards are not being allocated to the rest of the cluster as a result. The setting can always be changed in order to increase the delayed allocation if needed.
closes#12166
The MetaData.clusterUUID is guaranteed to be unique across clusters (which may or may not have the same human readable cluster name) and is therefore handy.
Closes#11832
As explained in #11831, we currently have uuid fields on the cluster state, meta data and index metadata. The latter two are persistent across changes and are effectively used as a persistent uuid for this cluster and a persistent uuid for an index. The first (ClusterState.uuid) is ephemeral and changes with every change to the cluster state. This is confusing.
We settled on having the following, new names:
-> ClusterState.uuid -> stateUUID (transient)
-> MetaData.uuid -> clusterUUID (persistent)
-> IndexMetaData.uuid -> indexUUID (persistent).
Closes#11914
Closes#11831
Today we throw ElasticsearchException if we can't lock the index. This can cause
problems since some places where we have logic to deal with IOException on shard
deletion won't schedule a retry if we can't lock the index dir for removal. This
is the case on shadow replicas for instance if a shared FS is used. The result
of this is that the delete of an index is never acked.
A change in #12116 introduces closing / cleaning of search ctx even if
the index service was closed due to a relocation of its last shard. This
is not desired since in that case it's fine to serve the pending requests from
the relocated shard. This commit adds an extra check to ensure that the index is
either removed (delete) or closed via API.
The term query parser was too lenient during parsing and allowed specifying
more than one field, even though it is expected to filter on a single field only.
This commit returns an exception if more than one field has been specified.
Closes#12184
Modified ScriptEngineService to pass in a CompiledScript object
with newly added name and type member variables.
This can in turn be used to give better scripting error messages
with the type of script used and the name of the script.
Required slight modifications to the caching mechanism.
Note that this does not enforce good behavior in that plugins will
have to write exceptions that also output the name of the script
in order to be effective. There was no way to wrap the script
methods in a try/catch block properly further up the chain because
many have script-like objects passed back that can be run at a
later time.
closes#6653
closes#11449
Changes in a nutshell:
* All expression logic is now encapsulated by ExpressionResolver interface.
* MetaData#convertFromWildcards() gets replaced by WildcardExpressionResolver.
* All of the indices expansion methods are being moved from MetaData class to the new IndexNameExpressionResolver class.
* All single index expansion optimisations are removed.
The logic for resolving a concrete index name from an expression has been moved from MetaData to IndexExpressionResolver. The logic has been cleaned up and simplified where possible without breaking bwc.
Also the notion of aliasOrIndex has been changed to index expression.
The IndexNameExpressionResolver translates index name expressions into concrete indices. The list of index name expressions is first delegated to the known ExpressionResolvers. An ExpressionResolver is responsible for translating, if possible, an expression into another expression (possibly but not necessarily concrete indices or aliases), otherwise the expressions are left untouched. Concretely this means converting wildcard expressions into concrete indices or aliases, but in the future other implementations could convert expressions based on different rules.
To prevent many overloading of methods, DocumentRequest extends now from IndicesRequest. All implementation of DocumentRequest already did implement IndicesRequest indirectly.
This allows the creation of the RPM artifact as part of the
maven package phase. The result of this is that we get checksum and
name correction for free as it's all built and installed into the m2
repository. This also publishes the RPM together with the .deb to the mvn
mirror.
Note: this will only build the RPM as part of the package phase if
`-Dpackage.rpm=true` since the binaries to build the RPM are not
available on all platforms.
Today we only clear search contexts for deleted indices. Yet, we should
do the same for closed indices to ensure they can be reopened quickly.
Closes#12116
A method for the new Script API was missing in the ValuesSourceMetricsAggregationBuilder. This change adds the missing method and deprecates the old Script API methods.
When a node sends a shard started message to the master, the master goes through the routing table looking for the shard to start. At the moment we validate the indexUUID, the node the shard is assigned to and the fact that the shard is initializing. This check goes wrong if a relocating replica shard finishes recovery just at the moment the source node leaves the cluster. In this case the master will cancel the recovery and will likely assign a new initializing replica to the same target node. In this case the message from the relocation recovery can activate the new replica wrongfully.
Also, the logic for deciding whether an incoming shard started message will be applied was split between ShardStateAction and the AllocationService.
This commit does the following:
1) Let ShardStateAction only filter basic stuff like index existence and indexUUID.
2) Move the trickier shard started matching logic to the AllocationService and make it stricter
3) Unify ShardStateAction filtering logic for both shard started and shard failed.
4) Add unit tests for all of the above.
For an example test failure see: http://build-us-00.elastic.co/job/es_core_16_centos/388/
Closes#11999
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usage of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch but to simplify things I wanna
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we created a 2.0 branch.
Closes#12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes#11980
This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names
for our official plugins. This restores the old behavior prior to #11805.
Closes#12143
Today it will remove all permissions and only set execute bit:
---x--x--x
Instead we should preserve existing permissions, and just add
read and execute to whatever is there.
Closes#12142
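A sketch of the additive approach with java.nio (the path is hypothetical; the actual change lives in the plugin manager):
```java
import java.nio.file.*;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

// Read the existing permission set and add read+execute bits to it,
// instead of replacing it with a fixed --x--x--x mask.
public class AddExec {
    public static void main(String[] args) throws Exception {
        Path script = Paths.get("plugins/example/bin/example.sh"); // hypothetical
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(script);
        perms.add(PosixFilePermission.OWNER_READ);
        perms.add(PosixFilePermission.OWNER_EXECUTE);
        perms.add(PosixFilePermission.GROUP_READ);
        perms.add(PosixFilePermission.GROUP_EXECUTE);
        perms.add(PosixFilePermission.OTHERS_READ);
        perms.add(PosixFilePermission.OTHERS_EXECUTE);
        Files.setPosixFilePermissions(script, perms);
    }
}
```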
This change allows custom settings to be passed to the client for the external test cluster,
which is necessary when additional settings need to be passed to the client in order to
properly communicate with the external test cluster.
Before inner_hits existed, named queries had support to also verify if inner queries of a nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits got released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and on top of this also adds support for parent/child inner hits.
We allow setting the node's name a few different ways: the `name` system
property, the setting `name`, and the setting `node.name`. There is an order
of preference to these settings that gets applied, which can copy values from the
system property or `node.name` setting to the `name` setting. When setting
only `node.name` to one of the prompt placeholders, the user would be
prompted twice as the value of `node.name` is copied to `name` prior to
prompting for input. Additionally, the value entered by the user for `node.name`
would not be used and only the value entered for `name` would be used.
This fix changes the behavior to only prompt once when `node.name` is set and
`name` is not set. This is accomplished by waiting until all values have been
prompted and replaced, then the logic for determining the node's name is
executed.
Closes#11564
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
Closes#11527
Fixed documentation since the default rewrite method for fuzzy queries is to
select top terms, fixed usage of the fuzzy rewrite method, and removed unused
`rewrite` parameter.
Close#6932
We still want one mapping with an object with two inner fields for e.g. testing one
code path in ExistsQueryBuilder. Using the dot notation for field names was
forbidden with recent changes from master coming in.
This rewrite method is interesting because it computes scores as if all terms
had the same frequencies, which avoids disappointments with ranking when a fuzzy
query ranks typos first given that they are less frequent than the correct term.
Today shards are responsible for producing one sort value per document, which
is later used on the coordinating node to resolve the global top documents.
However, this is problematic on string fields with
`missing: _first, order: desc` or `missing: _last, order: asc` given that there
is no such thing as a string that compares greater than any other string. Today
we use a string containing a single code point which is the maximum allowed code
point but this is a hack: instead we should inform the coordinating node that
the document had no value and let it figure out how it should be sorted
depending on whether missing values should be sorted first or last.
Close#9155
Simplify and consolidate ShardRouting construction. Make sure that there is really only one place it gets created, when a shard is first created in unassigned state, and from there on, it is either copy constructed or built internally as a target for relocation.
This change helps make sure that within our codebase the data carried by a ShardRouting is not lost as the shard goes through transitions, and can help simplify the addition of more data on it (like uuid).
For testing, a centralized TestShardRouting allows to create testable versions of ShardRouting, which need not be as strict as the non-test codebase. This can be cleaned up more later on, but it is a good start.
closes#12125
This makes FuzzyQueryBuilder and Parser take an Object as a value using the
same logic as termQuery, so that numbers, dates or Strings would be properly
handled.
Relates #11865
Closes#12020
This converts the tracking of jars and classes in JarHell to use
Path objects, instead of URL. This makes for nicer printing
of the underlying path when an error does occur.
This commit defaults fuzzy_transpositions on fuzzy queries to true. This means that by default, tranpositions will now count as a single
edit.
Closes#9278
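For reference, a sketch of what the default means at the Lucene level (constructor signature as in Lucene 4/5; the field and term are made up):
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;

// With transpositions enabled, swapping two adjacent characters
// ("aple" -> "pale") counts as one edit instead of two.
public class Transpositions {
    public static void main(String[] args) {
        Term term = new Term("body", "pale");  // hypothetical field/term
        FuzzyQuery query = new FuzzyQuery(term, 1 /* maxEdits */,
                0 /* prefixLength */, 50 /* maxExpansions */,
                true /* transpositions */);
        System.out.println(query);
    }
}
```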
the failure type. This change marks the engine as corrupted only when the failure
is caused by an actual index corruption. When an engine is failed for other
reasons, the engine is only closed without removing the shard state.
closes#11788
Plugin Manager can now use another simplified form when a user wants to install an official plugin hosted at the elasticsearch download service.
The form we use is:
```sh
bin/plugin install pluginname
```
As plugins now share the same version as elasticsearch, we can automatically guess the exact current version of the plugin manager script.
Also, download service will now use `/org.elasticsearch.plugins/pluginName/pluginName-version.zip` URL path to download a plugin.
If the older form is provided (`user/plugin/version` or `user/plugin`), we will still use:
* elasticsearch download service at `/user/plugin/plugin-version.zip`
* maven central with groupId=user, artifactId=plugin and version=version
* github with user=user, repoName=plugin and tag=version
* github with user=user, repoName=plugin and branch=master if no version is set
Note that community plugin providers can use other download services by using `--url` option.
If you try to use the new form with a non core elasticsearch plugin, the plugin manager will reject
it and will give you all known core plugins.
```
Usage:
-u, --url [plugin location] : Set exact URL to download the plugin from
-i, --install [plugin name] : Downloads and installs listed plugins [*]
-t, --timeout [duration] : Timeout setting: 30s, 1m, 1h... (infinite by default)
-r, --remove [plugin name] : Removes listed plugins
-l, --list : List installed plugins
-v, --verbose : Prints verbose messages
-s, --silent : Run in silent mode
-h, --help : Prints this help message
[*] Plugin name could be:
elasticsearch-plugin-name for Elasticsearch 2.0 Core plugin (download from download.elastic.co)
elasticsearch/plugin/version for elasticsearch commercial plugins (download from download.elastic.co)
groupId/artifactId/version for community plugins (download from maven central or oss sonatype)
username/repository for site plugins (download from github master)
Elasticsearch Core plugins:
- elasticsearch-analysis-icu
- elasticsearch-analysis-kuromoji
- elasticsearch-analysis-phonetic
- elasticsearch-analysis-smartcn
- elasticsearch-analysis-stempel
- elasticsearch-cloud-aws
- elasticsearch-cloud-azure
- elasticsearch-cloud-gce
- elasticsearch-delete-by-query
- elasticsearch-lang-javascript
- elasticsearch-lang-python
```
AbstractFieldMapper is the only direct base class of FieldMapper.
This change moves all AbstractFieldMapper functionality into
FieldMapper, since there is no need for 2 levels of abstraction.
Each scroll on a scan causes a query to be executed. This commit adds support for these indirect queries to count against the search stats.
Additionally, this commit adds three new search stats: scroll_count, scroll_time_in_millis, and scroll_current. scroll_count tracks the
number of completed scrolls. scroll_time_in_millis tracks the total time that scrolls were held open. scroll_current tracks the number of
scrolls currently open.
Closes#9109
In testing infra, one can simulate node GCs, network issues and other problems by adding a disruption to the test cluster. Those disruptions are automatically removed after the test is done. At the moment each disruption indicates how long it will take the cluster to heal once the disruption is removed, and the test cluster waits for this amount of time. However, more often than not this is an upper bound, causing a much longer wait than needed. Instead we should push the responsibility of healing to the disruption itself, where we can be smarter about what we wait for.
Closes#12071
When using `awaitBusy`, sometimes you might not want to keep doubling the wait time between two runs indefinitely.
For example, let's say it will probably take 30 seconds to run a test.
When doubling the wait time every run, you will most likely wait longer than needed:
|iteration|ms|s|duration (ms)|duration (s)|
|---------|--|--|------------|------------|
|1|1|0.001|1|0.001|
|2|2|0.002|3|0.003|
|3|4|0.004|7|0.007|
|4|8|0.008|15|0.015|
|5|16|0.016|31|0.031|
|6|32|0.032|63|0.063|
|7|64|0.064|127|0.127|
|8|128|0.128|255|0.255|
|9|256|0.256|511|0.511|
|10|512|0.512|1023|1.023|
|11|1024|1.024|2047|2.047|
|12|2048|2.048|4095|4.095|
|13|4096|4.096|8191|8.191|
|14|8192|8.192|16383|16.383|
|15|16384|16.384|32767|32.767|
|16|32768|32.768|65535|65.535|
|17|65536|65.536|131071|131.071|
|18|131072|131.072|262143|262.143|
|19|262144|262.144|524287|524.287|
|20|524288|524.288|1048575|1048.575|
|21|1048576|1048.576|2097151|2097.151|
For example here, if the task is successful after 35 seconds, we will most likely have to wait for 32s more before the Predicate is run again.
With this patch, the maximum sleep time is now set to 1 second.
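A minimal sketch of the capped backoff (method and names assumed, not the actual test helper):
```java
import java.util.function.BooleanSupplier;

// Double the sleep between polls, but never sleep longer than one second.
public class AwaitBusy {
    static boolean awaitBusy(BooleanSupplier predicate, long maxWaitMillis)
            throws InterruptedException {
        long sleep = 1;
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (System.currentTimeMillis() < deadline) {
            if (predicate.getAsBoolean()) {
                return true;
            }
            Thread.sleep(sleep);
            sleep = Math.min(sleep * 2, 1000); // cap instead of doubling forever
        }
        return predicate.getAsBoolean();
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        System.out.println(awaitBusy(
                () -> System.currentTimeMillis() - start > 150, 5000));
    }
}
```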
This pipeline aggregation runs a script on each bucket in the parent aggregation to determine whether the bucket is kept in the final aggregation tree. If the script returns true the bucket is retained; if it returns false the bucket is dropped.
By extending AbstractQueryBuilder, EmptyQueryBuilder had setters for boost and
queryName, which defeats its original purpose of being a stand-in
singleton for empty queries. By directly implementing QueryBuilder (and
temporarily also extending ToXContentToBytes) this is prevented.
If you are using the default date or the named identifiers of dates,
the current implementation allowed reading a year with only one
digit. In order to make this more strict, this fixes a year to be at
least 4 digits. Same applies for month, day, hour, minute, seconds.
Also the new default is `strictDateOptionalTime` for indices created
with Elasticsearch 2.0 or newer.
In addition a couple of previously unexposed date formats have been exposed, as they
were mentioned in the documentation.
Closes#6158
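An illustration of strict versus lenient year parsing, using java.time purely for demonstration (Elasticsearch 2.0 itself used Joda-Time formats such as `strictDateOptionalTime`):
```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.ResolverStyle;

// A strict pattern requires the full 4-digit year and 2-digit month/day,
// rejecting inputs like "15-7-2" that lenient parsing would accept.
public class StrictDates {
    public static void main(String[] args) {
        DateTimeFormatter strict = DateTimeFormatter.ofPattern("uuuu-MM-dd")
                .withResolverStyle(ResolverStyle.STRICT);
        System.out.println(LocalDate.parse("2015-07-21", strict)); // ok
        LocalDate.parse("15-7-2", strict); // throws DateTimeParseException
    }
}
```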
Field names containing dots can cause problems. For example, @jpountz
made this recreation which causes no error, but can result in a
serialization exception if the type already exists:
https://gist.github.com/jpountz/8c66817e00a322b81f85
But this is not just a potential conflict. It also has larger problems,
since only the leaf mapper is created. The intermediate "foo" object
field would not exist if only "foo.bar" was in the mappings.
This change forbids the use of dots in field names. It also
fixes an issue with passing through the update_all_types setting,
which was always set to true whenever a type already existed (!).
I do not think we should worry about backwards compatibility here. This
should be a hard break (and added to the migration plugin).
This commit adds logic to prefer shards with higher priority
or from newer indices to be allocated first if they are unallocated post API.
This commit allows users to set `index.priority` to a non-negative integer to
prioritize index recovery for certain indices. This setting is dynamically updateable
and defaults to `0`. If two indices have the same priority this change takes the creation
date into account to prioritize shards from newer indices which is important in the time-based
indices usecase.
Closes#11787
When a bulk request fails on a Delete or Update request, the BulkItemResponse
reports an incorrect "index" operation in the response. This PR fixes this
for the case of closed indices as reported in #9821 but also for
other failures and adds tests for the two cases covered.
Closes#9821
the specialization can cause stack overflows if an exception is an
ElasticsearchWrapperException as well as an ElasticsearchException.
This commit just relies on the unwrap logic now to find the cause and only
renders the exception if it is the cause, otherwise it forwards
to the generic exception rendering.
Closes#11994
While MappedFieldType contains settings for doc values and fielddata,
AbstractFieldMapper had special logic in its constructor that
required setting these on the field type from there. This change
removes those settings from the AbstractFieldMapper constructor.
As a result, defaultDocValues(), defaultFieldType() and
defaultFieldDataType() are no longer needed.
Currently parsing inner queries can return `null` which leads to unnecessary complicated
null checks further up the query hierarchy. By introducing a special EmptyQueryBuilder
that can be used to signal that this query clause should be ignored upstream where possible,
we can avoid additional null checks in parent query builders and still allow for this query
to be ignored when creating the lucene query later.
This new builder returns `null` when calling its `toQuery()` method, so at this point
we still need to handle it, but having an explicit object for the intermediate query
representation makes it easier to differentiate between cases where inner query clauses are
empty (legal) or are not set at all (illegal).
Also added check for empty inner builder list to BoolQueryBuilder and OrQueryBuilder.
Removed setters for the positive/negative query in favour of a constructor with
mandatory non-null arguments in BoostingQueryBuilder.
Closes#11915
This commit merges the pre-existing special exception that
allowed to associate headers with exceptions and the elasticsearch
base class `ElasticsearchException`. This allows for more generic use
of exceptions where plugins can associate meta-data with any elasticsearch
base exception to control behavior etc.
This also adds a generic SecurityException to allow plugins to pass on
information based on the RestStatus.
If the version of a node is lower than the minimum supported version or higher than the maximum supported version, the node shouldn't be allowed to join, and a node shouldn't join an elected master node with an incompatible version.
Closes#11924
Removed the ParseField#match variant that accepts the field name only, without parse flags. Such a method is harmful as it defaults to empty parse flags, meaning that no deprecation exceptions will be thrown in strict mode, which defeats the purpose of using ParseField. Unfortunately such a method was used in a lot of places where the parse flags weren't easily accessible (outside of query parsing), and in a lot of other places just by mistake.
Parse flags have been introduced now as part of SearchContext and mappers where needed. There are a few places (e.g. java api requests) where it is not possible to retrieve them as they depend on the index settings, in that case we explicitly pass in EMPTY_FLAGS for now, but this has to be seen as an exception.
Closes#11859
This commit changes MessageChannelHandler to not skip the underlying
ChannelBuffer while a StreamInput is open on top of it. In case eg. compression
is enabled, this prevents failures due to the fact that the decompressed
stream input expects a certain structure that it can't verify if the position
of the underlying buffer is changed.
Added dynamic arguments to `ElasticsearchException`, `ElasticsearchParseException` and `ElasticsearchTimeoutException`.
This helps keep the exception messages clean and readable and promotes consistency around wrapping dynamic args with `[` and `]`.
This is just the start, we need to propagate this to all exceptions deriving from `ElasticsearchException`. Also, work started on standardizing on lower case logging & exception messages. We need to be consistent here...
- Uses the same `LoggerMessageFormat` as used by our logging infrastructure.
The work around for resolving `now` doesn't need to be used for aliases, because alias filters are parsed at search time. However it can't be removed, because the percolator relies on it.
Parent/child can be specified again in alias filters, this now works again because alias filters are parsed at search time. Parent/child will also use the late query parse work around, to make sure to do the final preparations when the search context is around. This allows the aliases api to validate the parent/child queries without failing because there is no search context.
Closes#10485
Eventually, the field type should not need any names, because there
will be only one name which leads to finding it (the full name, which is
also the index name). However, the short or "simple" name (using java
terminology for class names) is needed just in a couple places, for
serialization.
This change moves the simple name out of MappedFieldType.Names, into
Mapper, and makes Mapper and FieldMapper abstract classes.
Following the discussion in #11744, move boost and query _name to base class AbstractQueryBuilder with their getters and setters. Unify their serialization code and equals/hashcode handling in the base class too. This guarantees that every query supports both _name and boost and nothing needs to be done around those in subclasses besides properly parsing the fields in the parsers and printing them out as part of the doXContent method in the builders. More specifically, these are the performed changes:
- Introduced printBoostAndQueryName utility method in AbstractQueryBuilder that subclasses can use to print out _name and boost in their doXContent method.
- readFrom and writeTo are now final methods that take care of _name and boost serialization. Subclasses have to implement doReadFrom and doWriteTo instead.
- toQuery is a final method too that takes care of properly applying _name and boost to the lucene query. Subclasses have to implement doToQuery instead. The query returned will have boost and queryName applied automatically.
- Removed BoostableQueryBuilder interface, given that every query is boostable after this change. This won't have any negative effect on filters, as the boost simply gets ignored in that case.
- Extended equals and hashcode to handle queryName and boost automatically as well.
- Update the query test infra so that queryName and boost are tested automatically, and whenever they are forgotten in parser or doXContent tests fail, so this makes things a lot less error-prone
- Introduced DEFAULT_BOOST constant to make sure we don't repeat 1.0f all the time for default boost values.
SpanQueryBuilder is again a marker interface only. The convenient toQuery that allowed us to override the return type to SpanQuery cannot be supported anymore due to a clash with the toQuery implementation from AbstractQueryBuilder. We have to go back to casting the lucene Query to SpanQuery when dealing with span queries, unfortunately.
Note that this change touches not only the already refactored queries but also the untouched ones, by making sure that we parse _name and boost whenever we need to and that we print them out as part of QueryBuilder#doXContent. This will result in printing out the default boost all the time rather than skipping it in non refactored queries, something that we would have changed anyway as part of the query refactoring.
The following are the queries that support boost now while previously they didn't (parser now parses it and builder prints it out): and, exists, fquery, geo_bounding_box, geo_distance, geo_distance_range, geo_hash_cell, geo_polygon, indices, limit, missing, not, or, script, type.
The following are the queries that support _name now while previously they didn't (parser now parses it and builder prints it out): boosting, constant_score, function_score, limit, match_all, type.
Range query parser supports now _name at the same level as boost too (_name is still supported on the outer object though for bw comp).
There are two exceptions that, despite having getters and setters for queryName and boost, don't really support boost and queryName: query filter and span multi term query. The reason for this is that they only support a single inner object which is another query that they wrap, no other elements.
Relates to #11744
Closes#10776
Closes#11974
Today we lose the RestStatus code for non-serializable exceptions.
This can be tricky if they are supposed to signal certain situations
like authentication errors etc. This commit adds support for carrying
the RestStatus code over in the NotSerializableExceptionWrapper.
The filters aggregation now has an option to add an 'other' bucket which will, when turned on, contain all documents which do not match any of the defined filters. There is also an option to change the name of the 'other' bucket from the default of '_other_'
Closes#11289
This change means that when the skip gap policy is used, the bucket script aggregation will skip executing the script on a bucket if any of the required bucket_paths are missing for the bucket. No aggregation will be added to the bucket, and the aggregation will move to the next bucket.
This commit changes the postrm script so that it prints error messages instead of failing & exiting when the deletion of a directory fails while removing an RPM/DEB package.
Closes#11373
This allows a lot of null checks to be removed where we were always falling back to the ValueFormat.RAW anyway. Now the format is set to ValueFormat.RAW when no alternative is suitable.
Closes#10594
Field stats index constraints allows to omit all field stats for indices that don't match with the constraint. An index
constraint can exclude indices' field stats based on the `min_value` and `max_value` statistic. This option is only
useful if the `level` option is set to `indices`.
For example index constraints can be useful to find out the min and max value of a particular property of your data in
a time based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
   "fields" : ["answer_count"], <1>
   "index_constraints" : { <2>
      "creation_date" : { <3>
         "min_value" : { <4>
            "gte" : "2014-01-01T00:00:00.000Z"
         },
         "max_value" : {
            "lt" : "2015-01-01T00:00:00.000Z"
         }
      }
   }
}'
Closes#11187
"Root" is a very confusing term for meta field mappers. This change
renames "RootMapper" to "MetadataFieldMapper" and simplifies
how metadata mappers are setup.
It also requires that metadata mappers are now a FieldMapper
(MetadataFieldMapper extends from AbstractFieldMapper). The only
use of a root mapper that wasn't a field mapper was the theoretical
"external" root mapper (just a test mapper). But it doesn't make
sense to not have an actual field, and this falls inline with
the hopefully eventual collapsing of AbstractFieldMapper/FieldMapper/Mapper.
- shard listing actions underpinning shard allocation do not have access to that new node yet (causing errors during shard allocation, see #11923)
- the very first cluster state published to a node already has shard assignments to it. This surfaced other issues we are working to fix separately
This commit changes the reroute to be done post processing the initial join cluster state to side step these issues while we work on a longer term solution.
Closes#11960
We had several problems with Java Serialization in the past. At some point
in the Java 1.7.x series, JDKs were not compatible anymore when java
serialization (ObjectStream) was used to exchange objects. In elasticsearch
we used this to serialize exceptions across the wire which caused several problems
with incompatible JDKs. While causing lots of trouble, this essentially prevented
users from moving forward and upgrading their JVMs. To prevent these kinds of issues
this commit removes the dependency on java serialization entirely and bans the
usage of ObjectOutputStream and ObjectInputStream.
Yet, we can't fully serialize all exceptions anymore, such that this commit
is best effort and adds hand written serialization to all elasticsearch exceptions
as well as to a selected set of JDK and Lucene exceptions. (see StreamOutput#writeThrowable /
StreamInput.readThrowable). Stacktraces should be preserved for all exceptions while
several names might be replaced with ElasticsearchException if there is no mapping for
the given exception.
Added validation for all inner queries that any already refactored query may hold. Added also tests around this. At the end of the refactoring validate will be called by SearchRequest#validate during TransportSearchAction execution, which will call validate against the top level query builder that will need to go and validate its data plus all of its inner queries, and so on.
Closes#11889
In order to support older RPM based distributions like CentOS5,
we should have one RPM available, which is not signed.
This commit creates an unsigned RPM first, then moves it over to
target/releases during the build, then builds a signed RPM.
The unsigned one is uploaded via S3, whereas the signed one is
used for the repositories.
In addition, you can now build an RPM without having to specify
any gpg credentials due to offloading this into a maven profile
that is only activated when specifying `rpm.sign` property.
Closes#11587
instead of maintaining a thread local cache in the PercolatorQueriesRegistry.
Before PercolatorQueriesRegistry had its own cache, because all the queries had to forcefully opt out of caching. Nowadays in master small segments are never cached by the query cache, so the reason for the dedicated cache is no longer valid.
Today we keep track of how often filters are used at the index level in order
to decide whether they should be cached or not. This is an issue if you have
several shards of the same index on the same node as it will multiply statistics
by the number of shards that you have for this index on the node, which defeats
the purpose of waiting for a filter to be reused before caching them.
If the translog UUID is corrupted we should not convert it
to UTF-8 since it might be invalid. Instead we should compare
the UTF-8 byte representation directly.
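A hedged sketch of the byte-level comparison (method and exception are assumptions for illustration):
```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Compare the expected UUID's UTF-8 bytes with the raw header bytes;
// decoding possibly-corrupted bytes to a String could mangle them.
public class TranslogUuidCheck {
    static void verifyUuid(String expectedUuid, byte[] bytesFromHeader) {
        byte[] expected = expectedUuid.getBytes(StandardCharsets.UTF_8);
        if (!Arrays.equals(expected, bytesFromHeader)) {
            throw new IllegalStateException(
                    "translog UUID mismatch, expected [" + expectedUuid + "]");
        }
    }

    public static void main(String[] args) {
        verifyUuid("abc123", "abc123".getBytes(StandardCharsets.UTF_8)); // passes
    }
}
```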
This is more consistent with the other logging it does, and since it can be used in many operations the output can be more verbose (without adding too much info as to who timed out exactly, which we can fix separately). If need be, the caller of the observer can log a higher level message.
Closes#11722
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.
A known issue is that package/logger names no longer match setting names,
please speak up if you think this is an issue.
Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569 but putting `cache` before
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
* index.cache.query.enable -> index.requests.cache.enable
* indices.cache.query.size -> indices.requests.cache.size
* indices.cache.filter.size -> indices.queries.cache.size
Close#11569
Today, we disable CORS by default, but if a user simply enables CORS, their instance of
elasticsearch will allow cross-origin requests from anywhere, as the default value for allowed
origins is `*`.
This changes the default to be `null` so that no origins are allowed and the user must explicitly
specify the origins they wish to allow requests from. The documentation also mentions that there
is a security risk in using `*` as the value.
Closes#11169
the codebase based on the conventions that we decided to follow. Also including
some cosmetic fixes (making members final where possible, avoiding `this`
references outside of setters/getters).
In addition to that this PR changes:
* prevent NPEs in doXContent when rendering out nested queries that are null. We now render out an empty object ({ }) which then gets parsed to null, to be consistent with queries that come in only through the rest layer
* prevent adding nested null queries to collections of clauses like in BoolQueryBuilder
* add validate() method to all builders (even when empty)
Closes#11834
In order to be backwards compatible, indices created before 2.x must support
indexing of a unix timestamp and its configured date format. Indices created
with 2.x must configure the `epoch_millis` date formatter in order to
support this.
Relates #10971
Today we are very lenient in parsing the translog files. This is
actually not necessary since we have a clear run once upgrade path.
All files are converted into the new file name pattern such that we
only need to look at old file patterns in the context of the upgrade.
This commit makes parsing really strict, with the exception of the upgrade path.
This adds a new pipeline aggregation, the cumulative sum aggregation. This is a parent aggregation which must be specified as a sub-aggregation to a histogram or date_histogram aggregation. It will add a new aggregation to each bucket containing the sum of a specified metric over this and all previous buckets.
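The reduce step boils down to a running total over the ordered histogram buckets; a minimal sketch, with plain doubles standing in for the real aggregation objects:
```java
// Sketch of the cumulative sum logic: walk the buckets in order and emit
// a running total, i.e. the sum of this and all previous buckets.
import java.util.ArrayList;
import java.util.List;

final class CumulativeSum {
    static List<Double> cumulativeSums(List<Double> bucketMetricValues) {
        List<Double> sums = new ArrayList<>(bucketMetricValues.size());
        double runningTotal = 0;
        for (double value : bucketMetricValues) {
            runningTotal += value;
            sums.add(runningTotal);
        }
        return sums; // e.g. [1.0, 2.0, 3.0] -> [1.0, 3.0, 6.0]
    }
}
```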
Today we mark a translog as upgraded by adding a marker to the engine commit.
Yet, this marker was only added if there was no translog present before, i.e. only
if we have a fresh engine, which misses the entire point. This change also
adds a backwards index test that ensures we can open old indices more than once,
i.e. mark the index as upgraded.
Closes#11858
In Lucene 5.x the exception thrown when the highlighter encounters a huge term
is a BytesRefHash.MaxBytesLengthExceededException, but in Lucene 4.x it is
wrapped in a RuntimeException. Therefore, it seems safer to unwrap this.
As we now have an enum Operator that comes with many useful helper methods, switch to using
that instead of the enums defined separately. Also switch to using the new enum's helper
methods where applicable, removing duplicate parsing logic.
This breaks backwards compatibility. Documenting the break in
migrate_query_refactoring.asciidoc
Relates to #10217
There was only a single actual "use" of close, for a threadlocal
in VersionFieldMapper. However, that threadlocal is completely
unnecessary, so this change removes the threadlocal and
close() altogether.
This commit consolidates several abstractions on the shard level into
ordinary classes not managed by the shard level guice injector.
Several classes have been collapsed into IndexShard, and IndexShardGatewayService
was cleaned up to be more lightweight and self-contained. It has also been moved into
the index.shard package, and its operation is renamed from recovery from "gateway" to recovery
from "store" or "shard_store".
Closes#11847
This commit folds ShardRouting, ImmutableShardRouting and MutableShardRouting
into ShardRouting. All mutators are package private anyway today so it's just
unnecessary abstraction.
ShardRoutings are now frozen once they are added to the IndexRoutingTable
to prevent modifications outside of the allocation code.
This commit makes the get and search APIs always return `_parent`, `_routing`,
`_timestamp` and `_ttl` in addition to `_id` and `_type`. This way, consumers
always have all required information in order to reindex a document.
Currently the SnapshotsService is concerned both with maintaining the global snapshot lifecycle on the master node and with keeping track of individual shards on the data nodes. This refactoring separates the two areas of concern by moving all shard-level operations into a separate SnapshotShardsService.
Closes#11756
Refactors CommonTermsQuery analogous to TermQueryBuilder. Still left to do are
the tests to compare between builder and actual Lucene query.
Relates to #10217
This PR is against the query-refactoring branch.
This commit makes SimpleQueryStringBuilder streamable and adds hashCode and equals. It adds a dedicated builder/parser unit test, fixes formatting, adds JavaDoc where needed, and adjusts the handling of default values according to https://github.com/elastic/dev/blob/master/design/queries/general-guidelines.md
Switched to using toLanguageTag/forLanguageTag when parsing Locales. Using LocaleUtils from either Elasticsearch or Apache commons resulted in Locales not passing the roundtrip test. For more info see https://issues.apache.org/jira/browse/LUCENE-4021
Relates to #10217
Currently the filter cache is configured to have a maximum size in bytes of 10%
of the JVM memory, and a maximum number of cached filters (across all segments
of all shards on the same node) of 100000. I would like to change the latter to
a more reasonable value of 1000.
Given that we track the 256 most recently used filters per index and only
cache those that have been seen 5 times or more, a single index cannot have more
than 50 hot filters, so a maximum number of cached filters of 1000 per node
should be more than necessary.
Today, we have a scheduled reroute that kicks in every 10 seconds and checks if a
reroute is needed. We use it when adding nodes, since we don't reroute right
away once a node is added, giving a time window to add additional nodes.
We do have the recover-after-nodes settings and such in order to wait for enough
nodes to be added, and also, it really depends on where in the 10s window
you end up; sometimes it might not be effective at all. In general, it's historic
from the times before we had recover-after-nodes and such.
This change removes the 10s scheduling, simplifies RoutingService, and adds
explicit reroute when a node is added to the system. It also adds unit tests
to RoutingService.
closes#11776
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#11823
Since elasticsearch doesn't shade artifacts anymore (see #11522), the dependencies list for RPM/DEB must be updated. Now we package all maven libs by default except the generated -shaded/-tests/-test-sources JARs and slf4j-api (marked as optional).
We currently are very lax about allowing data types to conflict for the
same field name, across document types. This change makes the underlying
map in MapperService a 1-1 map of field name to field type, and throws an
exception when new types are not compatible.
To still allow changing a type, with parameters that are allowed to be
changed, but for a field that exists in multiple types, a new parameter
to index creation and put mapping API is added: update_all_types.
This defaults to false, and the exception messages suggest using
this parameter when trying to modify a setting that is allowed to be
modified but is being limited by this restriction.
There are also a couple changes which try to base fields from new types
for dynamic mappings, and root mappers, on existing settings. For
dynamic mappings this is important if the dynamic defaults have been
changed. For root mappings, this is mostly just for backcompat when
pre 2.0 root mappers could have their field type changed.
fixes#8871
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
we currently don't expose this.
This adds the following to the OS section of `_nodes`:
```
"os": {
"name": "Mac OS X",
...
}
```
and the following to the OS section of `_cluster/stats`:
```
"os": {
...
"names": [
{
"name": "Mac OS X",
"count": 1
}
],
...
},
```
Closes#11807
This is a follow up to #8143 and #6730 for _timestamp. It removes
support for `path`, as well as any field type settings, and
enables docvalues for _timestamp, for 2.0. Users who need to
adjust these settings can use a date field.
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings. In
this case this also includes FQueryFilterParser, since both queries are
closely related.
Relates to #10217
Closes#11729
- Fixes tests, and removes a few special snowflake, fragile tests.
- Removes concrete implementation of predict() and moves it into
each model so that the logic is clearer. Because there are some
shared checks/assertions, those remain in predict() and the main
prediction happens in doPredict()
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#11717
The commit about adding cluster health response features also accidentally
removed some functionality, which resulted in wrong instanceof checks
in InternalClusterService and thus in test failures, because the cluster
state task that was added via an anonymous class was missing the cast.
This commit re-adds the abstract class with slight renaming.
Commit id was: 88f8d58c8b
If we mark the shard as being in POST_RECOVERY before the percolator
is fully set up we might expose it to the user as fully searchable before
all queries are loaded. This can lead to wrong results especially in tests
when a shard is concurrently marked as STARTED.
This commit also removes unneeded abstractions on IndexShard where read operations
should be allowed when the purpose is a write.
It is handy to have a base interface, not just an abstract class, for all of our query builders. This gives us more flexibility, especially with complex class hierarchies. For instance SpanTermQueryBuilder extends BaseTermQueryBuilder, but also needs to be marked as a SpanQueryBuilder. The latter is a marker interface that should extend QueryBuilder, which is not possible unless QueryBuilder actually is an interface.
Also removes the queryId method as it created confusion (getName is good enough for the purpose), and overrides the return type of the toQuery method for SpanQueryBuilders to SpanQuery.
Closes#11796
In order to get a quick overview by simply checking the cluster state
and its corresponding cat API, the following two attributes have been added
to the cluster health response:
* task max waiting time: the time value of the first task of the
queue and how long it has been waiting
* active shards percent: the percentage of the number of shards that are in
initializing state
This makes the cluster health API handy for checking when a fully restarted
cluster is back up and running.
Closes#10805
The change makes rest-spec-api a project in the same way as we build dev-tools. It packages the tests and api in a bundle using the maven-remote-resources-plugin and uses the same plugin in the plugins and core pom to unpack the rest-api-spec into the target directory, referencing the rest tests there in the test resources.
The main stimulus for this change is that for those using Eclipse the current build does not work. After running `mvn eclipse:eclipse` the Eclipse IDE errors because the rest-api-spec is outside of the project scope, meaning that every time the command is run (required whenever any dependencies change), the class path of all the projects has to be manually fixed.
This fixes an issue to allow for negative unix timestamps.
A dedicated printer for epochs has been added instead of just having a parser.
Added docs stating that only 10/13-digit unix timestamps are supported
Added docs in upgrade documentation
Fixes#11478
Tests relying on sleeps and latch timeouts are prone to weird timing issues
and hard to read / understand error messages. This commit moves towards a more
deterministic error model and replaces empty fails with real exceptions.
Some repository verification exceptions are currently only returned to the users but not logged on the nodes where the exceptions occurred, which makes troubleshooting difficult.
Closes#11760
For #8871, we need to be able to check field types are compatible,
without comparing FieldMappers. This change moves the simulation
checks (which generate merge conflicts) for any properties of
MappedFieldTypes into a new method, validateCompatible.
This also simplifies the merge code which merges settings
between the old and new fieldtypes. Previously, each subclass
of FieldMapper would have to set its own fieldtype settings.
However, now that we have .clone(), which perfectly copies
all properties (with subclasses accounted for), we can now
do a simple clone when merging.
Finally, this fixes a subtle bug in merging, in which if
merging has conflicts, and we were not simulating, we would
still update the field type, even though it was not compatible!
NOTE: there is one test failure I am trying to track down with
timestamp merging. Otherwise, all tests pass.
Currently, we delete the shard _state file on engine failure.
This behaviour does not persist the engine failure reason for later inspection.
This commit marks the shard store as corrupted instead of deleting
the _state file, to ensure the store index can not be opened afterwards and
the engine failure is persisted.
CommonTermsQueryParser does not check for disable_coords, only for
disable_coord. Yet the builder only outputs disable_coords, causing the
disabling of the coordination factor to be ignored in the Java API.
Closes#11730
Closes#11780
When using compression over the network, you might sometimes see warnings that
the stream was not fully read. This is because DeflaterOutputStream adds an
end-of-stream marker. When deserializing, we need to poll for one byte using
InputStream.read() to make sure to decode this EOS marker.
For the record, it does not strike all the time today because we perform
buffering when decompressing to avoid performing too many JNI calls, but it
is easy to make this warning happen all the time by decreasing the size of
the buffer we use.
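A self-contained demo of the underlying behaviour using the JDK's java.util.zip streams (the real code path goes through our compression abstraction, so this is only an illustration):
```java
// After consuming the expected bytes, poll one more byte so the inflater
// reads past the deflate end-of-stream marker instead of leaving it unread.
import java.io.*;
import java.util.zip.*;

final class EosMarkerDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(compressed)) {
            out.write("payload".getBytes("UTF-8"));
        } // close() writes the end-of-stream marker

        InflaterInputStream in = new InflaterInputStream(
                new ByteArrayInputStream(compressed.toByteArray()));
        byte[] buf = new byte[7];
        new DataInputStream(in).readFully(buf); // the payload itself
        if (in.read() != -1) {                  // consume/verify the EOS marker
            throw new IOException("expected end of compressed stream");
        }
    }
}
```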
Close#11748
Need to reset the registered setting in order to make sure the next round will capture the right delay interval.
Also randomize the setting and name the setting properly.
closes#11759
Added several classes to support expressions being used for numerical
calculations in aggregations. Expressions will still not compile
when used with mapping and update script contexts.
Closes#11596
Closes#11689
Allow setting a delayed allocation timeout on unassigned shards when a node leaves the cluster. This allows waiting for the node to come back for a specific period, in order to try to assign the shards back to it and thereby reduce shard movements and unnecessary relocations.
The setting is an index level setting under `index.unassigned.node_left.delayed_timeout` and defaults to 0 (== no delayed allocation). We might want to change the default, but let's do that in a different change to come up with the best value for it. The setting can be updated dynamically.
When shards are delayed, a log message with "info" level will notify how many shards are being delayed.
An implementation note: we really only need to care about delaying allocation on unassigned replica shards. If the primary shard is unassigned, we are going to wait for a copy of it anyway, so really the only case to delay allocation for is replicas.
close#11712
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217Closes#11703
`com.google.common.collect.Iterators#emptyIterator()` is marked as deprecated and will be removed in May 2016. We should use JDK7 `Collections#emptyIterator()`
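The swap itself is trivial:
```java
// The JDK has shipped an empty iterator since Java 7, so the Guava
// version is unnecessary.
import java.util.Collections;
import java.util.Iterator;

final class EmptyIteratorExample {
    static Iterator<String> noResults() {
        // before: com.google.common.collect.Iterators.emptyIterator()
        return Collections.emptyIterator();
    }
}
```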
By setting the human parameter to true, it's now possible to see human readable versions of Elasticsearch that created and updated the index, as well as the date when the index was created.
Closes#11484
This changes the parameter name `ignore_like` to the more user friendly name
`unlike`. This latter feature generates a query from the terms in `A` but not
from the terms in `B`. This translates to a result set which is like `A` but
unlike `B`. We could have further negatively boosted any documents that have
some `B`, but these documents already do not receive any contribution from
having `B`, and would therefore negatively compete with documents having `A`.
Closes#11117
this is really just a workaround for plugins to run their own
REST tests instead of the core ones. It opts out of the rest test
loading from the core jar file and tries to load from the classpath instead.
Eventually we need to fix this infrastructure to move away from parameterized
tests such that subclasses can override behavior.
Closes#11721
Added a licenses/ directory to core which contains a sha1 file for each JAR
dependency, and one or more LICENSE files and one NOTICE file for each
project.
Also adds dev-tools/src/main/resources/license-check/check_license_and_sha.pl
which checks that the licenses/ dir is up to date during a mvn verify,
and which can be used to update the sha1 files when upgrading dependencies.
Closes#2794
Closes#10684
Closes#11705
This change adds a simplistic heuristic to try to balance new shard
allocations across multiple data paths on one node so that e.g. if
there are two path.data and both have roughly the same free space, if
10 shards are suddenly allocated, we will put 5 on one path and 5 on
the other (vs 10 on a single path today).
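A sketch of one way such a heuristic can work (illustrative only, not the actual implementation; the fixed per-shard size estimate is an assumption):
```java
// Pick the data path with the most remaining free space, charging each new
// shard a rough size estimate so consecutive picks spread across paths.
import java.util.HashMap;
import java.util.Map;

final class DataPathPicker {
    private static final long ESTIMATED_SHARD_SIZE = 1L << 30; // assumed 1 GB

    private final Map<String, Long> freeBytesPerPath = new HashMap<>();

    DataPathPicker(Map<String, Long> freeBytesPerPath) {
        this.freeBytesPerPath.putAll(freeBytesPerPath);
    }

    String pickPathForNewShard() {
        String best = null;
        for (Map.Entry<String, Long> e : freeBytesPerPath.entrySet()) {
            if (best == null || e.getValue() > freeBytesPerPath.get(best)) {
                best = e.getKey();
            }
        }
        // Charge the estimate so the next pick prefers the other path(s).
        freeBytesPerPath.put(best, freeBytesPerPath.get(best) - ESTIMATED_SHARD_SIZE);
        return best;
    }
}
```
With two paths of roughly equal free space, ten consecutive picks alternate and end up five on each path.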
Closes#11185
Closes#11122
Unassigned meta includes additional information as to why a shard is unassigned; this is especially handy when a shard moves to unassigned due to a node leaving or a shard failure.
The additional data is provided as part of the cluster state, and as part of the `_cat/shards` API.
The additional meta includes the timestamp that the shard has moved to unassigned, allowing us in the future to build functionality such as delay allocation due to node leaving until a copy of the shard is found.
closes#11653
Since the /var/run/elasticsearch directory is cleaned when the operating system starts, the init.d script must ensure that the PID_DIR is correctly created.
Closes#11594
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#11628
As part of the refactoring of queries, this PR splits the parser's parse() method
into a parsing and a query building part, adding serialization, hashCode() and
equals() to the builder.
Relates to #10217
Closes#11621
The cache we are using for fielddata reclaims memory lazily/asynchronously. While this
helps with throughput this is an issue when a clear operation is issued manually since
memory is not reclaimed immediately. Since our clear methods already perform in linear
time anyway, this commit changes the fielddata cache to reclaim memory synchronously
when a clear command is issued manually. However, it remains lazy in case cache entries
are invalidated because segments or readers are closed, which is important since such
events happen all the time.
Close#11695
Some of our Java api builders had wrong logic when it came to serializing the query in json format, resulting in missing fields like _name. Also, the regexp parser was ignoring the _name field.
Closes#11694
`_timestamp` uses NumericDocValues instead of SortedNumericDocValues like other
numeric fields since it is guaranteed to be single-valued. However, we don't
need a different fielddata impl for it since DocValues.getSortedNumeric already
falls back to NUMERIC doc values if SORTED_NUMERIC are not available.
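A sketch of what loading can then look like, assuming Lucene 5.x's DocValues helper:
```java
// DocValues.getSortedNumeric returns a SortedNumericDocValues view even when
// the field was indexed with plain NUMERIC doc values, as _timestamp is.
import java.io.IOException;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedNumericDocValues;

final class TimestampDocValues {
    static SortedNumericDocValues load(LeafReader reader) throws IOException {
        return DocValues.getSortedNumeric(reader, "_timestamp");
    }
}
```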
[maven] clean pom.xml
In the Maven parent project, in dependency management, we should only declare which versions of 3rd party jars we want to use but not force any scope.
This then makes it more obvious in modules what exactly the scope of any dependency is.
For example, one could imagine importing `jimfs` as a `compile` dependency in another module/plugin with:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
</dependency>
```
But this won't work as expected: the default maven `scope` should be `compile`, but here it's `test` as defined in the parent project.
So, if you want to use this lib for tests, you should simply define:
```xml
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
<scope>test</scope>
</dependency>
```
We also remove `maven-s3-wagon` from gce plugin as it's not used.
This was previously a container for an ObjectMapper, along with the
DocumentMapper that the ObjectMapper came from. However, there was
only one place needing the associated DocumentMapper, and even there it
wasn't actually used.
Moving the query building functionality from the parser to the builder's
new toQuery() method analogous to other recent query refactorings.
Relates to #10217
Closes#11629
Now that doc values are the default for fielddata, specialized in-memory
formats are becoming an esoteric option. This commit removes such formats:
- `fst` on string fields,
- `compressed` on geo points.
I also removed documentation and tests that the fielddata cache is shared if
you change the format, since this is only true for in-memory fielddata formats
(given that for doc values, the caching is done directly in Lucene).
The script APIs were deprecated long ago, so we can now remove them.
This commit still keeps the parsing code since it might be used in a
query that is still stuck in the transaction log. That issue should be discussed
elsewhere.
Closes#11619
In order to restrict a single set of field type settings for a given
field name across an index, we need the ability to compare field types.
This change adds equals and hashcode, as well as tests for every field
type.
Make sure there is a single place where shard routings move to unassigned, so we can add additional metadata when they do. Also, simplify the shard routing implementations a bit.
closes#11634
The AsyncShardFetch retrieves shard information from the different nodes in order to determine the best location for unassigned shards. The class uses TransportNodesListGatewayStartedShards and TransportNodesListShardStoreMetaData in order to fetch this information. These actions inherit from TransportNodesAction and are activated using a list of node ids. Those node ids are extracted from the cluster state that is used to assign shards.
If we perform a reroute and add new nodes in the same cluster state update task, it is possible that the AsyncShardFetch administration is based on
a different cluster state than the one used by TransportNodesAction to resolve nodes. This can cause a problem, since TransportNodesAction filters away unknown nodes, causing the administration in AsyncShardFetch to get confused.
This commit fixes this by allowing node resolution to be overridden in TransportNodesAction, and uses the exact node ids transferred by AsyncShardFetch
Information about in-progress snapshot and restore processes is not really metadata and should be represented as a part of the cluster state, similar to discovery nodes, the routing table, and cluster blocks. Since in-progress snapshot and restore information is no longer part of metadata, this refactoring also enables us to handle cluster blocks in a more consistent manner and allows creation of snapshots of a read-only cluster.
Closes#8102
This commit cleans up all the MergeScheduler infrastructure
and simplifies / removes all unneeded abstractions. The MergeScheduler
itself is now private to the Engine and all abstractions like Providers
that had support for multiple merge schedulers etc. are removed.
Closes#11602
This is helpful to track down the origin of pending_tasks that aren't
expected. In tests we catch this with an assert, but in production
asserts may not be enabled so we should at least add the class name.
When specifying a time zone, RangeQueryBuilder currently throws an
exception when the field does not use DateFieldMapper, but if there
are no mappers at all for that field it doesn't complain. To be
consistent we now also throw an exception in that case.
Refactors ExistsQueryBuilder and Parser, splitting the parse() method into a parsing
and a query building part. Also moving newFilter() utility method from parser to query builder.
Changes in the BaseQueryTestCase include the introduction of a randomized version to test disabled
FieldNamesFieldMappers, and also getting rid of the need for the createEmptyBuilder() method by
using registered prototype constants.
Relates to #10217
Closes#11427
and a query building part, adding a NamedWriteable implementation for serialization, and hashCode()
and equals() for testing.
This change also adds tests using fixed set of leaf queries (Terms, Ids, MatchAll) as nested Queries in test query setup.
Also QueryParseContext is adapted to return QueryBuilder instances for inner filter parses, instead of
returning Query instances as before, keeping the old methods in place but deprecating them.
Relates to #10217
Closes#11427
Today we provide the ability to plug in a MergePolicy, and
we provide the ones Lucene ships with. We do not recommend changing
the default, and only a small number of expert users would ever touch
this. This commit removes the ancient log byte size and log doc count
merge policy providers, simplifies the MergePolicy wiring, and makes the
tiered MP the one and only default. All notions of a merge policy have been
removed from the docs and should be deprecated in the previous version.
Closes#11588
There used to be a null check for _field_names mapper not existing. This
was recently removed. However, there is a corner case when the mapper
may be missing: when no types or docs exist at all in the index.
This change adds back a null check and just returns no docs.
The current ExceptionsHelper.unwrapCause(exception) requires the incoming exception to implement ElasticsearchWrapperException, which TranslogRecoveryPerformer.BatchOperationException doesn't. I opted for a more generic solution.
We had to make CompressedXContent.equals decompress data to fix some
correctness issues, which had the downside of making equals() slow. Now we store
a crc32 alongside the compressed data, which should help avoid decompressing data in
most cases.
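A sketch of the resulting equals() logic (simplified; decompress() here is a stand-in for the real decompression):
```java
// Compare the cheap crc32 checksums of the uncompressed content first and
// only decompress when the checksums actually match.
import java.util.Arrays;
import java.util.zip.CRC32;

final class ChecksummedContent {
    final byte[] compressed;
    final int crc32; // checksum of the *uncompressed* bytes, stored alongside

    ChecksummedContent(byte[] uncompressed, byte[] compressed) {
        CRC32 crc = new CRC32();
        crc.update(uncompressed, 0, uncompressed.length);
        this.crc32 = (int) crc.getValue();
        this.compressed = compressed;
    }

    boolean contentEquals(ChecksummedContent other) {
        if (crc32 != other.crc32) {
            return false; // different checksums: contents cannot be equal
        }
        // Checksums match: decompress and compare for certainty (rare path).
        return Arrays.equals(decompress(), other.decompress());
    }

    byte[] decompress() {
        return compressed; // stand-in: the real code inflates the bytes
    }
}
```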
Close#11247
Now that mapping updates are sync and done before indexing, we don't really need the waiting component. Also removed many places where they were used as a safeguard against delayed mapping updates, which are now not needed.
As part of the query refactoring, we want to be able to serialize queries by having them extend Writeable, rather than serializing their json. When reading them though, we need to be able to identify which query we have to create, based on its name.
For this purpose we introduce a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through writeNamedWriteable and readNamedWriteable methods. A new NamedWriteableRegistry is introduced also where named writeable prototypes need to be registered so that we are able to retrieve the proper instance of query given its name and then de-serialize it calling readFrom against it.
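A rough sketch of the prototype registry idea (names are illustrative, prefixed with Sketch to make clear this is not the actual API):
```java
// Read the name first, look up the registered prototype, then let the
// prototype deserialize a concrete instance of itself.
import java.io.DataInput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

interface SketchNamedWriteable {
    String getName();
    SketchNamedWriteable readFrom(DataInput in) throws IOException;
}

final class SketchNamedWriteableRegistry {
    private final Map<String, SketchNamedWriteable> prototypes = new HashMap<>();

    void register(SketchNamedWriteable prototype) {
        prototypes.put(prototype.getName(), prototype);
    }

    SketchNamedWriteable read(DataInput in) throws IOException {
        String name = in.readUTF();
        SketchNamedWriteable prototype = prototypes.get(name);
        if (prototype == null) {
            throw new IOException("unknown named writeable [" + name + "]");
        }
        return prototype.readFrom(in);
    }
}
```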
Closes#11553
While we had initially planned to keep rivers around in 2.0 to ease migration,
keeping support for rivers is challenging as it conflicts with other important
changes that we want to bring to 2.0, like synchronous dynamic mappings updates.
Nothing impossible to fix, but it would increase the complexity of how we
deal with dynamic mappings updates and manage rivers, while handling dynamic
mappings updates correctly is important for resiliency and rivers are on their way out.
So removing rivers in 2.0 may well be a better trade-off.
These methods are now all in MappedFieldType. This removes the remaining
callers of the methods on FieldMapper, and cuts down the FieldMapper
API to no longer include them.
The MapperService is the "index wide view" of mappings. Methods on it
are used at query time to lookup how to query a field. This
change reduces the exposed api so that any information returned
is limited to that api exposed by MappedFieldType. In the future,
MappedFieldType will be guaranteed to be the same across all
document types for a given field.
Note CompletionFieldType needed some more settings moved to it. Other
than that, this change is almost purely cosmetic.
Split the parse method into a parsing and a query building part, adding serialization
as well as hashCode() and equals() for better testing. Adds a basic unit test for builder and parser.
Closes#11551
The shard can potentially not be deleted if the observer that checks for the shard
being STARTED is not registered, because the registering is delayed by the disruption.
If the sum of delays is more than 10s then the wait for shard deletion will time out.
The ResourceWatcher used settings prefixed `watcher.`, which
potentially could clash with the watcher plugin.
In order to prevent confusion, the settings have been renamed to use the
`resource.reload` prefix.
This also uses the deprecation logging infrastructure introduced
in #11033 to log deprecated settings and their alternative at
startup.
Closes#11175
In case a developer gets a list of ids from another data source,
it does not make a lot of sense to convert it to an array first,
and then have elasticsearch internally create a list out of it again
in IdsQueryBuilder.
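For illustration (addIds here is a stand-in, not the actual IdsQueryBuilder signature):
```java
// With a Collection overload the caller no longer needs the array round-trip.
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

final class IdsExample {
    static void addIds(Collection<String> ids) {
        System.out.println("ids: " + ids); // stand-in for the real builder
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("1", "2", "3");
        // before: addIds(ids.toArray(new String[ids.size()]))
        addIds(ids);
    }
}
```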
Closes#5089
In order for exists queries to use the null value for
a field, null value needs to be part of the field type (should
differ between document types). This change moves null value
into the field type, as well as simplifies the null value
methods available to remove supportsNullValue().
#11363 introduced retry logic for the case where we have to wait on a mapping update during the translog replay phase of recovery. The retry throws our recovery stats off, as it may count ops twice.
There are different ways to register custom query parsers through plugins; a couple of them work per index via index settings, which is probably even too flexible. There are also three different ways to add a global custom query parser, through either IndicesQueriesModule or IndicesQueriesRegistry. This commit consolidates the registration of custom query parsers via IndicesQueriesModule#addQuery(Class<? extends QueryParser>). The complexity of supporting parsers per index is not needed, hence it got removed. The other ways of registering global custom parsers are also dropped in favour of the one mentioned above.
Closes#11481
The search template and template query did not run the template through the script engine before searching for an inner template. This meant that parsing for the inner template failed because the template was not always valid JSON (if it contained mustache code) when it was parsed to find the inner template. This has been fixed, and tests added to check for the failing behaviour.
Tests are from https://github.com/elastic/elasticsearch/pull/8393
In #10918, we introduced the prompt placeholders. These had a different format
from our existing placeholders. This changes the prompt placeholders to follow the
format of the existing placeholders.
Relates to #11455
This commit adds an additional jar that is shaded, and keeps all the
artifacts that are used by default on the server-side unshaded. Users
that need a shaded jar can now use the `shaded` classifier to pull
the shaded, minimized jar in instead. Including the shaded jar in a
downstream project looks like this:
```xml
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<classifier>shaded</classifier>
</dependency>
```
After asynchronously fetching shard information, the gateway allocator issues a reroute via a cluster state update task. #11421 introduced an optimization trying to avoid submitting unneeded reroutes when results for many shards come in together. This is done by having a rerouting flag indicating a pending reroute is coming, so that any new incoming shard info doesn't need to issue a reroute. This flag wasn't reset upon an error in the reroute update task. Most notably, if a master node had to step down due to a min_master_node violation, it could reject an ongoing reroute. Failing to reset the flag caused it to skip any future reroute when the node became master again.
Closes#11519
Since we are creating write.lock earlier now, the blob store shouldn't attempt to delete this file during cleanup at the end of the restore process. The file is locked, so the blob store doesn't succeed, but it generates a lot of useless warnings: "failed to delete file [write.lock] during snapshot cleanup".
Closes#11517