As a refinement to Project Coin (JEP-213, JDK-8042880), Java 9 is going
to disallow the use of ‘_’ as a one-character identifier. This will be
done by adding ‘_’ as a keyword to the Java language (JDK-8065599).
Currently, uses of ‘_’ as a one-character identifier are warnings in
the Java 8 compiler. This commit removes all uses of ‘_’ as a
one-character identifier from the codebase.
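For illustration, this is the kind of mechanical rename the commit performs (the variable names below are hypothetical, not taken from the codebase):
```java
class UnderscoreIdentifierExample {
    void example() {
        // Previously written as: Object _ = new Object();  // warns on Java 8, rejected once '_' is a keyword
        Object unused = new Object(); // renamed to a descriptive identifier
    }
}
```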
SpanContainingQueryParser and SpanWithinQueryParser always set the boost on the parsed Lucene query, even when it is the default one. The default boost of the main query, though, is the boost coming from the inner `little` query, a value that we end up overriding every time. We should instead set the boost on the main query only if it differs from the default, to mimic Lucene's behaviour.
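A minimal sketch of the intended behaviour (illustrative only, not the actual parser code), assuming a Lucene 5.x Query that still exposes setBoost:
```java
import org.apache.lucene.search.Query;

class SpanBoostSketch {
    static final float DEFAULT_BOOST = 1.0f;

    static Query applyBoost(Query mainQuery, float parsedBoost) {
        // Only override the boost when the user explicitly set a non-default value;
        // otherwise keep the boost that the inner `little` query already carries.
        if (parsedBoost != DEFAULT_BOOST) {
            mainQuery.setBoost(parsedBoost);
        }
        return mainQuery;
    }
}
```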
Relates to #13272
Closes #13339
SimpleQueryStringParser applies whatever boost the query holds, even the default of 1, to the query obtained from parsing the query string. That query might already carry its own boost, for instance if it resolved to a simple query such as a single term query against a single field. We should instead multiply the existing boost by the boost set on the query, the same as we do in query_string.
Relates to #13272
Closes #13331
doParse() was supposed to allow aggs to perform extra parsing. Unfortunately, this forced the
parser to carry instance-level state, which would carry over and "corrupt" any other aggs of the
same type in the same query.
Instead, we now collect all unknown params and pass them as a Map<String, Object>
to buildFactory(). The agg may then parse them and instantiate a factory. Each param the
agg uses should be removed from the unusedParams map.
After building the factory, the parser verifies that unusedParams is empty. If it is not empty,
an exception is raised so the user knows they provided unknown params.
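A rough sketch of the new contract, with hypothetical names standing in for the real factory and parser types:
```java
import java.util.HashMap;
import java.util.Map;

class AggParserSketch {
    // Hypothetical factory type standing in for the real aggregation factory.
    static class Factory {
        final String model;
        Factory(String model) { this.model = model; }
    }

    Factory buildFactory(String aggName, Map<String, Object> unusedParams) {
        // Consume the params this agg understands and remove them from the map...
        String model = (String) unusedParams.remove("model");
        return new Factory(model == null ? "simple" : model);
    }

    Factory parse(String aggName, Map<String, Object> unknownParams) {
        Map<String, Object> unusedParams = new HashMap<>(unknownParams);
        Factory factory = buildFactory(aggName, unusedParams);
        // ...so that the parser can reject anything left over.
        if (unusedParams.isEmpty() == false) {
            throw new IllegalArgumentException(
                    "Unexpected token " + unusedParams.keySet() + " in [" + aggName + "]");
        }
        return factory;
    }
}
```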
Fixes #13337
This changes construction of Phrase and Boolean queries to use the builder,
and replaces BitDocIdSetFilter with BitSetProducer for nested and parent/child
queries. I had to remove the ParentIdsFilter for the case when there was a
single parent, as it was using the source of BitSets for parents as a regular
Filter, which is no longer possible. I don't think this is an issue since
this case rarely occurs, and the alternative logic for when there are several
matching parent ids should not be much worse.
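For reference, a minimal sketch of the builder-based construction (illustrative, using the Lucene 5.3 builder APIs):
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class QueryBuilderSketch {
    static Query booleanExample() {
        // BooleanQuery is now built through its Builder instead of add() on a mutable instance.
        return new BooleanQuery.Builder()
                .add(new TermQuery(new Term("field", "foo")), Occur.MUST)
                .add(new TermQuery(new Term("field", "bar")), Occur.SHOULD)
                .build();
    }

    static Query phraseExample() {
        // Same pattern for PhraseQuery.
        return new PhraseQuery.Builder()
                .add(new Term("field", "quick"))
                .add(new Term("field", "fox"))
                .build();
    }
}
```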
This pipeline will calculate percentiles over a set of sibling buckets. This is an exact
implementation, meaning it needs to cache a copy of the series in memory and sort it to determine
the percentiles.
This comes with a few limitations: to prevent serializing data around, only the requested percentiles
are calculated (unlike the TDigest version, which allows the java API to ask for any percentile).
It also needs to store the data in-memory, resulting in some overhead if the requested series is
very large.
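A rough sketch of the exact computation (the nearest-rank lookup below is an assumption for illustration, not necessarily the exact interpolation the aggregation uses):
```java
import java.util.Arrays;

class ExactPercentilesSketch {
    /**
     * Computes the requested percentiles by sorting a cached copy of the series
     * and reading it at the corresponding ranks.
     */
    static double[] percentiles(double[] values, double[] requestedPercents) {
        double[] sorted = Arrays.copyOf(values, values.length); // copy of the series kept in memory
        Arrays.sort(sorted);
        double[] results = new double[requestedPercents.length];
        for (int i = 0; i < requestedPercents.length; i++) {
            int index = (int) Math.round(requestedPercents[i] / 100.0 * (sorted.length - 1));
            results[i] = sorted[index];
        }
        return results;
    }
}
```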
This commit moves ignore_malformed and coerce options from the GeoPointFieldType to the Builder in GeoPointFieldMapper. This makes these options consistent with other types in 2.0.
This was supposed to just help the user, in case they misconfigured something.
Broadcast is an ipv4-only thing; the only way you can really detect that an address is a broadcast
address is to look and see if an interface has that address as its broadcast address.
But we cannot trust that container interfaces won't have a crazy setup...
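The detection boils down to something like the following sketch using the standard java.net APIs (illustrative, not the exact elasticsearch code):
```java
import java.net.InetAddress;
import java.net.InterfaceAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

class BroadcastCheckSketch {
    /** Returns true if any local interface reports the given address as its broadcast address. */
    static boolean isBroadcastAddress(InetAddress address) throws SocketException {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InterfaceAddress ifAddr : nic.getInterfaceAddresses()) {
                // getBroadcast() returns null for ipv6 addresses: broadcast is an ipv4-only concept
                if (address.equals(ifAddr.getBroadcast())) {
                    return true;
                }
            }
        }
        return false;
    }
}
```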
Closes #13327
The match_phrase_prefix query properly parses the boost etc. but loses it in its rewrite method. Fixed that by setting the original boost on the rewritten query before returning it. Also cleaned up some warnings in MultiPhrasePrefixQuery.
Closes #13129
Closes #13142
Target-type inference has been improved in Java 8. This leads to these
lines now being interpreted as invoking String#valueOf(char[]) whereas
they previously were interpreted as invoking String#valueOf(Object).
This change leads to ClassCastExceptions during test execution. Simply
casting the parameter to Object restores the old invocation.
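For illustration, the two overloads behave quite differently, and the cast to Object forces the old one (the generic helper below is hypothetical, standing in for the test utilities involved):
```java
class ValueOfInferenceExample {
    // Hypothetical generic helper; inference of T decides which valueOf overload is chosen.
    static <T> T identity(T value) {
        return value;
    }

    public static void main(String[] args) {
        char[] chars = new char[] { 'a', 'b' };
        // When inference picks char[], String.valueOf(char[]) is invoked and returns "ab".
        // If the runtime object is not actually a char[], the compiler-inserted cast
        // fails with a ClassCastException -- the failure seen in the tests.
        String asChars = String.valueOf(identity(chars));
        // Casting the argument to Object forces String.valueOf(Object), which falls back to
        // toString() and prints something like "[C@1b6d3586" -- the fix applied here.
        String asObject = String.valueOf((Object) identity(chars));
        System.out.println(asChars + " / " + asObject);
    }
}
```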
Closes #13315
We have an optimization in FilteredQueryParser that tries to mimic what Lucene's rewrite method does: based on what gets parsed, we return the simplest query possible. That can cause issues with boost values though, if a boost is specified on both the main query and the inner query that we shortcut to. We should instead rely on Lucene's rewrite method to simplify the Lucene representation of the query, and always build a filtered query.
Relates to #13272
Closes #13312
We currently optimize scroll when sort=_doc because docs are returned in order.
But documents are also returned in order when sorting by score and the query
gives constant scores. This optimization has the nice side-effect of also
optimizing scrolls with the default `match_all` query.
Until now we had a cloud-aws plugin which provided two distinct features:
* discovery on EC2
* snapshot/restore on S3
This commit splits the plugin by feature, so people can use either feature on its own or both together.
The documentation is updated accordingly.
Before this change the check verified that all test classes end in Tests, but the message said they need to end in Test or Tests, which was confusing.
Today we always collect in order to compute counts, but some of them can be
easily optimized by using pre-computed index statistics. This is especially
true in the case that there are no deletions, which should be common for the
time-based data use-case.
Counts on match_all queries can always be optimized, so requests like
```
GET index/_search?size=0
GET index/_search
{
  "size": 0,
  "query" : {
    "match_all": {}
  }
}
```
should now return almost instantly. Additionally, when there are no deletions,
term queries are also optimized, so the below queries which all boil down to a
single term query would also return almost immediately:
```
GET index/type/_search?size=0
GET index/_search
{
  "size": 0,
  "query" : {
    "match": {
      "foo": "bar"
    }
  }
}
GET index/_search
{
  "size": 0,
  "query" : {
    "constant_score": {
      "filter": {
        "exists": {
          "field": "foo"
        }
      }
    }
  }
}
```
Users might specify something like -Des.network.host=0.0.0.0, as that
was the old default with previous versions of elasticsearch. This means
to bind to all interfaces, but it makes no sense as a publish address.
Pick a good one in this case, just like we do in other cases where
publish isn't explicitly specified and we are bound to multiple (e.g.
when configured by interface, or dns hostname with multiple addresses).
However, in this case warn the user about it: since it arbitrarily
picks the first non-loopback address like the old versions
did, that's a little too heuristic, but it makes the cutover easy.
Separately, fail hard if things like multicast or broadcast addresses are
configured as bind or publish addresses, as that is simply invalid.
Closes #13274
The number and distribution of errors in some restore tests may cause the restore process to keep failing for a prolonged time. This caps the total number of simulated failures to make sure that the restore is guaranteed to eventually succeed after a limited number of retries.
We currently have a small number of test classes with the suffix "Test",
yet most use the suffix "Tests". This change renames all the "Test"
classes, so that we have a simple rule: "Non-inner classes ending with
Tests".
These are not actually tests, but command line applications that must be
run manually. This change removes the entire stresstest package. We can
add back individual tests that we find necessary, and make them real
tests (whether integ or not).
While the list of exclusions is small, it shouldn't be necessary
at all. Base test cases should be suffixed with TestCase so they are not
picked up by the test class name pattern. This same rule works for
abstract classes as well.
This change renames abstract tests to use the TestCase suffix, adds a
check in naming convention tests, and removes the exclusion from our
test runner configuration. It also excludes inner classes (the only
exclude we should have IMO), so that we have no need to @Ignore the
inner test classes for naming convention tests.
In this test we assume that after waitForRelocation() has returned, shards
are no longer relocating and optimize will therefore always succeed.
However, because the test does not wait for green status, relocations can
still start after waitForRelocation() has returned successfully.
see #13266 for a detailed explanation
At the moment, if an indexed script is used in a request, the spawned request to get the indexed script from the `.scripts` index does not get the headers and context copied to it from the original request. This change makes the calls to the `ScriptService` pass in a `HasContextAndHeaders` object that can provide the headers and context. For the `search()` method the context and headers are retrieved from `SearchContext.current()`.
Closes #12891
The shaded version of elasticsearch was built at the very beginning to avoid dependency conflicts in a specific case where:
* People use elasticsearch from Java
* People need to embed the elasticsearch jar within their own application (as it's today the only way to get a `TransportClient`)
* People also embed in their application another (most of the time older) version of a dependency that elasticsearch uses, such as Guava, Joda, Jackson...
This conflict can be solved within the projects themselves, either by upgrading the dependency version and using the one provided by elasticsearch, or by shading the elasticsearch project and relocating some conflicting packages.
Example
-------
As an example, let's say you want to use `Joda 2.1` within your project, but elasticsearch `2.0.0-beta1` provides `Joda 2.8`.
Let's say you also want to run all that with the shield plugin.
Create a new maven project or module with:
```xml
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<elasticsearch.version>2.0.0-beta1</elasticsearch.version>
</properties>
<dependencies>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>shield</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
</dependencies>
```
And now shade and relocate all packages that conflict with your own application:
```xml
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<relocations>
<relocation>
<pattern>org.joda</pattern>
<shadedPattern>fr.pilato.thirdparty.joda</shadedPattern>
</relocation>
</relocations>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
You can now create a shaded version of elasticsearch + shield by running `mvn clean install`.
In your project, you can now depend on:
```xml
<dependency>
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.1</version>
</dependency>
```
Then build your TransportClient as usual:
```java
TransportClient client = TransportClient.builder()
.settings(Settings.builder()
.put("path.home", ".")
.put("shield.user", "username:password")
.put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
)
.build();
client.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("localhost", 9300)));
// Index some data
client.prepareIndex("test", "doc", "1").setSource("foo", "bar").setRefresh(true).get();
SearchResponse searchResponse = client.prepareSearch("test").get();
```
If you want to use your own version of Joda, then import for example `org.joda.time.DateTime`. If you want to access the shaded version (not recommended though), import `fr.pilato.thirdparty.joda.time.DateTime`.
You can run a simple test to make sure that both classes can live together within the same JVM:
```java
CodeSource codeSource = new org.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("unshaded = " + codeSource);
codeSource = new fr.pilato.thirdparty.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("shaded = " + codeSource);
```
It will print:
```
unshaded = (file:/path/to/joda-time-2.1.jar <no signer certificates>)
shaded = (file:/path/to/es-shaded-1.0-SNAPSHOT.jar <no signer certificates>)
```
This PR also removes the fully-loaded module.
By the way, the project can now be built with Maven 3.3.3, so we can relax our maven policy a bit.
Currently, we do not allow reads on shards which are in POST_RECOVERY, which
unfortunately can cause search failures on shards which just recovered if there are no replicas (#9421).
The reason why we did not allow reads on shards that are in POST_RECOVERY is
that after relocating a shard might miss a refresh if the node that executed the
refresh is behind with cluster state processing. If that happens, a user might execute
index/refresh/search but still not find the document that was indexed.
We changed how refresh works now in #13068 to make sure that shards cannot miss a refresh this
way by sending refresh requests the same way that we send write requests.
This commit changes IndexShard to allow reads on POST_RECOVERY now.
In addition it adds two tests:
- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)
- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY
Closes #9421
DateHistogramIT has suite scope and therefore must take care of cleaning up after
each test. Otherwise we cannot run tests several times with tests.iters.
The implementation this commit replaces was almost k-NN regression with
k=2, but had two bugs: (a) it depends on the empirical raw estimates
being in strictly non-decreasing order for the binary search (which they
are not); and (b) it weights the biases positively with increased
distance from the corresponding raw estimate.
“HyperLogLog in Practice” leaves the choice of exact algorithm here
fairly vague, just noting: “We use k-nearest neighbor interpolation to
get the bias for a given raw estimate (for k = 6).” The majority of
other open source HyperLogLog++ implementations appear to use k-NN
regression with uniform weights (and generally k = 6). Uniform
weighting does decrease variance, but also introduces bias at the domain
extrema. This problem, plus the use of the word “interpolation” in the
original paper, suggests (inverse) distance-weighted k-NN, as
implemented here.
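A sketch of inverse distance-weighted k-NN over the empirical bias tables (simplified, operating on plain arrays rather than the precomputed constants used by the aggregation):
```java
import java.util.Arrays;

class BiasInterpolationSketch {
    /**
     * Estimates the bias for a raw HyperLogLog++ estimate by weighting the k nearest
     * empirical data points by the inverse of their distance to the raw estimate.
     */
    static double estimateBias(double rawEstimate, double[] rawEstimates, double[] biases, int k) {
        double[][] neighbours = new double[rawEstimates.length][2];
        for (int i = 0; i < rawEstimates.length; i++) {
            neighbours[i][0] = Math.abs(rawEstimates[i] - rawEstimate); // distance
            neighbours[i][1] = biases[i];                               // bias at that point
        }
        Arrays.sort(neighbours, (a, b) -> Double.compare(a[0], b[0]));

        double weightSum = 0, weightedBias = 0;
        for (int i = 0; i < Math.min(k, neighbours.length); i++) {
            // Inverse-distance weights; guard against division by zero on an exact match.
            double weight = 1.0 / Math.max(neighbours[i][0], 1e-9);
            weightSum += weight;
            weightedBias += weight * neighbours[i][1];
        }
        return weightedBias / weightSum;
    }
}
```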
Since #13068 refresh and flush requests go to the primary first and are then replicated.
One difference to before is though that if a shard is not available (INITIALIZING for example)
we wait a little for an indexing request but for refresh we don't and just give up immediately.
Before, refresh requests were just sent to the shards regardless of their state.
In tests we sometimes create an index, issue an indexing request, refresh and
then get the document. But we do not wait until all nodes know that all primaries have been assigned.
Now potentially one node can be one cluster state behind and not yet know that
the shards have been started. If the refresh is executed through this node, the
refresh request will silently fail on shards that are already started, because from
that node's perspective they are still initializing. As a consequence, documents
that are expected to be available in the test are not.
Example test failures are here: http://build-us-00.elastic.co/job/elasticsearch-20-oracle-jdk7/395/
This commit changes the timeout to 1m (default) to make sure we don't miss shards
when we refresh. This will trigger the same retry mechanism as for indexing requests.
We still have to decide whether this change of behavior is acceptable.
see #13238
From a user perspective, the main benefit from this upgrade is that the new
Lucene53Codec has disk-based norms. The elasticsearch directory has been fixed
to load these norms through mmap instead of nio.
Other changes include the removal of `max_thread_states`, the fact that
PhraseQuery and BooleanQuery are now immutable, and that deleted docs are now
applied on top of the Scorer API.
This change introduces a couple of `AwaitsFix`es, but I don't think it should
hold us back from merging.
This allows reducing privileges with doPrivileged to work;
otherwise it will fail with an NPE.
In general, if some code wants to do that, let it. The null
check is needed, even though ProtectionDomain(CodeSource, PermissionCollection)
is more than a bit misleading: "the current Policy will not be consulted".
Additionally add a defensive check for location, since the docs
there are even more confusing: https://bugs.openjdk.java.net/browse/JDK-8129972
The JDK policy impl has both of these checks.
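Roughly, the checks look like the following sketch of a custom Policy#implies (simplified; the actual permission lookup is elided):
```java
import java.net.URL;
import java.security.CodeSource;
import java.security.Permission;
import java.security.Policy;
import java.security.ProtectionDomain;

class NullSafePolicySketch extends Policy {
    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
        CodeSource codeSource = domain.getCodeSource();
        if (codeSource == null) {
            // A domain used to reduce privileges, e.g. new ProtectionDomain(null, permissions),
            // has no code source; without this check the lookup below would NPE.
            return false;
        }
        URL location = codeSource.getLocation();
        if (location == null) {
            // Defensive: the docs are unclear about whether location can be null (JDK-8129972).
            return false;
        }
        // ... look up the permissions granted to this location and check `permission` ...
        return false;
    }
}
```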
Several shard-level operations that previously broadcasted a request
per shard were converted to broadcast a request per node. This commit
converts upgrade action to this new model as well.
Closes #13204
Switch to use HTTPS by default for all hardcoded plugin URLs.
If users want to install via HTTP they can still specify an HTTP
URL manually.
Closes #12748
The current netty multiport tests bind on localhost and then try to connect
to 127.0.0.1, which may fail if localhost resolves to ipv6 by default.
This randomly chooses between 127.0.0.1, localhost and ::1 (if available) for
binding and then uses that address throughout the test.
The path that a shard is allocated on is not taken into account when
we decide to move a shard away from a node because it passed a watermark.
Even worse, we potentially moved away (relocated) a shard that was not even
allocated on that disk but on another disk on the node in question. This commit
adds a ShardRouting -> dataPath mapping to ClusterInfo that allows identifying
which disk each shard is allocated on.
Relates to #13106
The field type tests for mappings had a huge hole: checkCompatibility
was not tested directly at all! I had meant for this to happen in a
follow-up after #8871, and was relying on existing mapping tests.
However, there were a number of issues.
This change reworks the field type tests to check that all settable
properties on a field type work with checkCompatibility. It fixes a
handful of small bugs in various field types. In particular, analyzer
comparison was just wrong: it compared reference equality for the
search analyzer instead of comparing the analyzer name. There was also
no check for the search quote analyzer.
Closes #13112
```sh
bin/plugin install lmenezes/elasticsearch-kopf/develop
-> Installing lmenezes/elasticsearch-kopf/develop...
Trying http://download.elastic.co/lmenezes/elasticsearch-kopf/elasticsearch-kopf-develop.zip ...
Trying http://search.maven.org/remotecontent?filepath=lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://oss.sonatype.org/service/local/repositories/releases/content/lmenezes/elasticsearch-kopf/develop/elasticsearch-kopf-develop.zip ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip ...
Downloading ................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/develop.zip checksums if available ...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
```
This happens because we no longer get an ElasticsearchWrapperException here but standard Java exceptions.
Closes #13196.
Currently, many shard-level operations are transported with a request
per shard via TransportBroadcastAction. These shard-level requests are
then submitted to unbounded execution queues for asynchronous execution
on the receiving node. This transport mechanism and stuffing of the
execution queues can be problematic on large clusters. A better
mechanism would be to aggregate the shard-level requests, transport
them via a single request per node, and execute the shard-level
operations serially on the receiving node.
This commit introduces TransportNodeBroadcastAction which is the
high-level mechanism for transporting the shard-level operations in a
single request per node. The shard-level operations are executed
serially on the receiving node and per-node shard-level results are
aggregated into a single response per node. These node-level results
are then aggregated into a single response to the initial request.
One item of note is a new mechanism for registering request handlers.
This mechanism enables registrants to provide a callback for
instantiating new instances of the request class. Doing this enables
the inner class to be instantiated with the context of its outer class.
This is done so that a single NodeRequest class can be defined rather
than defining a class per operation.
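The idea behind the request-factory callback, in plain Java with illustrative names (not the actual transport API):
```java
import java.util.concurrent.Callable;

class NodeBroadcastSketch {
    // Inner, non-static request class: instances need an enclosing NodeBroadcastSketch.
    class NodeRequest {
        // shard-level requests destined for one node would be aggregated in here
    }

    // Hypothetical registry method: instead of taking a Class and reflectively invoking a
    // no-arg constructor, it takes a callback that can create inner-class instances with
    // their outer-class context already bound.
    static <R> void registerHandler(String action, Callable<R> requestFactory) {
        // a real transport service would store the factory and invoke it per incoming request
    }

    void register() {
        registerHandler("example/action[n]", new Callable<NodeRequest>() { // placeholder action name
            @Override
            public NodeRequest call() {
                return new NodeRequest(); // bound to this outer action instance
            }
        });
    }
}
```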
Closes #7990
Today we sum up the disk usage for the allocation decider, which is broken since
we don't stripe across multiple data paths. Each shard has its own private path
now, but the allocation deciders still treat all paths as one big disk. This commit
allows allocation deciders to access the least used and most used path to make
better allocation decisions in canRemain and canAllocate calls.
Yet, this commit doesn't fix all the issues, since we still can't tell which shard
can remain and which can't. This problem is out of scope for this commit and will be solved
in a follow-up commit.
Relates to #13106
Previously, multiple clusters in the same JVM reused the same port ranges, leading to potentially big gaps in port selection, which in turn causes unicast-based discovery to fail, since it misses finding another node in the default 5-port range.
Also, the previous logic had http use a range that is assigned to other JVMs.
We default the value to 20x the ping timeout, however we only use the legacy ping timeout setting's value for the calculation.
Closes #13162
This commit removes and now forbids all uses of
com.google.common.collect.Lists across the codebase. This is the first
of many steps in the eventual removal of Guava as a dependency.
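The typical replacement is mechanical, for example:
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ListsReplacementExample {
    void example() {
        // Before: List<String> names = Lists.newArrayList("a", "b");
        List<String> names = new ArrayList<>(Arrays.asList("a", "b")); // plain JDK equivalent
    }
}
```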
lessThanOrEqualTo is more appropriate than lessThan when comparing _ttl,
because in rare cases, when tests run very fast, the ttl you fetch will
still equal the one you sent.
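For example, with Hamcrest (the values here are made up):
```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.lessThanOrEqualTo;

class TtlAssertionExample {
    void example() {
        long providedTtl = 100000L; // ttl sent with the index request
        long fetchedTtl = 100000L;  // ttl fetched immediately afterwards in a very fast test run
        // lessThan(providedTtl) would fail here even though nothing is wrong
        assertThat(fetchedTtl, lessThanOrEqualTo(providedTtl));
    }
}
```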
Some listeners may need to do work before a shard's path is
accessed (such as creating the directory in a plugin), so the listener
should be called before anything happens (as its name implies).
detect_noop is pretty cheap and noop updates are comparatively expensive, so this
feels like a sensible default.
Also had to do some testing and documentation around how _ttl works with
detect_noop.
Closes #11282
As discussed in #11744, this is the last step to unify parsing of boost and _name. Those fields are supported only in the long version of queries, while we sometimes parse them when we shouldn't, and inconsistently.
Closes #11744
Closes #12966
Since we now don't stripe shards across data paths, we need a way
to access the information about which path a shard is allocated on, in order to
eventually make better allocation decisions based on disk usage etc.
This commit exposes the shard paths as part of the shard stats.
Relates to #13106