Currently we support empty query clauses like the filter in
`"constant_score" : { "filter" : { } }`
How these clauses are handled depends on the surrounding query. They are later
either ignored, converted to match-all or match-no-docs queries, or passed
further up the query hierarchy. During parsing, these clauses are currently
represented as EmptyQueryBuilders. When not handled anywhere else, these
special cases need to be checked for on the shard when building the Lucene
query.
This is trappy, so this PR changes the parsing of compound queries. Instead
of returning a QueryBuilder, the core query parsing method
QueryShardContext#parseInnerQueryBuilder() now returns an Optional which can
be empty in the case of empty query clauses. This has the advantage of forcing
callers to deal with this sooner or later. When encountering empty Optionals,
compound query builders now have the choice to ignore them, pass them on, or
rewrite to a different query, depending on context.
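For illustration, a search request containing such an empty clause might look like the one below; the index name `logs` is an assumption for the example, and how the empty filter is resolved depends on the surrounding compound query as described above.
```BASH
$ curl -XGET 'http://localhost:9200/logs/_search' -d '{
    "query" : {
        "constant_score" : {
            "filter" : { }
        }
    }
}'
```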
This commit adds a new aggs-matrix-stats module. The module presents a new class of aggregations called Matrix Aggregations. Matrix aggregations work on multiple fields and produce a matrix as output. The first matrix aggregation provided by the module is the matrix_stats aggregation (a request sketch follows the lists below). This aggregation computes the following statistics over a set of fields:
* Covariance
* Correlation
For completeness (and interpretation purposes) the following per-field statistics are also provided:
* sample count
* population mean
* population variance
* population skewness
* population kurtosis
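As a sketch, such an aggregation could be requested as follows; the index name `stats` and the field names `income` and `poverty` are assumptions made for the example, not part of the module.
```BASH
$ curl -XGET 'http://localhost:9200/stats/_search?size=0' -d '{
    "aggs" : {
        "income_poverty_stats" : {
            "matrix_stats" : {
                "fields" : [ "income", "poverty" ]
            }
        }
    }
}'
```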
This commit adds a note regarding the difference in configuring the heap size
for the Windows service compared to any other installation of Elasticsearch.
Relates #18606
The setting bootstrap.mlockall is useful on both POSIX-like systems
(POSIX mlockall) and Windows (Win32 VirtualLock). But mlockall is really
a POSIX-only thing, so the name should not be tied to POSIX. This commit
renames the setting to "bootstrap.memory_lock".
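Under the new name, the setting would be enabled like this; the config file location is an assumption based on a default layout.
```BASH
# Enable memory locking under the renamed setting (config path assumed)
$ echo "bootstrap.memory_lock: true" >> config/elasticsearch.yml
```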
Relates #18669
Index deletion requests currently use a custom acknowledgement mechanism that waits for the data nodes to actually delete the data before acknowledging the request to the client. This was initially put into place because a new index with the same name could only be created once the old index was wiped, as we used the index name as the data folder on the data nodes. With PR #16442, we now use the index UUID as the folder name, which avoids collisions between indices that are named the same (deleted and recreated). This allows us to get rid of the custom acknowledgement mechanism altogether and rely on the standard cluster state-based acknowledgement instead.
Closes #18558
If this option is enabled on a processor, it silently catches any processor-related failure and continues executing the rest of the pipeline.
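A minimal sketch of setting such an option on a processor, assuming the option is named `ignore_failure` and using a `rename` processor with an example pipeline id and field names:
```BASH
$ curl -XPUT 'http://localhost:9200/_ingest/pipeline/my-pipeline' -d '{
    "description" : "example pipeline",
    "processors" : [
        {
            "rename" : {
                "field" : "src_field",
                "target_field" : "dst_field",
                "ignore_failure" : true
            }
        }
    ]
}'
```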
Closes #18493
We currently throw ConnectTransportException, which is logged at trace level, hiding
potentially important information when an old or wrong node wants to connect. We should
throw an IllegalStateException and log at warn level instead.
Similar reasoning to #18133, but for the aggs API. One important change is that
I moved the base PipelineAggregatorBuilder class to the o.e.s.aggregations
package instead of o.e.s.aggregations.pipeline so that the create method does
not need to be public.
- Share applying templates with MetaDataCreateIndexService and MetaDataIndexTemplateService
- Add some unit tests
- Collapse addMappingsToMapperService and move it to MapperService
- Extract validateTemplate method
- Use expectThrows in test case
- Add TODO comment
Closes #2415
This commit clarifies the behavior that must be adhered to by any
implementors of the BlobContainer interface. This is done through
expanded Javadocs.
Closes #18157
Closes #15580
Gradle has "rules" for certain task names, and clean is one of these.
When you run clean, it searches for any tasks named cleanX, and tries to
reverse engineer the X task. For eclipse, this means it finds
cleanEclipse, and internally runs it (but this does not show up as a
dependency of clean in tasks list!!). Since we added .settings as an
additional file to delete with cleanEclipse, this gets deleted when
running just "clean". It doesn't delete the other files because those
have their own clean methods, and dependencies are not followed in this
insanity. This change simply makes a separate task for cleaning eclipse
settings.
The page cache recycler has a dependency on the thread pool that was there
for historical reasons but is no longer needed. This commit removes this
now-unneeded dependency.
Relates #18664
Contains a number of cleanups related to recent changes in RoutingNodes:
- PR #17821 (Immutable ShardRouting) changed RoutingNode to use a map indexed by ShardId to manage ShardRouting elements. This means that we can directly select the right ShardRouting without iterating over all elements. This lets us get rid of RoutingNodeIterator and all kinds of iteration all over the place.
- Second cleanup is an extension of #18390 (Expose cluster state before reroute in RoutingAllocation instead of RoutingNodes). We should not reexpose RoutingTable in RoutingNodes and only use it in the constructor. This makes it clear that the RoutingTable is only used to construct the RoutingNodes and can diverge from it afterwards (only RoutingNodes is mutable).
- Remove AllocationService.applyNewNodes() (this is already done as part of the construction of RoutingNodes)
When we shrink an index we can estimate the shard size for the primary
from the source index. This is important for allocation decisions since we
should try our best to ensure we have enough space on the node onto which we
shrink the index.
Currently we return `null` when the query in a common terms query has
zero tokens after analysis. It would be better if query builders'
`toQuery()` never returned null and returned a meaningful Lucene
query instead. Since an ExtendedCommonTermsQuery with no terms gets
rewritten to a MatchNoDocsQuery later, it is enough to leave out the
check for zero tokens.
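For illustration, a request like the following would hit that case when the query text analyzes to zero tokens, for instance when the field's analyzer removes stopwords; the index and field names here are assumptions for the example.
```BASH
$ curl -XGET 'http://localhost:9200/articles/_search' -d '{
    "query" : {
        "common" : {
            "body" : {
                "query" : "the and or",
                "cutoff_frequency" : 0.001
            }
        }
    }
}'
```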
The fact that ip fields used a different doc values representation in 2.x causes
issues when querying 2.x and 5.0 indices in the same request. This change hides the
2.x doc values on ip fields behind binary doc values that use the
same encoding as 5.0. This way the coordinating node will be able to merge shard
responses that have different major versions.
One known issue is that this makes sorting/aggregating on ip fields slower for
indices that have been generated with Elasticsearch 2.x.
This adds a low-level primitive operation to shrink an existing
index into a new index with a single shard. This primitive expects
all shards of the source index to be allocated on a single node. Once the target index is initializing on the shrink node, it takes a snapshot of the source index shards and copies all files into the target index's data folder. An [optimization](https://issues.apache.org/jira/browse/LUCENE-7300) coming in Lucene 6.1 will also allow for an optional constant-time copy if hard links are supported by the filesystem. All mappings are merged into the new index's metadata once the snapshots have been taken on the shrink node.
To shrink an existing index, all shards must be moved to a single node (one instance of each shard) and the index must be made read-only:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_settings' -d '{
    "settings" : {
        "index.routing.allocation.require._name" : "shrink_node_name",
        "index.blocks.write" : true
    }
}'
```
Once all shards are started on the shrink node, the new index can be created via:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_shrink/logs_single_shard' -d '{
    "settings" : {
        "index.codec" : "best_compression",
        "index.number_of_replicas" : 1
    }
}'
```
This API will perform all needed checks before the new index is created and selects the shrink node based on the allocation of the source index. This call returns immediately; to monitor shrink progress, the recovery API should be used, since all copy operations are reflected in the recovery API with byte copy progress etc.
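For example, progress could be watched with a recovery API call like the one below, using the target index name from the example above.
```BASH
$ curl -XGET 'http://localhost:9200/logs_single_shard/_recovery?pretty'
```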
The shrink operation does not modify the source index; if a shrink operation should
be canceled or if the shrink fails, the target index can simply be deleted and
all resources are released.
When calling `findTemplateBuilder(context, currentFieldName, "text", null)`,
Elasticsearch ignores all templates that have a `match_mapping_type` set since
no dynamic type is provided (the last parameter, which is null in that case).
So this should only be called _last_. Otherwise, if a path-based template
matches, it will have precedence over all type-based templates.
Closes #18625
We have 3 evil tests for jarhell. They have been failing in Java 9
because of how evil they are. The first checks the leniency we add for
jarhell in the JDK itself. This is unnecessary, since if the leniency
wasn't there, we would already be failing all jarhell checks. The second
checks that the compile version is compatible with the JDK. This is
simpler since we don't need to fake the Java version: we know 1.7 should
be compatible with both Java 8 and 9, so we can use that as a constant.
Finally, the last test checks whether the java version system property is
broken. This is simply something we should not check; we have to trust
that Java specifies it correctly, and again, if it were broken, all
jarhell checks would be broken.