This change adds a new `filter_path` parameter that can be used to filter and reduce the responses returned by the Elasticsearch REST API.
For example, returning only the shards that failed to be optimized:
```
curl -XPOST 'localhost:9200/beer/_optimize?filter_path=_shards.failed'
{"_shards":{"failed":0}}%
```
It supports multiple filters, separated by commas:
```
curl -XGET 'localhost:9200/_mapping?pretty&filter_path=*.mappings.*.properties.name,*.mappings.*.properties.title'
```
It also supports the YAML response format. Here it returns only the `_id` field of a newly indexed document:
```
curl -XPOST 'localhost:9200/library/book?filter_path=_id&format=yaml' -d '{"title": "Book #1"}'
---
_id: "AU0j64-b-stVfkvus5-A"
```
It also supports wildcards. Here it returns only the host name of every node in the cluster:
```
curl -XGET 'http://localhost:9200/_nodes/stats?filter_path=nodes.*.host*'
{"nodes":{"lvJHed8uQQu4brS-SXKsNA":{"host":"portable"}}}
```
And "**" can be used to include sub fields without knowing the exact path. Here it returns only the Lucene version of every segment:
```
curl 'http://localhost:9200/_segments?pretty&filter_path=indices.**.version'
{
  "indices" : {
    "beer" : {
      "shards" : {
        "0" : [ {
          "segments" : {
            "_0" : {
              "version" : "5.2.0"
            },
            "_1" : {
              "version" : "5.2.0"
            }
          }
        } ]
      }
    }
  }
}
```
Note that Elasticsearch sometimes returns the raw value of a field directly, like the `_source` field. If you want to filter `_source` fields, consider combining the existing `_source` parameter (see the Get API for more details) with the `filter_path` parameter, like this:
```
curl -XGET 'localhost:9200/_search?pretty&filter_path=hits.hits._source&_source=title'
{
  "hits" : {
    "hits" : [ {
      "_source":{"title":"Book #2"}
    }, {
      "_source":{"title":"Book #1"}
    }, {
      "_source":{"title":"Book #3"}
    } ]
  }
}
```
We used to keep track of the rewritten query in the highlighter context to support custom rewriting done by our own fork of the postings highlighter. Now that we rely on the Lucene implementation, no rewrite happens, so we can simply keep track of the original query and simplify the code around it.
Closes #11317
`TransportShardReplicationOperationAction` is a mouthful, and it is the only thing we mean when we say "replication". This commit renames it, along with some related friends.
The `bin/plugin` script now uses the default `CONF_DIR` and `CONF_FILE` environment variables. This makes it possible to install a plugin even if Elasticsearch has been installed from an RPM or a DEB package. This commit also adds test files for TAR archive and plugin installation.
Closes #10673
Today, when loading plugins from the classpath we take the enumeration
given to us by the classloader and attempt to load every URL. This can
cause issues as certain classloaders, such as groovy's, will return the same
URL multiple times in the enumeration. When this happens, startup can fail
with guice errors as bindings have already been registered.
To work around this, we create a set from the URLs returned by the classloader
to guarantee uniqueness.
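A minimal sketch of the workaround, assuming plugins are discovered via a classpath resource (the resource name here is illustrative):
```
import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;
import java.util.LinkedHashSet;
import java.util.Set;

class PluginResources {
    // Collect the resource URLs into a set: classloaders such as groovy's
    // may return the same URL several times, and loading a plugin twice
    // leads to duplicate guice bindings at startup.
    static Set<URL> uniquePluginUrls(ClassLoader loader) throws IOException {
        Set<URL> urls = new LinkedHashSet<>();
        Enumeration<URL> resources = loader.getResources("es-plugin.properties");
        while (resources.hasMoreElements()) {
            urls.add(resources.nextElement()); // duplicates are dropped by the set
        }
        return urls;
    }
}
```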
Simplification of MultiValueMode by removing the apply and reduce
methods for each mode. This creates a more consistent environment for
sorting methods since all sorting must now go through select methods.
This allows for better error handling and better encapsulation for
sorting fields with multiple values.
Note that apply and reduce had inconsistencies in the code base prior to
this change, since different call sites disagreed on whether the
accumulator passed to apply was the first or the second input.
Also added is an UnsortedNumericDoubleValues interface to allow
customized values to be input into the different sort modes. This
prevents the need for apply/reduce outside of MultiValueMode.
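For illustration (this is not the actual Elasticsearch API), a select-style method picks a single representative value per document in one pass, instead of threading an accumulator through apply and reduce:
```
class MultiValueSelect {
    // Illustrative MAX-style "select": one pass over a document's values,
    // with the missing-value policy in a single place rather than split
    // across apply/reduce implementations.
    static double selectMax(double[] docValues, double missingValue) {
        if (docValues.length == 0) {
            return missingValue; // no values for this document
        }
        double max = Double.NEGATIVE_INFINITY;
        for (double value : docValues) {
            max = Math.max(max, value);
        }
        return max;
    }
}
```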
Closes #11290
In order to get some insight into whether the TTL purger thread could
successfully delete all documents per bulk execution, this commit
adds some logging. TRACE-level logging will potentially contain
a lot of information about all the bulk failures.
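A hedged sketch of what such logging can look like, assuming the BulkResponse API; the class, method names, and messages are illustrative:
```
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.common.logging.ESLogger;

class PurgeLogging {
    // Illustrative sketch: spell out the individual failures only at TRACE,
    // since that output can be very verbose.
    static void logPurgeResult(ESLogger logger, BulkResponse bulkResponse) {
        if (bulkResponse.hasFailures() && logger.isTraceEnabled()) {
            logger.trace("TTL purge bulk had failures: {}", bulkResponse.buildFailureMessage());
        }
        logger.debug("TTL purge bulk executed [{}] items", bulkResponse.getItems().length);
    }
}
```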
Closes #11019
The Sampler agg was not capable of collecting samples for more than one parent bucket.
This commit adds a JUnit test case and changes BestDocsDeferringCollector to internally maintain collections per parent bucket.
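A minimal sketch of the per-bucket approach, assuming Lucene's top-docs collectors; the class and field names are illustrative, not the actual implementation:
```
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.search.TopDocsCollector;
import org.apache.lucene.search.TopScoreDocCollector;

class PerBucketSamples {
    private final int shardSize;
    // One best-docs collection per parent bucket ordinal, instead of a
    // single shared collection that mixes samples across buckets.
    private final Map<Long, TopDocsCollector<?>> perBucket = new HashMap<>();

    PerBucketSamples(int shardSize) {
        this.shardSize = shardSize;
    }

    TopDocsCollector<?> collectorFor(long parentBucket) {
        TopDocsCollector<?> collector = perBucket.get(parentBucket);
        if (collector == null) {
            collector = TopScoreDocCollector.create(shardSize);
            perBucket.put(parentBucket, collector);
        }
        return collector;
    }
}
```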
Closes #10719
This option is currently broken, since it can interpret an incoming binary
value as compressed when the first bytes merely happen to match the LZF
header.
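The ambiguity is easy to see: LZF chunks start with a two-byte magic, so any raw value beginning with those bytes is misdetected. An illustrative sketch of such a check:
```
class LzfSniff {
    // Illustrative: LZF chunks start with the bytes 'Z' 'V'. Any raw,
    // uncompressed binary value that happens to begin with these two
    // bytes is wrongly treated as LZF-compressed.
    static boolean looksLikeLzf(byte[] bytes) {
        return bytes.length >= 2 && bytes[0] == 'Z' && bytes[1] == 'V';
    }
}
```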
Sigar can currently only be disabled by removing the binaries. This is tricky for our
tests and might cause a lot of trouble if a user wants or needs to do it.
This commit allows disabling Sigar with a simple boolean flag in the settings.
Closes #9582
FieldMapper is currently generic, where the templated type
is only used as the return type of a single function, value(Object).
This change simply removes this generic type; it is not needed. The
implementations of value() now have covariant return types (so
those methods have not changed).
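A small illustration of why nothing is lost (class names simplified, not the actual hierarchy):
```
// Before: class FieldMapper<T> { T value(Object raw); } where T was only
// ever used for this one return type. After removing the generic:
abstract class FieldMapper {
    abstract Object value(Object raw);
}

class StringFieldMapper extends FieldMapper {
    @Override
    String value(Object raw) { // covariant return: String instead of Object
        return raw == null ? null : raw.toString();
    }
}
```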
If we close the shard before the engine is started, we see an NPE in
the logs. This is not problematic, since the relevant parts are in a
finally block, yet the NPE is unnecessary and can be confusing.
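An illustrative sketch of the kind of guard that avoids the noise (names are made up):
```
class EngineCloser {
    private volatile AutoCloseable engine; // null until the engine is started

    // Illustrative: guard against a never-started engine instead of letting
    // the close path throw a confusing NPE.
    void closeEngineIfStarted() throws Exception {
        AutoCloseable current = this.engine;
        if (current != null) {
            current.close();
        }
    }
}
```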
Today, only the NettyTransportChannel implements the getProfileName method;
the other channel implementations do not. The profile name is useful for some
plugins to perform custom actions based on the name. Rather than checking the
type of the channel, it makes sense to always expose the profile name.
For DirectResponseChannels we use a name that cannot be used in the settings
to define another profile with that name. For LocalTransportChannel we use the
same name as the default profile.
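Sketching the shape of the change with illustrative interfaces (not the exact ones):
```
// Illustrative: every channel now answers getProfileName(), so plugins can
// branch on the profile without instanceof checks on the channel type.
interface TransportChannel {
    String getProfileName();
}

class DirectResponseChannel implements TransportChannel {
    @Override
    public String getProfileName() {
        // a reserved name that settings cannot use for a real profile
        return ".direct";
    }
}

class LocalTransportChannel implements TransportChannel {
    @Override
    public String getProfileName() {
        return "default"; // same name as the default profile
    }
}
```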
Closes #10483
When mapping updates happen concurrently with document parsing, bad things can
happen. For instance, when applying a mapping update we first update the Mapping
object which is used for parsing and then FieldNameAnalyzer which is used by
IndexWriter for analysis. So if you are unlucky, a document could be parsed
successfully without introducing dynamic updates, while IndexWriter does not
yet see its analyzer.
In order to fix this issue, mapping updates are now protected by a write lock
and document parsing is protected by the read lock associated with this write
lock. This ensures that no documents will be parsed while a mapping update is
being applied, so document parsing will either see none of the update or all of
it.
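A minimal sketch of this locking scheme with java.util.concurrent (class and method names are illustrative):
```
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MappingLock {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // Mapping updates take the write lock: the Mapping object and the
    // FieldNameAnalyzer are swapped while no document is being parsed.
    void applyMappingUpdate(Runnable update) {
        lock.writeLock().lock();
        try {
            update.run();
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Document parsing takes the read lock: many parses can run
    // concurrently, but each sees either none of an update or all of it.
    void parseDocument(Runnable parse) {
        lock.readLock().lock();
        try {
            parse.run();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```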
We recently introduced support for reading and writing lists of strings as part of #11056, but that was an oversight; we should be using arrays instead.
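A hedged sketch, assuming the StreamOutput/StreamInput serialization API; the surrounding class is illustrative:
```
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

class WireExample {
    String[] fields;

    // Illustrative: serialize as a fixed-size string array instead of a
    // generic list of strings.
    void writeTo(StreamOutput out) throws IOException {
        out.writeStringArray(fields);
    }

    void readFrom(StreamInput in) throws IOException {
        this.fields = in.readStringArray();
    }
}
```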
Closes #11276
The downside of having createTypeUids accept only a List is that if you provide a Collection, nothing breaks at compile time, but you end up calling the overload that accepts an Object as its second argument. Both methods are renamed to `createUidsForTypesAndId` and `createUidsForTypesAndIds` to avoid such clashes. The latter now accepts a Collection of Objects rather than just a List.
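The pitfall in miniature, with illustrative signatures rather than the real ones:
```
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

class OverloadPitfall {
    static void createUids(String type, List<?> ids) {
        System.out.println("list overload");
    }

    static void createUids(String type, Object id) {
        System.out.println("object overload");
    }

    public static void main(String[] args) {
        Collection<String> ids = new HashSet<>();
        // Compiles fine, but a Collection is not a List, so this silently
        // binds to the Object overload and treats the whole set as one id.
        createUids("book", ids);
    }
}
```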
Closes #11263
We fetch the state version to find the right shard to be started as
the primary. This can return a valid shard state even if the shard is
corrupted and can't even be opened. This commit adds best-effort detection
for this scenario and returns an invalid version for the shard if it's corrupted.
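A best-effort sketch of the idea using plain Lucene (illustrative; the actual change lives in the shard state/version logic):
```
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;

class ShardStateVersion {
    // Illustrative: if the index cannot even be opened (e.g. it is
    // corrupted), report an invalid version so this copy is never
    // picked as the primary.
    static long safeVersion(Directory dir, long version) {
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            return version;
        } catch (IOException e) { // includes CorruptIndexException
            return -1;
        }
    }
}
```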
Closes #11226
Similar to the batching of "shards-started" actions, this commit implements batching of snapshot status updates. This is useful when backing up many indices as the cluster state does not need to be republished as many times.
Closes #10295