Add support for a dedicated deprecation logger that can be turned on
in order to notify users that a specific feature, flag, setting,
parameter, ... is being deprecated.
The deprecation logger logs under a "deprecation." logger prefix
(or "org.elasticsearch.deprecation." if the full name is used), and writes
its output to a dedicated deprecation log file.
Deprecation messages are logged at the DEBUG level. The idea is not to
enable them by default (which logging them at WARN or ERROR would do) when
Elasticsearch runs embedded in another application.
By default they are turned off (the level is INFO); in order to turn them on,
the "deprecation" category needs to be set to DEBUG. This can be done in the
logging configuration file or using the cluster update settings API, see the
documentation.
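For example, turning it on at runtime via the cluster update settings API could look roughly like this (a sketch: the "logger.deprecation" setting key is an assumption, check the documentation for the exact form):
```
# sketch only: the "logger.deprecation" key is assumed, not confirmed by this change
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "logger.deprecation" : "DEBUG"
  }
}'
```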
Closes#11033
In order to be sure that a release can be executed on the local machine,
the build_release script now checks for environment variables and tries
to execute a couple of commands.
In order to easily check for a correctly set up environment, you can
run the following command, which exits early and does not trigger a
release process.
```
python3 dev-tools/build_release.py --check-only
```
This change adds a new "filter_path" parameter that can be used to filter and reduce the responses returned by the Elasticsearch REST API.
For example, returning only the shards that failed to be optimized:
```
curl -XPOST 'localhost:9200/beer/_optimize?filter_path=_shards.failed'
{"_shards":{"failed":0}}%
```
It supports multiple filters (separated by a comma):
```
curl -XGET 'localhost:9200/_mapping?pretty&filter_path=*.mappings.*.properties.name,*.mappings.*.properties.title'
```
It also supports the YAML response format. Here it returns only the `_id` field of a newly indexed document:
```
curl -XPOST 'localhost:9200/library/book?filter_path=_id' -d '---hello:\n world: 1\n'
---
_id: "AU0j64-b-stVfkvus5-A"
```
It also supports wildcards. Here it returns only the host name of every node in the cluster:
```
curl -XGET 'http://localhost:9200/_nodes/stats?filter_path=nodes.*.host*'
{"nodes":{"lvJHed8uQQu4brS-SXKsNA":{"host":"portable"}}}
```
And "**" can be used to include sub fields without knowing the exact path. Here it returns only the Lucene version of every segment:
```
curl 'http://localhost:9200/_segments?pretty&filter_path=indices.**.version'
{
  "indices" : {
    "beer" : {
      "shards" : {
        "0" : [ {
          "segments" : {
            "_0" : {
              "version" : "5.2.0"
            },
            "_1" : {
              "version" : "5.2.0"
            }
          }
        } ]
      }
    }
  }
}
```
Note that Elasticsearch sometimes returns the raw value of a field directly, as it does for the `_source` field. If you want to filter `_source` fields, you should consider combining the already existing `_source` parameter (see the Get API for more details) with the `filter_path` parameter like this:
```
curl -XGET 'localhost:9200/_search?pretty&filter_path=hits.hits._source&_source=title'
{
  "hits" : {
    "hits" : [ {
      "_source":{"title":"Book #2"}
    }, {
      "_source":{"title":"Book #1"}
    }, {
      "_source":{"title":"Book #3"}
    } ]
  }
}
```
We used to keep track of the rewritten query in the highlighter context to support custom rewriting done by our own postings highlighter fork. Now that we rely on the Lucene implementation, no rewrite happens, so we can simply keep track of the original query and simplify the code around it.
Closes#11317
TransportShardReplicationOperationAction is a mouthful, and replication is the only thing we mean when we say it, so this commit gives it a shorter name. It also renames some related friends.
The bin/plugin script now uses the default CONF_DIR & CONF_FILE environment variables. This makes it possible to install a plugin even if Elasticsearch has been installed with an RPM or a DEB package. This commit also adds test files for the TAR archive and for plugin installation.
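For instance, on a DEB/RPM installation the script could be pointed at the packaged configuration roughly like this (a sketch: the paths and plugin name are illustrative, not taken from this change):
```
# sketch only: paths and plugin name are illustrative
CONF_DIR=/etc/elasticsearch /usr/share/elasticsearch/bin/plugin install analysis-icu
```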
Closes#10673
Today, when loading plugins from the classpath we take the enumeration
given to us by the classloader and attempt to load every URL. This can
cause issues as certain classloaders, such as Groovy's, will return the same
URL multiple times in the enumeration. When this happens, startup can fail
with Guice errors because bindings have already been registered.
To work around this, we create a set from the URLs returned by the classloader
to provide uniqueness.
Simplification of MultiValueMode by removing the apply and reduce
methods for each mode. This creates a more consistent environment for
sorting methods since all sorting must now go through select methods.
This allows for better error handling and better encapsulation for
sorting fields with multiple values.
Note that apply and reduce had inconsistencies in the code base
prior to this change, since some callers assumed that the accumulator
for apply was the first input while others assumed it was the second.
Also added is an UnsortedNumericDoubleValues interface to allow
customized values to be input into the different sort modes. This
prevents the need for apply/reduce outside of MultiValueMode.
closes#11290
In order to get some information about whether the TTL purger thread could
successfully delete all documents per bulk execution, this commit
adds some logging. TRACE-level logging will potentially contain
a lot of information about all the bulk failures.
Closes#11019
The Sampler agg was not capable of collecting samples for more than one parent bucket.
Added a JUnit test case and changed BestDocsDeferringCollector to internally maintain collections per parent bucket.
Closes#10719