* Accept an array of field names and boosts in the index.query.default_field setting
This commit allows defining an array of field names and boosts for the index setting `index.query.default_field`.
The format is equivalent to the `fields` option of the full text search queries (e.g. `field_name^boost`).
This commit also makes this setting dynamically updatable.
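For illustration, a settings update of this form should now be possible (a minimal sketch; the index name `my_index` and the fields `title` and `body` are hypothetical):
```
PUT my_index/_settings
{
  "index.query.default_field": ["title^3", "body"]
}
```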
Fixes #25946
* More XContent migrations
* Removes ToXContentToBytes
* Adds toString to classes that used to extend ToXContentToBytes
* use XContentHelper
* more review comments
* prettify toString output
The test verifies that search on the primary works by executing a search with preference _primary. If the primary is relocating, however, the
preference does not take the primary relocation target into account. The test therefore only makes sense if balancing is not happening yet, i.e., the
cluster is not green.
The javadoc tool on JDK 9 has issues with the combination of anonymous classes and varargs parameters.
This commit simply refactors a few anonymous classes to private inner classes.
This PR begins the long journey to deprecating Streamable.
The idea here is to add additional method signatures that
support Writeable.Reader, so that work can begin on migrating
TransportMessage objects to implement Writeable instead of Streamable.
One example conversion is done in this PR: SimulatePipelineRequest.
This commit extracts the inner query in the ESToParentBlockJoinQuery for highlighting.
This query was added in 5.4 and breaks plain highlighting on nested queries.
Highlighters that use postings or term vectors are not affected because they can't highlight nested documents correctly.
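For context, the kind of request affected looks roughly like the following sketch (the index name and the `comments` nested field are hypothetical), where the plain highlighter is asked to highlight a field matched by a nested query:
```
GET my_index/_search
{
  "query": {
    "nested": {
      "path": "comments",
      "query": { "match": { "comments.text": "elasticsearch" } }
    }
  },
  "highlight": {
    "fields": { "comments.text": { "type": "plain" } }
  }
}
```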
Fixes #26230
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
Gives allocation commands from the cluster reroute API
the ability to provide messages to be logged once the
cluster state change has been committed.
The purpose of this change is to create a record in the
logs when allocation commands which could potentially
be destructive are applied. The allocate_empty_primary
and allocate_stale_primary commands are the only ones
that currently provide log messages.
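As a reminder of what such a destructive command looks like, a reroute request of roughly this shape (index and node names hypothetical) is the kind of operation that now leaves a record in the logs:
```
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my_index",
        "shard": 0,
        "node": "node-1",
        "accept_data_loss": true
      }
    }
  ]
}
```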
Closes #22821
* Deprecate global_ordinals_hash and global_ordinals_low_cardinality
This change deprecates the `global_ordinals_hash` and `global_ordinals_low_cardinality` hints and
makes the `global_ordinals` execution hint choose internally whether global ordinals should be remapped or the segment ordinals used directly.
These hints are too expert-level to be exposed; we should be able to make the right decision internally based on the agg tree.
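After this change, requests should only need the generic hint, roughly as in the sketch below (index and field names hypothetical); the remapping decision is made internally:
```
GET my_index/_search
{
  "size": 0,
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "execution_hint": "global_ordinals"
      }
    }
  }
}
```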
Currently the `precision` parameter must be a precision level
in the range of [1,12]. In #5042 it was suggested also supporting
distance units like "1km" to automatically approximate the needed
precision level. This change adds this support to the Rest API by
making use of GeoUtils#geoHashLevelsForPrecision.
Plain integer values without a unit are still treated as precision
levels like before. Distance values that are too small to be represented
by a precision level of 12 (values approx. less than 0.056m) are
rejected.
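Assuming this refers to the `geohash_grid` aggregation's `precision` parameter (index and field names below are likewise hypothetical), a request could now look roughly like this:
```
GET my_index/_search
{
  "size": 0,
  "aggs": {
    "grid": {
      "geohash_grid": {
        "field": "location",
        "precision": "1km"
      }
    }
  }
}
```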
Closes #5042
The `from` search parameter cannot really be used in scrolled searches. This
commit adds a check for this case to the SearchRequest#validate() method so we
can report it as an error rather than silently ignoring it.
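For example, a request along these lines (hypothetical index name) would now be rejected by validation instead of having its `from` silently ignored:
```
GET my_index/_search?scroll=1m
{
  "from": 10,
  "query": { "match_all": {} }
}
```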
Closes #9373
There was some confusion about the fact that tokens emitted from a Pattern
Capture Token Filter are treated as synonyms when used to analyze a search
query. This commit adds an explanation to the note in the docs to emphasize this
behaviour.
Closes #25746
This change is a continuation of #25726 that aligns field expansions for the simple_query_string with the query_string and multi_match query.
The main changes are:
* For exact field names, the new behavior is to rewrite to a match-no-docs query when the field name is not found in the mapping.
* For partial field names (with * suffix), the expansion is done only on keyword, text, date, ip and number field types. Other field types are simply ignored.
* For all fields (*), the expansion is done on accepted field types only (see above) and metadata fields are also filtered.
The use_all_fields option is deprecated in this change and can be replaced by setting `*` in the fields parameter.
This commit also changes how text fields are analyzed. Previously the default search analyzer (or the provided analyzer) was used to analyze every text part,
ignoring the analyzer set on the field in the mapping. With this change, the field analyzer is used instead unless an analyzer has been forced in the query parameters.
Finally, now that all full text queries can handle the special "*" expansion (`all_fields` mode), `index.query.default_field` is set to `*` for indices created in 6.
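As a sketch of how the deprecated option can be replaced (index name and query text hypothetical), listing `*` in `fields` now triggers the all-fields expansion:
```
GET my_index/_search
{
  "query": {
    "simple_query_string": {
      "query": "quick brown fox",
      "fields": ["*"]
    }
  }
}
```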
This commit converts script query to use a new FilterScript context. The
new context returns a boolean, so the error that would have previously
happened at runtime if a non-boolean was returned will now happen at
script compilation. Also, the leniency of allowing a number to be returned,
with 0 mapping to false and non-zero to true, is gone, but it was never
documented. With the new context, use of unsupported special variables such as
`ctx` will now also fail at compile time instead of at runtime.
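A minimal sketch of a script query that compiles under the new context and returns a boolean (the index and field names are hypothetical):
```
GET my_index/_search
{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "lang": "painless",
            "source": "doc['num_field'].value > 5"
          }
        }
      }
    }
  }
}
```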
Right now we use a custom future for the CloseFuture associated with a
channel. This is because we need special unwrapping logic to ensure that
exceptions from a future failure are a certain type (as opposed to an
UncategorizedException). However, the current version is limiting
because we can only attach one listener.
This commit changes the CloseFuture to extend the
PlainListenableActionFuture. This change allows us to attach multiple
listeners.
When slices is set as auto, there's an additional network call
needed for the reindex tasks to know how to rethrottle. Sometimes
the rethrottle action happens before the reindex task is fully
initialized, so in the test we wait for the task to be ready.
This commit also adds some safeguards to ensure that
cancel and rethrottle operations are handled correctly.
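For reference, the scenario exercised by the test looks roughly like this sketch (the index names and the task id placeholder are hypothetical): a sliced reindex is started as a task and then rethrottled.
```
POST _reindex?slices=auto&wait_for_completion=false
{
  "source": { "index": "source_index" },
  "dest": { "index": "dest_index" }
}

POST _reindex/<task_id>/_rethrottle?requests_per_second=-1
```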
Closes #26192
If we do not have permissions to write the keystore, an unclear access
denied exception is thrown. This commit catches this exception so that
we can decorate it with a friendlier error message.
Relates #26284
When Elasticsearch starts up, it tries to create a keystore if one does
not exist; this is so the keystore can be seeded. With the RPM and
Debian packages, the keystore would be located in
/etc/elasticsearch. This configuration directory is typically not
writable by the elasticsearch user so the Elasticsearch process will not
have permission to create the keystore. Instead, the RPM and Debian
packages should create the keystore (if it does not exist) on package
installation. This commit enables these packages to do that in the
post-install routines.
Relates #26282
We need to check if JAVA_TOOL_OPTIONS and JAVA_OPTS are set, and if
ES_PATH_CONF is not set. However, if these variables are defined and
contain quotes, the current mechanism breaks on them. Instead, we should
use a safer mechanism for checking whether these variables are defined or
not. This commit does that.
Relates #26268
We use `:` for cross-cluster search (e.g. `cluster:index`); therefore, we should
not allow `:` in cluster or index names, to avoid ambiguity.
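The cross-cluster syntax in question looks like the following (the remote cluster alias and index name are hypothetical), which is why a `:` inside a cluster or index name would be ambiguous:
```
GET cluster_one:my_index/_search
{
  "query": { "match_all": {} }
}
```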
Relates to #23892
We previously explicitly set the HOSTNAME environment variable so that
${HOSTNAME} could be used as a placeholder for defining the node.name in
elasticsearch.yml. We removed explicitly setting this because bash
defines HOSTNAME. The problem is that bash defines HOSTNAME as a bash
variable, not as an environment variable. Therefore, to restore the
previous behavior, we export the bash value for HOSTNAME as an
environment variable named HOSTNAME. For consistency between Windows and
the Unix-like systems, we also define HOSTNAME with a value equal to the
environment variable COMPUTERNAME on Windows.
Relates #26262
We quoted some strings in the Windows elasticsearch-env script but echo
on Windows includes these quotes in the output. This commit removes
these quotes; they do not need to be output and are noise. Note that one
of the commands is wrapped in parentheses; this is to make it obvious that
the space at the end of the corresponding line is intentionally there.
The error message for warning about the use of JAVA_TOOL_OPTIONS on
Windows incorrectly uses $JAVA_TOOL_OPTIONS to dereference the
environment variable JAVA_TOOL_OPTIONS; on Windows it should be
%JAVA_TOOL_OPTIONS%.
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadocs are not that useful unless they link to the Elasticsearch classes in the latter case, and to the low-level REST client classes in the sniffer javadoc. This commit adds those links.
Links to inner classes were using `$` in urls instead of `.`, causing
them to 404.
Also fixes the doc generation code to generate docs into the correct
directory. We moved the docs but never updated the generation code.
* Add REST tests for value_count, stats, extended_stats and cardinality aggs
Also updates the document type of other agg REST tests to `doc`
Related to #26220
We already added the functionality to create a new keystore on startup
in #26126 but apparently neglected to persist the keystore. This change adds
persistence and adds a test for the bootstrap loading.
All of the snippets in our docs marked with `// TESTRESPONSE` are
checked against the response from Elasticsearch but, due to the
way they are implemented, they are actually parsed as YAML instead
of JSON. Luckily, all valid JSON is valid YAML! Unfortunately,
that means that invalid JSON has snuck into the examples!
This adds a step during the build to parse them as JSON and fail
the build if they don't parse.
But no! It isn't quite that simple. The displayed text of some of
these responses looks like:
```
{
  ...
  "aggregations": {
    "range": {
      "buckets": [
        {
          "to": 1.4436576E12,
          "to_as_string": "10-2015",
          "doc_count": 7,
          "key": "*-10-2015"
        },
        {
          "from": 1.4436576E12,
          "from_as_string": "10-2015",
          "doc_count": 0,
          "key": "10-2015-*"
        }
      ]
    }
  }
}
```
Note the `...` which isn't valid JSON but we like it anyway and want
it in the output. We use substitution rules to convert the `...`
into the response we expect. That yields a response that looks like:
```
{
  "took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,
  "aggregations": {
    "range": {
      "buckets": [
        {
          "to": 1.4436576E12,
          "to_as_string": "10-2015",
          "doc_count": 7,
          "key": "*-10-2015"
        },
        {
          "from": 1.4436576E12,
          "from_as_string": "10-2015",
          "doc_count": 0,
          "key": "10-2015-*"
        }
      ]
    }
  }
}
```
That is what the tests consume but it isn't valid JSON! Oh no! We don't
want to go update all the substitution rules because that'd be huge and,
ultimately, wouldn't buy much. So we quote the `$body.took` bits before
parsing the JSON.
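For illustration, quoting those references yields something that does parse as JSON (a sketch of the idea, not the literal build output):
```
{
  "took": "$body.took",
  "timed_out": false,
  "_shards": "$body._shards",
  "hits": "$body.hits"
}
```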
Note the responses that we use for the `_cat` APIs are all converted into
regexes and there is no expectation that they are valid JSON.
Closes #26233