It took me quite a while of online searching and experimenting to realize the function-call asymmetry between Add and Remove on a list, like the "tags" list! I realize we cannot give examples for every single thing the user wants to do in Painless, but this is such a common use case (removing a tag from a single doc, or from a set of docs with Update-By-Query) that I believe it ought to be demonstrated immediately after the "add a tag" example. We have an example of removing an entire document field, but not of removing one element of a list (a multi-valued field).
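For illustration, a minimal sketch of the asymmetry: `add` takes the value itself, while `remove` takes an index, so you need `indexOf` to locate the element first. Index name, document id, and the exact endpoint form are illustrative and vary by version.
```
POST test/_update/1
{
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx._source.tags.remove(ctx._source.tags.indexOf(params.tag)) }",
    "params": {
      "tag": "blue"
    }
  }
}
```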
Also, a minor grammar fix: I have added an apostrophe to the word "its" in the accompanying text of the example just above.
The docs here incorrectly state that it is okay for a heap dump file to
exist when the heap dump path is configured to a fixed filename. This is
incorrect: the JVM will fail to write the heap dump if a heap dump file
already exists at the specified location (see the DumpWriter constructor
DumpWriter::DumpWriter(const char* path) in the JVM source).
The request and response classes have been extracted from `IndexUpgradeInfoAction` into top-level classes, and moved to the protocol jar. The `UpgradeActionRequired` enum is also moved.
Relates to #29827
Our rest testing framework has support for sniffing the host metadata on
startup and, before this change, it'd sniff that metadata before running
the first test. This prevents running these tests against
elasticsearch installations that won't support sniffing like Elastic
Cloud. This change allows tests to only sniff for metadata when they
encounter a test with a `node_selector`. These selectors are the things
that need the metadata anyway and they are super rare. Tests that use
these won't be able to run against installations that don't support
sniffing, but we can just skip them; they were never going to work
against Elastic Cloud anyway.
- Expose the x-pack usage docs page, which was not linked in the supported-apis page
- Make watcher a top-level dir outside of the x-pack directory
- Move the x-pack info and usage pages to miscellaneous
- Add a new Watcher category to supported-apis (they were under miscellaneous)
- Remove the x-pack prefix from watcher docs titles
This commit adds two pieces. The first is a small set of documentation providing
instructions on how to get set up to run context examples. This will require a download
similar to how Kibana works for some of the examples. The second is an ingest processor
example using the downloaded data. More examples will follow, ideally one per PR.
This also adds a set of tests to individually test each script as a unit test.
Suggestion responses were previously serialized as streamables which
made writing suggesters in plugins with custom suggestion response types
impossible. This commit makes them serialized as named writeables and
provides a facility for registering a reader for suggestion responses
when registering a suggester.
This also makes Suggestion responses abstract, requiring a suggester
implementation to provide its own types. Suggesters which do not need
anything additional to what is defined in Suggest.Suggestion should
provide a minimal subclass.
The existing plugin suggester integration tests are removed and
replaced with an equivalent implementation as an example
plugin.
On some Linux distributions tmpfiles.d cleans files and
directories under /tmp if they haven't been accessed for
10 days.
This can cause problems for ML, as ML is currently the only
component that uses the temp directory more than a few
seconds after startup. If you didn't open an ML job for
10 days and then tried to open one, the temp directory
would already have been deleted.
This commit prevents the problem occurring in the case of
Elasticsearch being managed by systemd, as systemd private
temp directories are not subject to periodic cleanup (by
default).
Additionally there are now some docs to warn people about
the risk and suggest a manual mitigation for .tar.gz users.
Rest HL client: Add get license action
Continues to use String instead of a more complex License class to
hold the license text, similarly to the put license action.
Relates #29827
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same JVM, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
Rollover should not swap aliases when `is_write_index` is set to `true`.
Instead, both the new and old indices should have the rollover alias,
with the newly created index as the new write index
Updates Rollover to leverage the ability to preserve aliases and swap which is the write index.
Historically, Rollover would swap which index had the designated alias for writing documents against. This required users to keep a separate read-alias that enabled reading against both rolled-over and newly created indices, while the write-alias was being re-assigned at every rollover.
With the ability for aliases to designate a write index, Rollover can be a bit more flexible with its use of aliases.
Updates include:
- Rollover validates that the target alias has a write index (the index that is being rolled over). This means that the restriction that aliases only point to one index is no longer necessary.
- Rollover explicitly (and atomically) swaps which index is the write index by assigning the existing index `is_write_index: false` and giving the newly created index the rollover alias with `is_write_index: true`. This is only done when `is_write_index: true` is set on the existing write index. The default behavior of removing the alias from the rolled-over index stays when `is_write_index` is not explicitly set (see the example after this list)
Relevant things that are staying the same:
- Rollover is rejected if there exist any templates that match the newly-created index and configure the rollover-alias
- I think this existed to prevent the situation where an alias pointed to two indices for a short while. Although this can technically be relaxed, the specific cases that are safe are really particular and difficult to reason about, so leaving the broad restriction in place sounds good
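As a rough sketch of the new flow (index and alias names are illustrative):
```
PUT my-index-000001
{
  "aliases": {
    "logs": { "is_write_index": true }
  }
}

POST /logs/_rollover
{
  "conditions": { "max_docs": 100000 }
}
```
After the rollover, both my-index-000001 and the newly created index carry the `logs` alias; only the new index has `is_write_index: true`.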
Stating that the Fuzzy Query generates "all possible" matching terms is misleading, given that the query's default behavior is to generate a maximum of 50 matching terms.
(cherry picked from commit 345a0071a2a41fd7f80ae9ef8a39a2cb4991aedd)
We removed the default_fs store type yet the docs still contain a
reference to it. This commit addresses that by removing this
reference, and changing a reference to this section of the docs to
instead refer to mmapfs.
In the section of the bootstrap checks docs for the maximum map count
check, we refer to the maximum size virtual memory check and explicitly call
it out as being the previous point. However, this is not correct, as the
previous point is currently the max file size check. It does make sense for
these two checks to be proximate to each other in the docs, so this commit
reorders the checks so that the maximum size virtual memory check indeed
comes before the maximum map count check. This makes the cross-reference in
the maximum map count check correct.
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The ES setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and its value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow-up.
In the HL REST client we replace the License object with a string because of
the complexity of this class. It is also not really needed on the client side, since
end-users do not interact with the license beyond passing it as a string
to the server.
Relates #29827
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is `∑(value * weight) / ∑(weight)`.
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
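A minimal request sketch (index and field names are illustrative):
```
POST /exams/_search?size=0
{
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight" }
      }
    }
  }
}
```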
Closes #15731
The notion of "quality" is an overloaded term in the search ranking evaluation
context. Its usually used to decribe certain levels of "good" vs. "bad" of a
seach result with respect to the users information need. We currently report the
result of the ranking evaluation as `quality_level` which is a bit missleading.
This changes the response parameter name to `metric_score` which fits better.
* INGEST: Extend KV Processor (#31789)
Added more capabilities supported by LS to the KV processor:
* Stripping of brackets and quotes from values (`include_brackets` in corresponding LS filter)
* Adding key prefixes
* Trimming specified chars from keys and values
Refactored the way the filter is configured to avoid conditionals during execution.
Refactored the tests a little to avoid adding more redundant getters for the new parameters.
Relates #31786
* Add documentation
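A hedged sketch of a pipeline using the new options (the field names and exact option spellings are illustrative and may differ from the final implementation):
```
PUT _ingest/pipeline/kv-example
{
  "processors": [
    {
      "kv": {
        "field": "message",
        "field_split": " ",
        "value_split": "=",
        "prefix": "parsed_",
        "trim_key": " ",
        "trim_value": "\"",
        "strip_brackets": true
      }
    }
  ]
}
```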
With this commit we remove the documentation for the setting
`xpack.monitoring.collection.indices.stats.timeout` which has already
been removed in code.
Closes #32133
Relates #32229
Currently the ranking evaluation response contains an `unknown_docs` section
for each search use case in the evaluation set. It contains document ids for
results in the search hits that currently don't have a quality rating.
This change renames it to `unrated_docs`, which better reflects its purpose.
Resolving wildcards in an aliases expression is challenging, as we may end
up with no aliases to replace the original expression with, but if we
replace it with an empty array that means _all, which is quite the opposite.
Now that we support and serialize the original requested aliases,
whenever aliases are replaced we will be able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything in case it gets empty aliases, but the original aliases
were not empty. That means that empty aliases are interpreted as _all
only if they were originally requested that way.
Relates to #31516
Throw an exception for doc['field'].value
if this document is missing a value for the field.
After deprecation changes have been backported to 6.x,
make this a default behaviour in 7.0
Closes #29286
We do not support intra-cluster connections on multiple interfaces, but the
documentation indicates that we will in future. In fact there is currently no
plan to support this, so the forward-looking documentation is misleading. This
commit
- removes the misleading sentence
- fixes that a transport profile affects outbound connections, not inbound ones
- tidies up some nearby text
Relates #29827
This implementation behaves like the current transport client in that you basically cannot configure a Watch POJO representation as an argument to the put watch API, but only a bytes reference. You can use the `WatchSourceBuilder` from the `org.elasticsearch.plugin:x-pack-core` dependency to build watches.
This commit also changes the license type to trial, so that watcher is available in high level rest client tests.
/cc @hub-cap
* Add basic support for field aliases in index mappings. (#31287)
* Allow for aliases when fetching stored fields. (#31411)
* Add tests around accessing field aliases in scripts. (#31417)
* Add documentation around field aliases. (#31538)
* Add validation for field alias mappings. (#31518)
* Return both concrete fields and aliases in DocumentFieldMappers#getMapper. (#31671)
* Make sure that field-level security is enforced when using field aliases. (#31807)
* Add more comprehensive tests for field aliases in queries + aggregations. (#31565)
* Remove the deprecated method DocumentFieldMappers#getFieldMapper. (#32148)
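As a quick illustration of the feature these PRs add (index and field names are illustrative), an alias field points at a concrete field via `path` and can then be used in queries:
```
PUT trips
{
  "mappings": {
    "_doc": {
      "properties": {
        "distance": { "type": "long" },
        "route_length_miles": { "type": "alias", "path": "distance" }
      }
    }
  }
}

GET trips/_search
{
  "query": {
    "range": { "route_length_miles": { "gte": 39 } }
  }
}
```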
Today it is unclear what guarantees are offered by the search preference
feature, and we claim a guarantee that is stronger than what we really offer:
> A custom value will be used to guarantee that the same shards will be used
> for the same custom value.
This commit clarifies this documentation.
Forward-port of #32098 to `master`.
This change adds two contexts to execute scripts against:
* SEARCH_SCRIPT: Allows running scripts in a search script context.
This context is used in the `function_score` query's script function,
script fields, script sorting and the `terms_set` query.
* FILTER_SCRIPT: Allows running scripts in a filter script context.
This context is used in the `script` query.
In both contexts an index name and a sample document need to be specified.
The document is needed to create an in-memory index that the script can
access via `doc[...]` and other notations. The index name is needed
because a mapping is needed to index the document.
Examples:
```
POST /_scripts/painless/_execute
{
"script": {
"source": "doc['field'].value.length()"
},
"context" : {
"search_script": {
"document": {
"field": "four"
},
"index": "my-index"
}
}
}
```
Returns:
```
{
"result": 4
}
```
```
POST /_scripts/painless/_execute
{
"script": {
"source": "doc['field'].value.length() <= params.max_length",
"params": {
"max_length": 4
}
},
"context" : {
"filter_script": {
"document": {
"field": "four"
},
"index": "my-index"
}
}
}
```
Returns:
```
{
"result": true
}
```
Also changed PainlessExecuteAction.TransportAction to use TransportSingleShardAction
instead of HandledAction, because now, when the search or filter contexts are used,
the request needs to be redirected to a node that has an active IndexService
for the index being referenced (a node with a shard copy for that index).
The current docs of the put-mapping Java API are broken. In their current
form, they create an index and use the whole mapping definition given as a JSON
string as the type name. Since we didn't check the index created in
IndicesDocumentationIT so far, this went unnoticed.
This change adds a test to the documentation tests to catch this error, changes the
documentation so it works correctly now, and adds input validation to
PutMappingRequest#buildFromSimplifiedDef(), which was used internally, to reject
calls where no mapping definition is given.
Closes #31906
Currently the `keep_types` token filter includes all token types specified using
its `types` parameter. Lucene's TypeTokenFilter also provides a second mode where
instead of keeping the specified tokens (include) they are filtered out
(exclude). This change exposes this option as a new `mode` parameter that can
either take the values `include` (the default, if not specified) or `exclude`.
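A minimal sketch of the new mode (analyzer and filter names are illustrative):
```
PUT /keep_types_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["remove_numbers"]
        }
      },
      "filter": {
        "remove_numbers": {
          "type": "keep_types",
          "mode": "exclude",
          "types": ["<NUM>"]
        }
      }
    }
  }
}
```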
Closes #29277
Make SnapshotInfo and CreateSnapshotResponse parsers lenient for backwards compatibility. Remove extraneous fields from CreateSnapshotRequest toXContent.
* Adds a new auto-interval date histogram
This change adds a new type of histogram aggregation called `auto_date_histogram` where you can specify the target number of buckets you require and it will find an appropriate interval for the returned buckets. The aggregation works by first collecting documents in buckets at a one-second interval; when it has created more than the target number of buckets, it merges these buckets into one-minute interval buckets and continues collecting until it reaches the target number of buckets again. It will keep merging buckets whenever it exceeds the target until either collection is finished or the highest interval (currently years) is reached. A similar process happens at reduce time. An example request appears at the end of this list.
This aggregation intentionally does not support min_doc_count, offset and extended_bounds, to keep the already complex logic from becoming more complex. The aggregation accepts sub-aggregations but will always operate in `breadth_first` mode, deferring the computation of sub-aggregations until the final buckets from the shard are known. min_doc_count is effectively hard-coded to zero, meaning that we will insert empty buckets where necessary.
Closes #9572
* Adds documentation
* Added sub aggregator test
* Fixes failing docs test
* Brings branch up to date with master changes
* trying to get tests to pass again
* Fixes multiBucketConsumer accounting
* Collects more buckets than needed on shards
This gives us more options at reduce time in terms of how we do the
final merge of the buckets to produce the final result
* Revert "Collects more buckets than needed on shards"
This reverts commit 993c782d117892af9a3c86a51921cdee630a3ac5.
* Adds ability to merge within a rounding
* Fixes non-timezone doc test failure
* Fix time zone tests
* iterates on tests
* Adds test case and documentation changes
Added some notes in the documentation about the intervals that can be
returned.
Also added a test case that utilises the merging of consecutive buckets
* Fixes performance bug
The bug meant that getting the appropriate rounding took a huge amount of time
if the range of the data was large but also sparsely populated. In
these situations the rounding would be very low, so iterating through
the rounding values from the min key to the max key took a long time
(~120 seconds in one test).
The solution is to add a rough estimate first which chooses the
rounding based just on the long values of the min and max keys alone,
but selects the rounding one lower than the one it thinks is
appropriate, so the accurate method can choose the final rounding taking
into account the fact that intervals are not always fixed length.
The commit also adds more tests
* Changes to only do complex reduction on final reduce
* merge latest with master
* correct tests and add a new test case for 10k buckets
* refactor to perform bucket number check in innerBuild
* correctly derive bucket setting, update tests to increase bucket threshold
* fix checkstyle
* address code review comments
* add documentation for default buckets
* fix typo
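The example request referenced above, as a minimal sketch (index and field names are illustrative):
```
POST /sales/_search?size=0
{
  "aggs": {
    "sales_over_time": {
      "auto_date_histogram": {
        "field": "date",
        "buckets": 10
      }
    }
  }
}
```
The aggregation picks an interval so that no more than the requested number of buckets is returned.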
This commit adds the _xpack/usage api to the high level rest client.
Currently in the transport api, the usage data is exposed in a limited
fashion, at most giving one level of helper methods for the inner keys
of data, but then exposing those subobjects as maps of objects. Rather
than making parsers for every set of usage data from each feature, this
PR exposes the entire set of usage data as a map of maps.
Because this is a static method on a public API, and one that we encourage
plugin authors to use, the method with the typo is deprecated in 6.x
rather than just renamed.
With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting `indices.breaker.total.use_real_memory`.
Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.
Relates #31767
It looks like we weren't clear on when and why you should close the high
level client, and folks were closing it after every request, which is not
efficient. This explains why you should close the client and when, so
this mistake shouldn't be as common.
Closes #32001
* Added lenient flag for synonym-tokenfilter.
Relates to #30968
* added docs for synonym-graph-tokenfilter
-- Also made lenient final
-- changed from !lenient to lenient == false
* Changes after review (1)
-- Renamed to ElasticsearchSynonymParser
-- Added explanation for ElasticsearchSynonymParser::add method
-- Changed ElasticsearchSynonymParser::logger instance to static
* Added lenient option for WordnetSynonymParser
-- also added more documentation
* Added additional documentation
* Improved documentation
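A minimal sketch of the flag in an index definition (analyzer and filter names are illustrative); with `lenient: true`, malformed synonym rules are skipped instead of failing analyzer construction:
```
PUT /test_index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "synonym": {
            "tokenizer": "standard",
            "filter": ["my_synonym"]
          }
        },
        "filter": {
          "my_synonym": {
            "type": "synonym_graph",
            "lenient": true,
            "synonyms": ["foo, bar => baz"]
          }
        }
      }
    }
  }
}
```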
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
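With this behavior, scripts should guard field access before reading `.value`; a minimal sketch (index and field names are illustrative):
```
GET /my-index/_search
{
  "script_fields": {
    "field_or_null": {
      "script": {
        "lang": "painless",
        "source": "doc['field'].size() == 0 ? null : doc['field'].value"
      }
    }
  }
}
```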
Closes #29286
This commit removes the link to an oss-MSI; there is only one version of the MSI, which includes X-Pack.
(cherry picked from commit d2e5db8a806ec8a25162f79db5209aceed4f30f7)
This is the first x-pack API we're adding to the high level REST client
so there is a lot to talk about here!
= Open source
The *client* for these APIs is open source. We're taking the previously
Elastic licensed files used for the `Request` and `Response` objects and
relicensing them under the Apache 2 license.
The implementation of these features is staying under the Elastic
license. This lines up with how the rest of the Elasticsearch language
clients work.
= Location of the new files
We're moving all of the `Request` and `Response` objects that we're
relicensing to the `x-pack/protocol` directory. We're adding a copy of
the Apache 2 license to the root of the `x-pack/protocol` directory to
line up with the language in the root `LICENSE.txt` file. All files in
this directory will have the Apache 2 license header as well. We don't
want there to be any confusion. Even though the files are under the
`x-pack` directory, they are Apache 2 licensed.
We chose this particular directory layout because it keeps the X-Pack
stuff together and makes it easier to think about.
= Location of the API in the REST client
We've been following the layout of the rest-api-spec files for other
APIs and we plan to do this for the X-Pack APIs with one exception:
we're dropping the `xpack` from the name of most of the APIs. So
`xpack.graph.explore` will become `graph().explore()` and
`xpack.license.get` will become `license().get()`.
`xpack.info` and `xpack.usage` are special here though because they
don't belong to any proper category. For now I'm just calling
`xpack.info` `xPackInfo()` and intend to call usage `xPackUsage` though
I'm not convinced that this is the final name for them. But it does get
us started.
= Jars, jars everywhere!
This change makes the `xpack:protocol` project a `compile` scoped
dependency of the `x-pack:plugin:core` and `client:rest-high-level`
projects. I intend to keep it a compile scoped dependency of
`x-pack:plugin:core` but I intend to bundle the contents of the protocol
jar into the `client:rest-high-level` jar in a follow up. This change
has grown large enough at this point.
In that followup I'll address javadoc issues as well.
= Breaking-Java
This breaks the transport client by moving a few classes around. We've
traditionally been ok with doing this to the transport client.
For historical reasons SQL restricts GROUP BY to only one field.
This commit removes the restriction and improves the test suite with
multi group by tests.
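A quick sketch of what now works (the endpoint form, index, and column names are illustrative):
```
POST /_xpack/sql?format=txt
{
  "query": "SELECT gender, languages, COUNT(*) AS c FROM emp GROUP BY gender, languages"
}
```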
Closes #31793
There have been at least two PRs trying to fix the spelling of "lazi" because it
isn't very clear from the example that the english analyzer will stem each token.
This adds a short description of the analysis process to make this clearer.
Relates to #31797
This is a followup to #31537. It makes a number of changes requested by
a review that came after the PR was merged. These are mostly cleanups
and doc improvements.
Only the shards that receive the bulk request will be affected by
`refresh`. Imagine a `_bulk?refresh=wait_for` request with three
documents in it that happen to be routed to different shards in an index
with five shards. The request will only wait for those three shards to
refresh. The other two shards that make up the index do not
participate in the `_bulk` request at all.
Relates to #31819