This adds support for verifying that snippets with the `console-result`
language are valid JSON. It also switches the response snippets on the
`docs/get` page from `js` to `console-result` which will allow clients
to provide "alternatives" for them like they can now do with
`// CONSOLE` snippets.
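For reference, a response snippet tagged this way now has to parse as JSON; something like the following (field values are illustrative):

```console-result
{
  "_index": "my-index",
  "_id": "1",
  "found": true,
  "_source": {
    "message": "hello"
  }
}
```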
This adds a pipeline aggregation that calculates the cumulative
cardinality of a field. It does this by iteratively merging in the
HLL sketch from consecutive buckets and emitting the cardinality up
to that point.
This is useful for things like finding the total "new" users that have
visited a website (as opposed to "repeat" visitors).
This is a Basic+ aggregation and adds a new Data Science plugin
to house it and future advanced analytics/data science aggregations.
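As a sketch, a request using the new aggregation might look like this (index and field names are hypothetical):

```console
GET /user_hits/_search
{
  "size": 0,
  "aggs": {
    "users_per_day": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "day" },
      "aggs": {
        "distinct_users": { "cardinality": { "field": "user_id" } },
        "total_new_users": {
          "cumulative_cardinality": { "buckets_path": "distinct_users" }
        }
      }
    }
  }
}
```

Each daily bucket then reports the cardinality of all users seen up to and including that day, so the difference between consecutive buckets is the number of new users.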
Previously, the stats API reported a progress percentage
for DF analytics tasks only while they were running and in the
`reindexing` or `analyzing` state.
This meant that when a task was `stopped`, no progress was
reported, so one could not distinguish a task that never
ran from one that completed.
In addition, there are blind spots in the progress reporting.
In particular, we do not account for when data is loaded into the
process. We also do not account for when results are written.
This commit addresses the above issues. It changes progress
to being a list of objects, each one describing the phase
and its progress as a percentage. We currently have 4 phases:
reindexing, loading_data, analyzing, writing_results.
When the task stops, progress is persisted as a document in the
state index. The stats API now reports the in-memory progress
if the task is running, or returns the persisted document
(if there is one).
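A sketch of the new progress section in the stats output (percentages are illustrative):

```js
"progress": [
  { "phase": "reindexing", "progress_percent": 100 },
  { "phase": "loading_data", "progress_percent": 100 },
  { "phase": "analyzing", "progress_percent": 43 },
  { "phase": "writing_results", "progress_percent": 0 }
]
```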
Customers occasionally discover a known behavior of Elasticsearch's pagination that does not appear to be documented. This warning is intended to make users aware of that behavior while still highlighting alternative solutions.
This change adds a new SSL context
xpack.notification.email.ssl.*
that supports the standard SSL configuration settings (truststore,
verification_mode, etc). This SSL context is used when configuring
outbound SMTP properties for watcher email notifications.
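A minimal sketch of the new settings in `elasticsearch.yml` (paths and values are illustrative):

```yaml
xpack.notification.email.ssl.truststore.path: certs/smtp-truststore.jks
xpack.notification.email.ssl.verification_mode: full
```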
Backport of: #45272
Since #45136, we use soft deletes instead of the translog in peer recovery.
There's no need to retain extra translog to increase the chance of
operation-based recoveries. This commit ignores the translog retention
policy if soft deletes are enabled so we can discard the translog more
quickly.
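For context, these are the per-index retention settings that are now ignored when soft deletes are enabled (values illustrative):

```console
PUT /my-index/_settings
{
  "index.translog.retention.size": "512mb",
  "index.translog.retention.age": "12h"
}
```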
Backport of #45473
Relates #45136
* [DOCS] Add template docs to scripts. Reorder template examples.
* Adds a 'Search template' section to the 'How to use scripts' chapter.
This links to the 'Search template' chapter for detailed info and
examples.
* Reorders and retitles several examples in the 'Search template'
chapter. This is primarily to make examples for storing, deleting, and
using search templates more prominent (a brief sketch follows below).
* Change <templatename> to <templateid>
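For reference, storing and then using a search template looks roughly like this (template id, index, and parameter names are hypothetical):

```console
POST _scripts/my-search-template
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": { "message": "{{query_string}}" }
      }
    }
  }
}

GET /my-index/_search/template
{
  "id": "my-search-template",
  "params": { "query_string": "hello world" }
}
```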
* Add is_write_index column to cat.aliases (#44772)
Aliases have had the option to set `is_write_index` since 6.4,
but the cat.aliases action was never updated.
* correct version bounds to 7.4
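With the new column, the output looks something like this (aliases and indices are illustrative):

```console
GET _cat/aliases?v&h=alias,index,is_write_index

alias index  is_write_index
logs  logs-1 false
logs  logs-2 true
```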
* Repository Cleanup Endpoint (#43900)
* Snapshot cleanup functionality via transport/REST endpoint.
* Added all the infrastructure for this with the HLRC and node client
* Made use of it in tests and resolved relevant TODO
* Added a new `Custom` CS element that tracks the cleanup logic.
Kept it similar to the delete and in-progress classes and gave it
a (for now) redundant way of handling multiple cleanups, though only one is allowed at a time.
* Used the exact same mechanism as deletes: the combination
of a CS entry and an increment of the repository state ID provides some
concurrency safety. (The initial approach of just an entry in the CS
was not enough; we must increment the repository state ID to be safe
against concurrent modifications, otherwise we run the risk of "cleaning up"
blobs that were just created without noticing.)
* Isolated the logic to the transport action class as much as I could.
It's not ideal, but we don't need to keep any state, and we do the same
for other repository operations
(like getting the detailed snapshot shard status).
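Usage is a single call against the repository (repository name is illustrative):

```console
POST /_snapshot/my_repository/_cleanup
```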
This commit updates the documentation to
- use the batch file included with the zip distribution; the exe file is included only in the MSI.
- introduce a space between the -E arguments and their values. Without a space the argument fails under PowerShell (quoting also works, but adding a space is cleaner).
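For example, the corrected invocation looks like this (setting values are illustrative):

```powershell
.\bin\elasticsearch.bat -E cluster.name=my_cluster -E node.name=node_1
```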
(cherry picked from commit 5c8dbcedb0edf3a48ca1ec52aad9ea41fa941f8a)
* Search enhancement: pinned queries (#44345)
Search enhancement: a new query type allows selected documents to be promoted above any "organic" search results.
This is the first feature in a new module `search-business-rules`, which will house licensed (non-OSS) logic for rewriting queries according to business rules.
The PinnedQueryBuilder class offers a new `pinned` query in the DSL that takes an array of promoted IDs and an "organic" query, and ensures the documents with the promoted IDs rank higher than the organic matches.
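A sketch of the new query (IDs and the organic query are illustrative):

```console
GET /_search
{
  "query": {
    "pinned": {
      "ids": ["1", "4", "100"],
      "organic": {
        "match": { "description": "brown shoes" }
      }
    }
  }
}
```

Documents with IDs 1, 4, and 100 are returned first, in that order, followed by the organic matches.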
Closes #44074
Changes the order of parameters in Geometries from (lat, lon) to (lon, lat)
and moves all Geometry classes to the
org.elasticsearch.geometry package.
Backport of #45332. Closes #45048.
* Introduce Spatial Plugin (#44389)
Introduce a skeleton Spatial plugin that holds new licensed features coming to
Geo/Spatial land!
* [GEO] Refactor DeprecatedParameters in AbstractGeometryFieldMapper (#44923)
Refactor DeprecatedParameters specific to legacy geo_shape out of
AbstractGeometryFieldMapper.TypeParser#parse.
* [SPATIAL] New ShapeFieldMapper for indexing cartesian geometries (#44980)
Add a new ShapeFieldMapper to the xpack spatial module for
indexing arbitrary cartesian geometries using a new field type called shape.
The indexing approach leverages lucene's new XYShape field type which is
backed by BKD in the same manner as LatLonShape but without the WGS84
latitude longitude restrictions. The new field mapper builds on and
extends the refactoring effort in AbstractGeometryFieldMapper and accepts
shapes in either GeoJSON or WKT format (both of which support non-geospatial
geometries).
Tests are provided in the ShapeFieldMapperTest class in the same manner
as GeoShapeFieldMapperTests and LegacyGeoShapeFieldMapperTests.
Documentation for how to use the new field type and what parameters are
accepted is included. The QueryBuilder for searching indexed shapes is
provided in a separate commit.
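A minimal mapping using the new field type might look like this (index and field names are hypothetical):

```console
PUT /my-cartesian-index
{
  "mappings": {
    "properties": {
      "geometry": { "type": "shape" }
    }
  }
}
```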
* [SPATIAL] New ShapeQueryBuilder for querying indexed cartesian geometry (#45108)
Add a new ShapeQueryBuilder to the xpack spatial module for
querying arbitrary Cartesian geometries indexed using the new shape field
type.
The query builder extends AbstractGeometryQueryBuilder and leverages the
ShapeQueryProcessor added in the previous field mapper commit.
Tests are provided in ShapeQueryTests in the same manner as
GeoShapeQueryTests and docs are updated to explain how the query works.
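A sketch of querying the field mapped above (coordinates are illustrative):

```console
GET /my-cartesian-index/_search
{
  "query": {
    "shape": {
      "geometry": {
        "shape": {
          "type": "envelope",
          "coordinates": [[0.0, 50.0], [50.0, 0.0]]
        },
        "relation": "intersects"
      }
    }
  }
}
```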
* SQL: ODBC: document newest conn string parameters
This commit adds the documentation for two newly added connection string
parameters: AutoEscapePVA and IndexIncludeFrozen.
It also removes the recommended OSes from the prerequisites list and
places the recommendation separately: unmet prerequisites will
fail the installation, while the driver will still install on OSes other
than those recommended.
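A hypothetical connection string using the two new parameters (the remaining attributes and the exact value syntax are assumptions; consult the driver docs):

```
Driver={Elasticsearch Driver};Server=localhost;Port=9200;AutoEscapePVA=true;IndexIncludeFrozen=true;
```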
* address review suggestions.
- adjust phrasing for clearer message.
(cherry picked from commit e18ac10c6e163a04f5b7cf7fa72f262882ffb711)
This commit replaces task_state and indexer_state in the
data frame _stats output with a single top level state
that combines the two. It is defined as:
- failed if what's currently reported as task_state is failed
- stopped if there is no persistent task
- Otherwise what's currently reported as indexer_state
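An illustrative sketch of the change (field values are hypothetical):

```js
// Before:
{ "task_state": "started", "indexer_state": "indexing" }
// After:
{ "state": "indexing" }
```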
Backport of #45276
Adds to the `index.blocks.read_only_allow_delete` docs the information that
this block may be added or removed automatically, and rewords the
breaking-changes docs to mention the blocks explicitly and to recommend using a
different block.
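For reference, the block can be cleared manually by resetting the setting (index name is illustrative):

```console
PUT /my-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```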
Relates #42559
* [ML][Data Frame] Add update transform api endpoint (#45154)
This adds the ability to `_update` stored data frame transforms. All mutable fields are applied when the next checkpoint starts; the exception is `description`.
This PR contains all that is necessary for this addition:
* HLRC
* Docs
* Server side
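A sketch of the new endpoint (transform id and body are illustrative):

```console
POST _data_frame/transforms/my-transform/_update
{
  "description": "A revised description for this transform"
}
```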
If a node exceeds the flood-stage disk watermark then we add a block to all of
its indices to prevent further writes as a last-ditch attempt to prevent the
node completely exhausting its disk space. However, today this block remains in
place until manually removed, and it is a source of confusion for users
who currently have ample disk space and did not even realise they nearly ran out
at some point in the past.
This commit changes our behaviour to automatically remove this block when a
node drops below the high watermark again. The expectation is that the high
watermark is some distance below the flood-stage watermark and therefore the
disk space problem is truly resolved.
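For reference, the relevant watermarks and their defaults (the gap between them is what makes the automatic release safe):

```yaml
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```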
Fixes #39334
Previously, the reindex examples did not include `_doc` as the destination type.
This would result in the reindex failing with the error "Rejecting mapping
update to [users] as the final mapping would have more than 1 type: [_doc,
user]".
Relates to #43100.
Uses JDK 11's per-socket configuration of TCP keepalive (supported on Linux and Mac), see
https://bugs.openjdk.java.net/browse/JDK-8194298, and exposes these as transport settings.
By default, these options are disabled for now (i.e. fall-back to OS behavior), but we would like
to explore whether we can enable them by default, in particular to force keepalive configurations
that are better tuned for running ES.
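A hypothetical sketch of the new settings in `elasticsearch.yml` (names follow the existing `transport.tcp.*` convention and map to the JDK's TCP_KEEPIDLE / TCP_KEEPINTERVAL / TCP_KEEPCOUNT options; values are illustrative):

```yaml
transport.tcp.keep_idle: 300
transport.tcp.keep_interval: 60
transport.tcp.keep_count: 5
```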
Sometimes the recovery in this docs test takes long enough that it is expressed
in `s` rather than `ms`. This commit relaxes the assertion to account for this.
In many cases, including migration from previous versions of data
shippers (e.g. Beats), it is useful to use ILM to manage historical
indices, which are no longer being written to. This commit adds a guide
which gives an example of how to do that.
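For instance, an existing index that is no longer written to can be pointed at a policy that simply deletes it after a retention period (policy name and age are hypothetical):

```console
PUT _ilm/policy/historical-cleanup
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```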
This adjusts the `buckets_path` parser so that pipeline aggs can
select specific buckets (via their bucket keys) instead of fetching
the entire set of buckets. This is useful for bucket_script in
particular, which might want specific buckets for calculations.
It's possible to work around this with `filter` aggs, but the workaround
is hacky and probably less performant.
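A sketch of the syntax inside a `bucket_script` (aggregation and bucket names are hypothetical):

```js
"hat_vs_bag": {
  "bucket_script": {
    "buckets_path": {
      "hats": "sale_type['hat']>sales",  // select only the 'hat' bucket
      "bags": "sale_type['bag']>sales"   // select only the 'bag' bucket
    },
    "script": "params.hats / params.bags"
  }
}
```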
- Adjusts documentation
- Adds a barebones AggregatorTestCase for bucket_script
- Tweaks AggTestCase to use getMockScriptService() for reductions and
pipelines. Previously pipelines could just pass in a script service
for testing, but this didn't work for regular aggs. The new
getMockScriptService() method fixes that issue, but needs to be used
for pipelines too. This had a knock-on effect of touching MovFn,
AvgBucket and ScriptedMetric
Introduces a shift field to the MovingFunction aggregation.
By default, shift = 0 and behavior is the same as before.
Increasing shift by 1 moves the starting window position by 1 to the right.
To include the current bucket in the window, use shift = 1.
For center alignment (n/2 values before and after the current bucket), use shift = window / 2.
For right alignment (n values after the current bucket), use shift = window.
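A sketch of the new parameter in context (agg names and window size are illustrative):

```js
"the_movavg": {
  "moving_fn": {
    "buckets_path": "the_sum",
    "window": 10,
    "shift": 5,  // window / 2, i.e. center alignment
    "script": "MovingFunctions.unweightedAvg(values)"
  }
}
```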
Today the lag detector may remove nodes from the cluster if they fail to apply
a cluster state within a reasonable timeframe, but it is rather unclear from
the default logging that this has occurred and there is very little extra
information beyond the fact that the removed node was lagging. Moreover the
only forewarning that the lag detector might be invoked is a message indicating
that cluster state publication took unreasonably long, which does not contain
enough information to investigate the problem further.
This commit adds a good deal more detail to make the issues of slow nodes more
prominent:
- after 10 seconds (by default) we log an INFO message indicating that a
publication is still waiting for responses from some nodes, including the
identities of the problematic nodes.
- when the publication times out after 30 seconds (by default) we log a WARN
message identifying the nodes that are still pending.
- the lag detector logs a more detailed warning when a fatally-lagging node is
detected.
- if applying a cluster state takes too long then the cluster applier service
logs a breakdown of all the tasks it ran as part of that process.
Most of the circuit breaker settings are dynamically configurable.
However, `indices.breaker.total.use_real_memory` is not. With this
commit we add a clarifying note that this specific setting is static.
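For example, the dynamic limits can be changed at runtime, whereas `indices.breaker.total.use_real_memory` can only be set in `elasticsearch.yml` (value illustrative):

```console
PUT /_cluster/settings
{
  "transient": {
    "indices.breaker.total.limit": "70%"
  }
}
```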
Closes #44974