Fix a deprecation warning that wasn't rendering correctly in
asciidoctor. This one needed to be explicitly marked as an inline macro
because it is on its own line and it needed to have its text escaped
because it contained a `,`. It was also missing explanatory text for
what the setting was.
Currently enabling profiling disables top-hits optimizations, which is
unfortunate: it would be nice to be able to notice the difference in method
counts and timings depending on whether total hit counts are requested.
This PR makes a few clarifications to the docs for the `enabled` setting:
- Replace references to 'mapping type' with 'mapping' or 'mapping definition'.
- In code examples, clarify that the disabled fields have type `object`.
- Add a section on how disabled fields can hold non-object data.
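As a rough illustration of the clarified examples (the index and field names here are placeholders), a disabled field is declared as an `object` with `enabled: false` and can then hold arbitrary JSON that is stored in `_source` but neither indexed nor searchable:
```
PUT my-index
{
  "mappings": {
    "properties": {
      "session_data": {
        "type": "object",
        "enabled": false
      }
    }
  }
}
```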
This section should be at the same sub-level as other sections in the
auto date-histogram docs, otherwise it is rendered on to another page
and is confusing for users to understand what it's in reference to.
The following phrase causes confusion:
> Alternatively the IP addresses or hostnames (if node name defaults to the
> host name) can be used.
This change clarifies the conditions under which you can use a hostname, and
adds an anchor to the note introduced in (#41137) so we can link directly to it
in conversations with users.
Fixes rendering the `version_qualified` attribute in the docs. This
attribute includes `-SNAPSHOT` but does not include `-alpha` or `-beta`
and represents the `version` field as returned by `GET /` or
`GET /_cat/plugins`. Without this change we just drop the entire line on
the floor rather than render it so the output looks bad.
This helps avoid memory issues when computing deep sub-aggregations. Because it
should be rare to use sub-aggregations with significant terms, we opted to always
choose breadth first as opposed to exposing a `collect_mode` option.
Closes #28652.
Added documentation for the node repurpose tool and included documentation on how to repurpose nodes safely. Adjusted the order of tools in the `elasticsearch-node` docs since the repurpose tool is the most likely to be used.
Co-Authored-By: David Turner <david.turner@elastic.co>
The explanation given in the completion suggester documentation why we use the
"simple" analyzer as the default is no longer valid. Since we still use "simple"
as the default, we should just delete the explanation that doesn't fit anymore.
Closes #36715
* [DOCS] Fix broken link to Elasticsearch Docker source code
* [DOCS] Link to Dockerfile in elastic/elasticsearch repo
* [DOCS] Link to Docker source files in elastic/elasticsearch repo
* [DOCS] Added settings page for ILM.
* [DOCS] Adding ILM settings file
* [DOCS] Moved the ILM settings to a separate section
* [DOCS] Linked to the rollover docs.
* [DOCS] Tweaked the "required" wording.
This commit is a correction of a doc bug in the docs for the ingest
date-index-name processor. The correct pattern is
`yyyy-MM-dd'T'HH:mm:ss.SSSXX`. This is due to the transition from Joda
time to Java time, where `Z` does not mean the same thing between the two.
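A hedged sketch of how the corrected pattern might be used (pipeline, index prefix and field names are invented for illustration):
```
PUT _ingest/pipeline/monthly-index
{
  "processors": [
    {
      "date_index_name": {
        "field": "date1",
        "index_name_prefix": "my-index-",
        "date_rounding": "M",
        "date_formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSXX"]
      }
    }
  ]
}
```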
The command needs `?v` so the user can see the column headers. Otherwise the instructions in the note about checking the `init` and `relo` columns don't make sense.
This commit fixes a problem with BWC that was brought up in #40511. A
newer version of the code was emitting a new value for an enum to an
older version, and the older version could not handle that. It caused
the response to error out. The MainResponse is now relaxed and will accept
whatever values the server exposes, holding most of them as Strings
instead of complex objects.
Fixes #40511
We generate two pages with "funny" names:
* _changing_the_client_8217_s_initialization_code.html
* _changing_the_application_8217_s_code.html
The leading `_` comes from us not specifying the name of the page. The
`8217` comes about because of the single quote character. This is a
funny name, but it is the name that we have so we shouldn't change it
without putting in a redirect.
We're looking at switching these docs from being built with the
no-longer-maintained AsciiDoc project to being built with the
actively-maintained Asciidoctor project. Asciidoctor doesn't include the
`8217`s in the generated ids. That is *better*, but we don't really want
to change the pages. Ultimately we'd prefer none of our pages start with
`_`, but that is a problem for a different time.
Anyway, this pins the ids to their "funny" id so it won't change when we
switch to Asciidoctor. We'll remove it later, when we have more fine
control of our redirects.
Drops the inline callouts from the painless reference book. These
callouts are incompatible with Asciidoctor and we'd very much like to
switch to Asciidoctor for building this book, partially because
Asciidoctor is actively developed and AsciiDoc is not, and partially
because it builds the book three times faster.
This change rejects an illegal combination of flush parameters where
force is true, but wait_if_ongoing is false. This combination is trappy
and should be forbidden.
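A sketch of the combinations (index name arbitrary); the first request is now rejected with a validation error:
```
# now rejected: forcing a flush while not waiting for an ongoing one
POST /my-index/_flush?force=true&wait_if_ongoing=false

# still allowed
POST /my-index/_flush?force=true&wait_if_ongoing=true
```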
Closes #36342
- Added square brackets for the optional argument of precision
- Fixed a character to lower case after a comma
(cherry picked from commit d2f6f3b9ce36875e2eb6145c50464b4d72f2b1df)
Now that the `TIME` SQL data type has been introduced, implement the
`CURRENT_TIME`/`CURTIME` functions similarly to `CURRENT_TIMESTAMP`,
returning the system's current time (only, without the date part).
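A rough usage sketch via the SQL endpoint (assuming, as with `CURRENT_TIMESTAMP`, that the no-parentheses form is accepted; aliases are arbitrary):
```
POST /_sql?format=txt
{
  "query": "SELECT CURRENT_TIME AS ct, CURTIME() AS ct2"
}
```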
Closes: #40468
(cherry picked from commit 9feede781409d0e264ce45951a25b28ff129b187)
* Add release notes for 7.0.0-rc1.
* [DOCS] Fixes broken link to breaking changes
* [DOCS] Removed old ML PRs; edited titles
* Remove superseded PR.
* Clean up Lucene upgrade PRs/issues.
A full format for a DATETIME would be:
`2019-03-30T10:20:30.123+10:00` which is 29 chars long.
For DATE a full format would be: `2019-03-30T00:00:00.000+10:00`
which is also 29 chars long.
(cherry picked from commit 6be83964ed025528778bca8d35692762e166983b)
Moves the id of the preface in the java-api so it is compatible with
both AsciiDoc and Asciidoctor. As it stands now we apply the id that we
want for the preface to the book itself which is strange and only works
with AsciiDoc.
Support ANSI SQL's TIME type by introducing a runtime-only
ES SQL time type.
Closes: #38174
(cherry picked from commit 046ccd4cf0a251b2a3ddff6b072ab539a6711900)
This updates the casting table to reflect the recent changes for casting consistency in Painless. This also adds a small section on explicitly casting a character to a String which has always been allowed but undocumented.
There is a single example in the Java API docs that contains an inline
callout that is incompatible with Asciidoctor:
```
client.prepareUpdate("ttl", "doc", "1")
.setScript(new Script(
"ctx._source.gender = \"male\"" <1> , ScriptService.ScriptType.INLINE, null, null))
.get();
```
This rewrites the example to use an Asciidoctor compatible end of line
callout. It also looks nicer to me because it fits better on the page.
```
client.prepareUpdate("ttl", "doc", "1")
.setScript(new Script(
"ctx._source.gender = \"male\"", <1>
ScriptService.ScriptType.INLINE, null, null))
.get();
```
We no longer need to mention soft deletes in the getting started guide
now that retention leases exist and default to 12h. This commit removes
mention of soft deletes from the getting started guide, to simplify that
content.
Currently, the docs correctly state that using `now` in range queries will not
be affected by the `time_zone` parameter. However, using date math roundings
like e.g. `now/d` will be affected by the `time_zone`. Adding this example
because it seems to be a frequently asked question and source of confusion.
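A sketch of the kind of example being added (field name and offset are arbitrary): the literal `now` is unaffected by `time_zone`, but the `/d` rounding is applied in the given zone:
```
GET /_search
{
  "query": {
    "range": {
      "timestamp": {
        "gte": "now/d",
        "time_zone": "+01:00"
      }
    }
  }
}
```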
Relates to #40581
Specifying an inline script in an "inline" field
was deprecated in 5.x. The new field name is
"source".
(Since 6.x still accepts "inline" I will only backport
this docs change as far as 7.0.)
I have been hitting the suite timeout on `DocsClientYamlTestSuiteIT`.
As far as I can see, the docs tests are taking quite a while; I assume
it's because more and more docs snippets get added over time, which
means more tests. The current suite timeout is the default 20 minutes.
It takes me just a little less than 20 minutes to run these on my
laptop. On my CI, I end up hitting the suite timeout. Hereby I propose
that we increase the suite timeout to 30 minutes.
This commit changes the note in docs about required java version to note
the existence of the bundled jdk and how to bring your own java. It also
reorganizes the zip/targz docs as zip is no longer suitable on
Linux/MacOS.
The basic models `b`, `de`, `p` and the after effect `no`
are not available anymore in Lucene 8 but they are still
listed in the 7.x documentation. This change removes these
references, which should also be listed in the breaking changes
of ES 7.0.
Closes #40264
Improve the documentation of the `--pass` parameter of `elasticsearch-certutil`
Backport of: #40137
Co-Authored-By: Diego Cardozo Sandrim <diegocsandrim@users.noreply.github.com>
Co-Authored-By: Vigneash Sundar <vikene@users.noreply.github.com>
To make the `script_score` query have the same features
as the `function_score` query, we need to add a `randomScore`
function.
This function produces different
random scores on different index shards.
It is also able to produce random scores
based on the internal Lucene Document Ids.
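A hedged sketch of using the function inside a `script_score` query (the seed and field are arbitrary; the exact signature should be checked against the final docs):
```
GET /_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "randomScore(100, '_seq_no')"
      }
    }
  }
}
```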
In #33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since #32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.
Adds the search_as_you_type field type that acts like a text field optimized
for as-you-type search completion. It creates a couple of subfields that analyze
the indexed terms as shingles, against which full terms are queried, and a
prefix subfield that analyzes terms using the largest shingle size and
edge-ngrams, against which partial terms are queried.
Adds a match_bool_prefix query type that creates a boolean clause of a term
query for each term except the last, for which a boolean clause with a prefix
query is created.
The match_bool_prefix query is the recommended way of querying a search_as_you_type
field, which will boil down to term queries for each shingle of the input
text on the appropriate shingle field, and the final (possibly partial) term
as a term query on the prefix field. However, this field type also supports
phrase and phrase prefix queries.
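A hedged sketch combining the two features (index and field names are placeholders):
```
PUT my-index
{
  "mappings": {
    "properties": {
      "title": { "type": "search_as_you_type" }
    }
  }
}

GET my-index/_search
{
  "query": {
    "match_bool_prefix": {
      "title": "quick brown f"
    }
  }
}
```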
Today the upgrade assistant identifies that discovery configuration is required
in production mode, but links to docs saying to add one of these settings:
- `discovery.seed_hosts`
- `discovery.seed_providers`
- `cluster.initial_master_nodes`
However these settings do not exist in 6.7 so this is unhelpful advice. This
commit adjusts the docs to give more useful advice to users arriving at this
page from the upgrade assistant.
Relates https://discuss.elastic.co/t/174102
* Document MATCH and QUERY function predicates.
* Polish the functions pages and add a list of functions to the main Functions & Operators page.
(cherry picked from commit 4cec0ae1b962ec7ea011a290aec72740386eb808)
* document ODBC's extra connection string parameters
Document the connection string parameters that are currently not
configurable from the GUI.
Add a note about a possible SmartScreen warning on driver MSI
installation.
Add a note about the TLS certificate file not being supported as bundled
or password-protected.
* rephrasing and restructuring the list of params
- addressing PR review notes
* rephrasings and reference linking
- addressing PR review notes
(cherry picked from commit ca90190226c0a8c63f51c2603b27df299ce8d503)
The current documentation recommends disabling allocation of all shards. This prevents shards of
new indices from being allocated as well. That means that during a rolling upgrade, operations like
rollover and index shrinking will not operate correctly. This, in turn, can cause issues for data ingestion
and ILM.
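A hedged sketch of one way the recommendation could be adjusted, disabling replica allocation only rather than all allocation (assuming the `primaries` value of `cluster.routing.allocation.enable`):
```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
```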
* [ML] Add data frame task state object and field
* A new state item is added so that the overall task state can be
accounted for
* A new FAILED state and reason have been added as well so that failures
can be shown to the user for optional correction
* Addressing PR comments
* adjusting after master merge
* addressing pr comment
* Adjusting auditor usage with failure state
* Refactor, renamed state items to task_state and indexer_state
* Adding todo and removing redundant auditor call
* Address HLRC changes and PR comment
* adjusting hlrc IT test
To avoid having to specify each spec by hand (which can miss specs to be
added), the test infrastructure now performs classpath discovery so that
each spec added is automatically considered.
Relates #40358
(cherry picked from commit d0f60b4425c731509aa8ca765d55f563f866ef90)
* [ML] make source and dest objects in the transform config
* addressing PR comments
* Fixing compilation post merge
* adding comment for Arrays.hashCode
* addressing changes for moving dest to object
* fixing data_frame yml tests
* fixing API test
Extend CAST to support all data types notations (whether SQL or ES
specific)
Fix #40282
(cherry picked from commit eb2ee8a344da946920598839a5db76c8bb9bc3fe)
This commit expands the ccr overview page to include more information
about the lifecycle of following an index. It adds information linking
to the remote recovery documentation, and describes how an index can
fall behind and how to fix it when this happens.
This is the equivalent of the `field_masking_span` query, allowing users to
merge intervals from multiple fields - for example, to search for stemmed tokens
near unstemmed tokens.
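A hedged sketch (field names invented) of pulling an interval from another field via `use_field`, e.g. matching a stemmed token near an unstemmed one:
```
GET /_search
{
  "query": {
    "intervals": {
      "text": {
        "all_of": {
          "ordered": true,
          "intervals": [
            { "match": { "query": "running" } },
            { "match": { "query": "jump", "use_field": "text.stemmed" } }
          ]
        }
      }
    }
  }
}
```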
This change adds an option to convert a `date` field to nanoseconds resolution
and a `date_nanos` field to millisecond resolution when sorting.
The resolution of the sort can be set using the `numeric_type` option of the
field sort builder. The conversion is done at the shard level and is restricted
to dates from 1970 to 2262 for the nanoseconds resolution in order to avoid
numeric overflow.
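A hedged example of the option (index and field names are placeholders; assuming `date_nanos` is one of the accepted `numeric_type` values for date fields):
```
GET /index_millis,index_nanos/_search
{
  "sort": [
    {
      "timestamp": {
        "order": "desc",
        "numeric_type": "date_nanos"
      }
    }
  ]
}
```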
The new cluster coordination subsystem introduced in 7.0 will only keep an
unresponsive node in the cluster for 30 seconds, whereas in earlier versions it
might have remained in the cluster for 90 seconds. This commit adds a note to
the migration documentation to that effect.
This commit clarifies how the gateway selection works when configuring
remote clusters for CCR or CCS. Specifically, it clarifies compatibility
between different versions which is a very common question.
The Migration Assistance API has been functionally replaced by the
Deprecation Info API, and the Migration Upgrade API is not used for the
transition from ES 6.x to 7.x, and does not need to be kept around to
repair indices that were not properly upgraded before upgrading the
cluster, as was the case in 6.
For cases where fields can have multiple values, allow the behavior to be
customized through a dedicated configuration field.
By default this will be enabled on the drivers so that existing datasets
work instead of throwing an exception.
For regular SQL usage, the behavior is false so that the user is aware
of the underlying data.
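A hedged sketch of opting into leniency for regular SQL usage (assuming the request-level flag is exposed as `field_multi_value_leniency`; the index and columns are invented):
```
POST /_sql?format=txt
{
  "query": "SELECT name, tags FROM my_index",
  "field_multi_value_leniency": true
}
```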
Fix #39700
(cherry picked from commit 2b351571961f172fd59290ee079126bbd081ceaf)
The `GET /_cluster/state` API returns an internal representation of the cluster
state that does change from version to version. It's useful for debugging, but
it is not intended for regular use by clients.
This change adjusts the documentation of `GET /_cluster/state` to clarify that
this API yields an internal representation that should not be expected to
remain stable between versions.
Relates #40061, #40016
This change adds an option to the `FieldSortBuilder` that allows the type
of a numeric field to be transformed into another. Possible values for this option are `long`, which transforms
the source field into an integer, and `double`, which transforms the source field into a floating point.
This new option is useful for cross-index search when the sort field is mapped differently on some
indices. For instance if a field is mapped as a floating point in one index and as an integer in another
it is possible to align the type for both indices using the `numeric_type` option:
```
{
"sort": {
"field": "my_field",
"numeric_type": "double" <1>
}
}
```
<1> Ensure that values for this field are transformed to a floating point if needed.
Currently if a field alias is updated, any percolator queries that contain the
alias will still refer to its old target. This PR documents the issue while we
look into addressing it.
Relates to #37212.
`document_type` is the type to use for parsing the document to percolate, which
is already optional and deprecated. However `percolate` queries also have the
ability to percolate existing documents, identified by an index, an id and a
type. This change makes the latter optional and deprecated.
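A hedged sketch of percolating an existing document with the now-optional type omitted (index, field and id are placeholders):
```
GET /queries-index/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "index": "my-docs",
      "id": "1"
    }
  }
}
```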
Closes #39963
This commit, mostly authored by @DaveCTurner,
adds documentation for elasticsearch-node tool #37696.
(cherry picked from commit 09425d5a5158c2d3fdad794411b3bbc4bba47b15)
Lucene 8.0 includes a [change](https://issues.apache.org/jira/browse/LUCENE-8635)
that moves the terms index off-heap for all fields but ID fields. I'm
including this in the migration notes so that users who have queries that match
lots of terms won't be surprised in case of slowdown.
- new `rank_feature`/`script_score` queries
- new `index_phrases`/`index_prefixes` options
- disabling `_field_names` doesn't help anymore
- adaptive replica selection is on by default
We were missing release notes for 7.0.0-alpha1. I generated them by running
the release-notes script with a quick hack that filtered out pull requests
that had been closed on or after 2018-11-15.
Some breaking changes had been documented in the release notes rather than
the migration guide so I moved them.
Prior to this commit (and after 6.5.0), if an ingest node changes
the _index in a pipeline, the original target index would be created.
For daily indexes this could create an extra, empty index per day.
This commit changes the TransportBulkAction to execute the ingest node
pipeline before attempting to create the index. This ensures that the
only index created is the original or one set by the ingest node pipeline.
This was the execution order prior to 6.5.0 (#32786).
The execution order was changed in 6.5 to better support default pipelines.
Specifically the execution order was changed to be able to read the settings
from the index meta data. This commit also includes a change in logic such
that if the target index does not exist when ingest node pipeline runs, it
will now pull the default pipeline (if one exists) from the settings of the
best matching index template.
Relates #32786
Relates #32758
Closes #36545
This commit introduces the forget follower API. This API is needed in cases where
unfollowing a following index fails to remove the shard history retention leases
on the leader index. This can happen explicitly through user action, or
implicitly through an index managed by ILM. When this occurs, history will be
retained longer than necessary. While the retention lease will eventually
expire, it can be expensive to allow history to persist for that long, and also
prevent ILM from performing actions like shrink on the leader index. As such, we
introduce an API to allow for manual removal of the shard history retention
leases in this case.
This change does the following:
1. Makes the per-node setting xpack.ml.max_open_jobs
into a cluster-wide dynamic setting
2. Changes the job node selection to continue to use the
per-node attributes storing the maximum number of open
jobs if any node in the cluster is older than 7.1, and
use the dynamic cluster-wide setting if all nodes are on
7.1 or later
3. Changes the docs to reflect this
4. Changes the thread pools for native process communication
from fixed size to scaling, to support the dynamic nature
of xpack.ml.max_open_jobs
5. Renames the autodetect thread pool to the job comms
thread pool to make clear that it will be used for other
types of ML jobs (data frame analytics in particular)
Backport of #39320
This is related to #35975. It adds documentation on the remote recovery
process. Additionally, it adds documentation about the various settings
that can impact the process.
* SYS COLUMNS will skip UNSUPPORTED field types in ODBC and JDBC, as well.
NESTED and OBJECT types were already skipped in ODBC mode; now they are
skipped in JDBC mode, as well.
(cherry picked from commit 9e0df64b2d36c9069dfa506570468f0522c86417)
These docs are out of date, now that we override the infinite DNS cache
within Elasticsearch. This commit completely removes this content, as
specific guidance is no longer needed here.
* Add "columnar" option for REST requests (but be lenient for non-"plain"
modes) for json, yaml, smile and cbor formats.
* Updated documentation
(cherry picked from commit 5b7e0de237fb514d14a61a347bc669d4b4adbe56)
While running these commands from an alias, one can face issues using kill `cat pid`. In some situations, the more compact:
```
pkill -F /var/run/myProcess.pid
```
is the way to go.
* Remove Hipchat support from Watcher (#39199)
Hipchat has been shut down and has previously been deprecated in
Watcher (#39160), therefore we should remove support for these actions.
* Add migrate note
This fixes a bug in the sensing of the current OS family in the test cluster
formation code. Previously all builds would assume every environment
was windows and would jump to using the windows zip build. This fixes
the OS sensing code as well as updates some tests to account for
different build flavors.
Backport of #38457
Currently remote compression and ping schedule settings are dynamic.
However, we do not listen for changes. This commit adds listeners for
changes to those two settings. Additionally, when those settings change
we now close existing connections and open new ones with the settings
applied.
Fixes #37201.
In #30209 we deprecated the camel case `nGram` filter name in favour of `ngram` and
did the same for `edgeNGram` in favour of `edge_ngram`, and we are removing those names in
8.0. This change disallows using the deprecated names for new indices created in 7.0 by
throwing an error if these filters are used.
Relates to #38911
* missing 'test2' index example (#39055)
If I got the idea of aliases properly, I think that the index "test2" should be
referenced in the example above the following sentence:
" ... we associate the alias `alias1` to both `test` and `test2` ... "
* add PUT test2
* Update aliases.asciidoc
swap which is write/read
Co-authored-by: Guilherme Ferreira <guilhermeaferreira_t@yahoo.com.br>
Rollup jobs should be stopped + deleted before the indices are removed.
It's possible for an active rollup job to issue a bulk request just as the test
ends and the cleanup code deletes all indices. The in-flight bulk
request will then stall + error because the index no longer exists...
but this process might take longer than the StopRollup timeout.
Which means the test fails, and often fails several other tests since
the job is still active (e.g. other tests cannot create the same-named
job, or fail to stop the job in their cleanup because it's still stalled).
This tends to knock over several tests before the bulk finally times
out and the job shuts down.
Instead, we need to simply stop jobs first. Inflight bulks will resolve
quickly, and we can carry on with deleting indices after the jobs are
confirmed inactive.
stop-job.asciidoc tended to trigger this issue because it executed
an async stop API and then exited, which set up the above situation. It
can and did happen with other tests though. As an extra precaution,
the doc test was modified to substitute in wait_for_completion
to help head off these issues too.
This defaults to "true" (current behavior) and will throw an exception
if there is a property that cannot be recognized. If "false", it will
ignore anything unrecognizable.
(cherry picked from commit 38fbf9792bcf4fe66bb3f17589e5fe6d29748d07)
- Notes that you can adjust the `s3.client.*.endpoint` setting to point to a
repository held on an S3-compatible service.
- Notes that the default is `s3.amazonaws.com` and not to auto-detect the
endpoint.
- Reformats docs to width.
Closes #35925
* Fix #38623 remove xpack namespace REST API
Except for the xpack.usage and xpack.info APIs, this moves the last remaining APIs out of the xpack namespace
* rename xpack APIs inside the files as well
* updated yaml tests references to xpack namespaced APIs
* update callsApi calls in the IT subclasses
* make sure docs testing does not use xpack namespaced APIs
* fix leftover xpack namespaced method names in docs/build.gradle
* found another leftover reference
(cherry picked from commit ccb5d934363c37506b76119ac050a254fa80b5e7)
The data frame plugin allows users to create feature indexes by pivoting a source index. In a
nutshell this can be understood as reindex supporting aggregations, or as similar to the so-called
entity-centric indexing.
Full history is provided in: feature/data-frame-transforms
`SearchShardIterator` inherits its `compareTo` implementation from `PlainShardIterator`. That is good in most cases, as such comparisons are based on the shard id, which is unique even when searching against indices with the same names across multiple clusters (thanks to the index uuid being different). If, however, the same cluster is registered multiple times with different aliases, the shard id is exactly the same, hence remote results will be returned before local ones with the same shard id. That is because remote iterators are added before local ones, and we use a stable sorting method in the `GroupShardIterators` constructor.
This PR enhances `compareTo` for `SearchShardIterator` to tie break on cluster alias and introduces consistent `equals` and `hashcode` methods. This allows to remove a TODO in `SearchResponseMerger` which otherwise has to handle this special case specifically. Also, while at it I added missing tests around equals/hashcode and compareTo and expanded existing ones.
The get-ml-info API documentation tested that the
response shows that ML's `upgrade_mode` was false.
Because other tests may be running in parallel or may not clean
themselves up, this may not be
guaranteed. Since the actual value here is not of importance,
this commit relaxes the requirement that upgrade_mode be
static.
Forward port of https://github.com/elastic/elasticsearch/pull/38757
This change reverts the initial 7.0 commits and replaces them
with the 6.7 variant that still allows for the ecs flag.
This commit differs from the 6.7 variants in that ecs flag will
now default to true.
6.7: `ecs` : default `false`
7.x: `ecs` : default `true`
8.0: no option, but behaves as `true`
* Revert "Ingest node - user agent, move device to an object (#38115)"
This reverts commit 5b008a34aa.
* Revert "Add ECS schema for user-agent ingest processor (#37727) (#37984)"
This reverts commit cac6b8e06f.
* cherry-pick 5dfe1935345da3799931fd4a3ebe0b6aa9c17f57
Add ECS schema for user-agent ingest processor (#37727)
* cherry-pick ec8ddc890a34853ee8db6af66f608b0ad0cd1099
Ingest node - user agent, move device to an object (#38115) (#38121)
* cherry-pick f63cbdb9b426ba24ee4d987ca767ca05a22f2fbb (with manual merge fixes)
Dep. check for ECS changes to User Agent processor (#38362)
* make true the default for the ecs option, and update 7.0 references and tests
`<expression>::<dataType>` is a simplified alternative syntax to
`CAST(<expression> AS <dataType>)` which exists in PostgreSQL and
provides an improved user experience and possibly more compact
SQL queries.
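A rough sketch of the shorthand next to the equivalent `CAST` (the column and index names are illustrative):
```
POST /_sql?format=txt
{
  "query": "SELECT birth_date::DATE AS bd FROM emp"
}

POST /_sql?format=txt
{
  "query": "SELECT CAST(birth_date AS DATE) AS bd FROM emp"
}
```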
Fixes: #38717
Make substitution of \u200C with a space explicit
The problem is that this symbol `\u200C` in a test string
**SHOULD** be substituted with a space in the rebuilt Persian analyzer, but it is not.
Correcting this line `"mappings": [ "\\u200C=> "] <1>` to
`"mappings": [ "\\u200C=>\\u0020"] <1>` solves the problem.
This change explicitly says to substitute ZWNJ with a space.
Closes #38188
Reindex from remote now supports configurable SSL/TLS (node level)
settings. This change adds documentation relating to those settings
Relates: #37527
Backport of: #38486
Now that ML configurations are stored in the .ml-config
index rather than in cluster state there is a possibility
that some users may try to add configurations directly to
the index. Allowing this creates a variety of problems
including possible data exfiltration attacks (depending on
how security is set up), so this commit adds warnings
against allowing writes to the .ml-config index other than
via the ML APIs.
Backport of #38509
This commit adds the 7.1 version constant to the 7.x branch.
Co-authored-by: Andy Bristol <andy.bristol@elastic.co>
Co-authored-by: Tim Brooks <tim@uncontended.net>
Co-authored-by: Christoph Büscher <cbuescher@posteo.de>
Co-authored-by: Luca Cavanna <javanna@users.noreply.github.com>
Co-authored-by: markharwood <markharwood@gmail.com>
Co-authored-by: Ioannis Kakavas <ioannis@elastic.co>
Co-authored-by: Nhat Nguyen <nhat.nguyen@elastic.co>
Co-authored-by: David Roberts <dave.roberts@elastic.co>
Co-authored-by: Jason Tedor <jason@tedor.me>
Co-authored-by: Alpar Torok <torokalpar@gmail.com>
Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
Co-authored-by: Tim Vernum <tim@adjective.org>
Co-authored-by: Albert Zaharovits <albert.zaharovits@gmail.com>
In #38333 and #38350 we moved away from the `discovery.zen` settings namespace
since these settings have an effect even though Zen Discovery itself is being
phased out. This change aligns the documentation and the names of related
classes and methods with the newly-introduced naming conventions.
We have had various reports of problems caused by the maxRetryTimeout
setting in the low-level REST client. This setting was initially added
in an attempt to not have requests go through retries if the request
already took longer than the provided timeout.
The implementation was problematic though as such timeout would also
expire in the first request attempt (see #31834), would leave the
request executing after expiration causing memory leaks (see #33342),
and would not take into account the http client internal queuing (see #25951).
Given all these issues, it seems that this custom timeout mechanism
gives little benefit while causing a lot of harm. We should rather rely
on connect and socket timeout exposed by the underlying http client
and accept that a request can overall take longer than the configured
timeout, which is the case even with a single retry anyways.
This commit removes the `maxRetryTimeout` setting and all of its usages.
Elasticsearch has long [supported](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#index-versioning) compare and set (a.k.a. optimistic concurrency control) operations using internal document versioning. Sadly that approach is flawed and can sometimes do the wrong thing. Here's the relevant excerpt from the resiliency status page:
> When a primary has been partitioned away from the cluster there is a short period of time until it detects this. During that time it will continue indexing writes locally, thereby updating document versions. When it tries to replicate the operation, however, it will discover that it is partitioned away. It won’t acknowledge the write and will wait until the partition is resolved to negotiate with the master on how to proceed. The master will decide to either fail any replicas which failed to index the operations on the primary or tell the primary that it has to step down because a new primary has been chosen in the meantime. Since the old primary has already written documents, clients may already have read from the old primary before it shuts itself down. The version numbers of these reads may not be unique if the new primary has already accepted writes for the same document
We recently [introduced](https://www.elastic.co/guide/en/elasticsearch/reference/6.x/optimistic-concurrency-control.html) a new sequence number based approach that doesn't suffer from this dirty reads problem.
This commit removes support for internal versioning as a concurrency control mechanism in favor of the sequence number approach.
Relates to #1078
`CreateIndexRequest#source(Map<String, Object>, ... )`, which is used when
deserializing index creation requests, accidentally accepts mappings that are
nested twice under the type key (as described in the bug report #38266).
This in turn causes us to be too lenient in parsing typeless mappings. In
particular, we accept the following index creation request, even though it
should not contain the type key `_doc`:
```
PUT index?include_type_name=false
{
"mappings": {
"_doc": {
"properties": { ... }
}
}
}
```
There is a similar issue for both 'put templates' and 'put mappings' requests
as well.
This PR makes the minimal changes to detect and reject these typed mappings in
requests. It does not address #38266 generally, or attempt a larger refactor
around types in these server-side requests, as I think this should be done at a
later time.
With this change we no longer support pluggable discovery implementations. No
known implementations of `DiscoveryPlugin` actually override this method, so in
practice this should have no effect on the wider world. However, we were using
this rather extensively in tests to provide the `test-zen` discovery type. We
no longer need a separate discovery type for tests as we no longer need to
customise its behaviour.
Relates #38410
Renames the following settings to remove the mention of `zen` in their names:
- `discovery.zen.hosts_provider` -> `discovery.seed_providers`
- `discovery.zen.ping.unicast.concurrent_connects` -> `discovery.seed_resolver.max_concurrent_resolvers`
- `discovery.zen.ping.unicast.hosts.resolve_timeout` -> `discovery.seed_resolver.timeout`
- `discovery.zen.ping.unicast.hosts` -> `discovery.seed_addresses`
X-Pack security supports built-in authentication service
`token-service` that allows access tokens to be used to
access Elasticsearch without using Basic authentication.
The tokens are generated by `token-service` based on
OAuth2 spec. The access token is a short-lived token
(defaults to 20m) and the refresh token has a lifetime of 24 hours,
making them unsuitable for long-lived or recurring tasks where
the system might go offline thereby failing refresh of tokens.
This commit introduces a built-in authentication service
`api-key-service` that adds support for long-lived tokens aka API
keys to access Elasticsearch. The `api-key-service` is consulted
after `token-service` in the authentication chain. By default,
if TLS is enabled then `api-key-service` is also enabled.
The service can be disabled using the configuration setting.
The API keys:-
- by default do not have an expiration but expiration can be
configured where the API keys need to be expired after a
certain amount of time.
- when generated will keep authentication information of the user that
generated them.
- can be defined with a role describing the privileges for accessing
Elasticsearch and will be limited by the role of the user that
generated them
- can be invalidated via invalidation API
- information can be retrieved via a get API
- that have been expired or invalidated will be retained for 1 week
before being deleted. The expired API keys remover task handles this.
Following are the API key management APIs:-
1. Create API Key - `PUT/POST /_security/api_key`
2. Get API key(s) - `GET /_security/api_key`
3. Invalidate API Key(s) `DELETE /_security/api_key`
The API keys can be used to access Elasticsearch using `Authorization`
header, where the auth scheme is `ApiKey` and the credentials are the
base64 encoding of the API key id and the API key separated by a colon.
Example:-
```
curl -H "Authorization: ApiKey YXBpLWtleS1pZDphcGkta2V5" http://localhost:9200/_cluster/health
```
Closes #34383
As mapping types are being removed throughout Elasticsearch, the use of
`_type` in pipeline simulation requests is deprecated. Additionally, the
default `_type` used if one is not supplied has been changed to `_doc` for
consistency with the rest of Elasticsearch.
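A hedged sketch of a simulate request without `_type` (the pipeline contents and document are invented):
```
POST /_ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      { "set": { "field": "greeting", "value": "hello" } }
    ]
  },
  "docs": [
    { "_index": "index", "_id": "id", "_source": { "foo": "bar" } }
  ]
}
```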
Coalesces two calls into one in a scroll example so all callouts are at
the end of the line. This is the only sort of callouts that are
supported by asciidoctor and we'd like to start building our docs with
asciidoctor.
At present we don't have any mechanism to stop folks adding more inline
callouts but we ought to be able to have one in a few weeks. For now,
though, removing these inline callouts is a step in the right direction.
Relates to #38335
Reduces the leader and follower check timeout to 3 * 10 = 30s instead of 3 * 30 = 90s, with 30s still
being a very long time for a node to be completely unresponsive.
This adds a dedicated field mapper that supports nanosecond resolution -
at the price of a reduced date range.
When using the date field mapper, the time is stored as milliseconds since the epoch
in a long in Lucene. This field mapper stores the time in nanoseconds
since the epoch - which means its range is much smaller, ranging roughly from
1970 to 2262.
Note that aggregations will still be in milliseconds.
However, docvalue fields will have full nanosecond resolution.
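A minimal mapping sketch for the new field type (index and field names are placeholders):
```
PUT my-index
{
  "mappings": {
    "properties": {
      "timestamp": { "type": "date_nanos" }
    }
  }
}
```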
Relates #27330
Introduce client-side sorting of groups based on aggregate
functions. To allow this, the Analyzer has been extended to push down
to underlying Aggregate, aggregate function and the Querier has been
extended to identify the case and consume the results in order and sort
them based on the given columns.
The underlying QueryContainer has been slightly modified to allow a view
of the underlying values being extracted as the columns used for sorting
might not be requested by the user.
The PR also adds minor tweaks, mainly related to tree output.
Close #35118
Because concurrent sync requests from a primary to its replicas could be
in flight, it can be the case that an older retention leases collection
arrives and is processed on the replica after a newer retention leases
collection has arrived and been processed. Without a defense, in this
case the replica would overwrite the newer retention leases with the
older retention leases. This commit addresses this issue by introducing
a versioning scheme to retention leases. This versioning scheme is used
to resolve out-of-order processing on the replica. We persist this
version into Lucene and restore it on recovery. The encoding of
retention leases is starting to get a little ugly. We can consider
addressing this in a follow-up.
There are two major features that are not yet supported by BKD Backed geo_shape: MultiPoint queries, and CONTAINS relation. It is important we are explicitly clear in the documentation that using the new approach may not work for users that depend on these features. This commit adds an IMPORTANT NOTE section to geo_shape docs that explicitly highlights these missing features and what should be done if they are an absolute necessity.
In 7.x Java timestamp formats are the default timestamp format and
there is no need to prefix them with "8". (The "8" prefix was used
in 6.7 to distinguish Java timestamp formats from Joda timestamp
formats.)
This change removes the "8" prefixes from timestamp formats in the
output of the ML file structure finder.
This commit enables the use of TLSv1.3 with security by enabling us to
properly map `TLSv1.3` in the supported protocols setting to the
algorithm for a SSLContext. Additionally, we also enable TLSv1.3 by
default on JDKs that support it.
An issue was uncovered with the MockWebServer when TLSv1.3 is used that
ultimately winds up in an endless loop when the client does not trust
the server's certificate. Due to this, SSLConfigurationReloaderTests
has been pinned to TLSv1.2.
Closes #32276
Currently aggregate functions can operate only directly on fields.
They cannot be used on top of scalar functions as painless scripting
is currently not supported.
With #37000 we made sure that final reduction is automatically disabled
whenever a localClusterAlias is provided with a SearchRequest.
While working on #37838, we found a scenario where we do need to set a
localClusterAlias yet we would like to perform a final reduction in the
remote cluster: when searching on a single remote cluster.
Relates to #32125
This commit adds support for a separate finalReduce flag to
SearchRequest and makes use of it in TransportSearchAction in case we
are searching against a single remote cluster.
This also makes sure that num_reduce_phases is correct when searching
against a single remote cluster: it makes little sense to return
`num_reduce_phases` set to `2`, which looks especially weird in case
the search was performed against a single remote shard. We should
perform one reduction phase only in this case and `num_reduce_phases`
should reflect that.
* line length
This change forbids negative field boost in the `query_string`, `simple_query_string`
and `multi_match` queries.
Negative boosts are not allowed in Lucene 8 (scores must be positive).
The backport of this change to 6x will turn the error into a deprecation warning
in order to raise the awareness of this breaking change in 7.0.
Closes #33309
In 6.3 trial licenses were changed to default to security
disabled, and we added some heuristics to detect when security should
be automatically enabled if `xpack.security.enabled` was not set.
This change removes those heuristics, and requires that security be
explicitly enabled (via the `xpack.security.enabled` setting) for
trial licenses.
Relates: #38009
Implements `geotile_grid` aggregation
This patch refactors a previous implementation https://github.com/elastic/elasticsearch/pull/30240
This code uses the same base classes as `geohash_grid` agg, but uses a different hashing
algorithm to allow zoom consistency. Each grid bucket is aligned to Web Mercator tiles.
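A hedged sketch of the aggregation (field name and precision are arbitrary; precision roughly corresponds to a Web Mercator zoom level):
```
GET /my-locations/_search
{
  "size": 0,
  "aggs": {
    "tiles": {
      "geotile_grid": {
        "field": "location",
        "precision": 8
      }
    }
  }
}
```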
When the ingest node user agent parses the device field, it
will result in a string value. To match the ecs schema
this commit moves the value of the parsed device to an
object with an inner field named 'name'. There are not
any passivity concerns since this modifies an unreleased change.
closes #38094
relates #37329
FIRST and LAST can be used with one argument and work similarly to MIN
and MAX but they are implemented using a Top Hits aggregation and
therefore can also operate on keyword fields. When a second argument is
provided then they return the first/last value of the first arg when its
values are ordered ascending/descending (respectively) by the values of
the second argument. Currently because of the usage of a Top Hits
aggregation FIRST and LAST cannot be used in the HAVING clause of a
GROUP BY query to filter on the results of the aggregation.
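A hedged sketch of both forms (index and column names are invented):
```
POST /_sql?format=txt
{
  "query": "SELECT gender, FIRST(first_name) AS f, LAST(first_name, birth_date) AS l FROM emp GROUP BY gender"
}
```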
Closes: #35639
With #37566 we have introduced the ability to merge multiple search responses into one. That makes it possible to expose a new way of executing cross-cluster search requests that makes CCS much faster whenever there is network latency between the CCS coordinating node and the remote clusters. The coordinating node can now send a single search request to each remote cluster, which gets reduced by each one of them. from + size results are requested from each cluster, and the reduce phase in each cluster is non final (meaning that buckets are not pruned and pipeline aggs are not executed). The CCS coordinating node performs an additional, final reduction, which produces one search response out of the multiple responses received from the different clusters.
This new execution path will be activated by default for any CCS request unless a scroll is provided or inner hits are requested as part of field collapsing. The search API now accepts a new parameter called ccs_minimize_roundtrips that allows opting out of the default behaviour.
Relates to #32125
* Added SSL configuration options tests
Removed the allow.self.signed option from the documentation since we allow
self-signed certificates by default as well.
* Added more tests
Today we pass `discovery.zen.minimum_master_nodes` to nodes started up in
tests, but for 7.x nodes this setting is not required as it has no effect.
This commit removes this setting so that nodes are started with more realistic
configurations, and deprecates it.
* Add ECS schema for user-agent ingest processor (#37727)
This switches the format of the user agent processor to use the schema from [ECS](https://github.com/elastic/ecs).
So rather than something like this:
```
{
"patch" : "3538",
"major" : "70",
"minor" : "0",
"os" : "Mac OS X 10.14.1",
"os_minor" : "14",
"os_major" : "10",
"name" : "Chrome",
"os_name" : "Mac OS X",
"device" : "Other"
}
```
The structure is now like this:
```
{
"name" : "Chrome",
"original" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36",
"os" : {
"name" : "Mac OS X",
"version" : "10.14.1",
"full" : "Mac OS X 10.14.1"
},
"device" : "Other",
"version" : "70.0.3538.102"
}
```
This is now the default for 7.0. The deprecated `ecs` setting in 6.x is not
supported.
Resolves#37329
* Remove `ecs` setting from docs