This adds support for templating in rank eval requests.
Relates to #20231
Problem: In its current state the rank-eval request API forces the user to repeat complete queries for each test request. In most use cases the structure of the query under test will be stable, with only parameters changing across requests, so this amounts to lots of boilerplate JSON for something that could be expressed more concisely.
This change uses templating via the ScriptService to let users submit a single test request template and specify only the template parameters on a per test request basis.
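A rough sketch of what a templated rank-eval request could look like (the field names used here, such as `templates`, `template_id` and `params`, are illustrative assumptions rather than the final API):
```js
GET /my_index/_rank_eval
{
  "templates": [
    {
      "id": "match_query_template",
      "template": {
        "inline": {
          "query": { "match": { "{{field}}": "{{query_string}}" } }
        }
      }
    }
  ],
  "requests": [
    {
      "id": "amsterdam_query",
      "template_id": "match_query_template",
      "params": { "field": "text", "query_string": "amsterdam" },
      "ratings": [ { "_index": "my_index", "_id": "doc1", "rating": 1 } ]
    }
  ]
}
```
The query structure is declared once and each test request only carries its parameters.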
This fixes our cluster formation task to run REST tests against a mixed version cluster.
Yet, due to some limitations in our test framework `indices.rollover` tests are currently
disabled for the BWC case since they select the current master as the merge node which
happens to be a BWC node and we can't relocate all shards to it since the primaries are on
a higher version node. This will be fixed in a followup.
Closes #21142
Note: This has been cherry-picked from 5.0 and fixes several rest tests
as well as a BWC break in `OsStats.java`
When indices stats are requested via the node stats API, there is a
level parameter to request stats at the index, node, or shards
level. This parameter was not whitelisted when URL parsing was made
strict. This commit whitelists this parameter.
Additionally, there was some leniency in the parsing of this parameter
that has been removed.
Relates #21024
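For example, to get shard-level indices stats through the node stats API:
```js
GET /_nodes/stats/indices?level=shards
```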
The create request now requires that an ID be present.
Currently the clients hard code a create method, but
we should just add a create REST spec so this method
can be autogenerated.
Today we don't parse alias filters on the coordinating node, we only forward
the alias patterns to the executing node and resolve them late. This has several problems;
for instance, requests that go through filtered aliases are never cached if they use date math,
since the parsing happens very late in the process, even without rewriting. The filter also used
to be processed on every shard, while we can do it just once per index on the coordinating node.
Another nice side-effect is that we are no longer prone to cluster-state updates that change an alias:
all nodes will execute the exact same alias filter since they are processed based on the same
cluster state.
* Adding built-in sorting capability to _cat apis.
Closes #16975
* addressing pr comments
* changing value types back to original implementation and fixing cosmetic issues
* Changing compareTo, hashCode of value types to a better implementation
* Changed value compareTos to use Double.compare instead of if statements + fixed some failed unit tests
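For example, sorting indices by store size in descending order via the new `s` parameter:
```js
GET /_cat/indices?v&s=store.size:desc
```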
Today when parsing a request, Elasticsearch silently ignores incorrect
(including parameters with typos) or unused parameters. This is bad as
it leads to requests having unintended behavior (e.g., if a user hits
the _analyze API and misspells "tokenizer", then Elasticsearch will
just use the standard analyzer, completely against their intentions).
This commit removes lenient URL parameter parsing. The strategy is
simple: when a request is handled and a parameter is touched, we mark it
as such. Before the request is actually executed, we check to ensure
that all parameters have been consumed. If there are remaining
parameters yet to be consumed, we fail the request with a list of the
unconsumed parameters. An exception has to be made for parameters that
format the response (as opposed to controlling the request); for this
case, handlers are able to provide a list of parameters that should be
excluded from tripping the unconsumed parameters check because those
parameters will be used in formatting the response.
Additionally, some inconsistencies between the parameters in the code
and in the docs are corrected.
Relates #20722
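For example, a request with a misspelled parameter such as:
```js
GET /_analyze?tokenzier=whitespace&text=the+quick+brown+fox
```
now fails with a 400 error whose reason is along the lines of "contains unrecognized parameter: [tokenzier]" (exact wording approximate) instead of silently analyzing with the defaults.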
* master: (1199 commits)
[DOCS] Remove non-valid link to mapping migration document
Revert "Default `include_in_all` for numeric-like types to false"
test: add a test with ipv6 address
docs: clearify that both ip4 and ip6 addresses are supported
Include complex settings in settings requests
Add production warning for pre-release builds
Clean up confusing error message on unhandled endpoint
[TEST] Increase logging level in testDelayShards()
change health from string to enum (#20661)
Provide error message when plugin id is missing
Document that sliced scroll works for reindex
Make reindex-from-remote ignore unknown fields
Remove NoopGatewayAllocator in favor of a more realistic mock (#20637)
Remove Marvel character reference from guide
Fix documentation for setting Java I/O temp dir
Update client benchmarks to log4j2
Changes the API of GatewayAllocator#applyStartedShards and (#20642)
Removes FailedRerouteAllocation and StartedRerouteAllocation
IndexRoutingTable.initializeEmpty shouldn't override supplied primary RecoverySource (#20638)
Smoke tester: Adjust to latest changes (#20611)
...
This commit changes the default behavior of `_flush` to block if other flushes are ongoing.
This also removes the use of `FlushNotAllowedException`; instead, the call simply returns
immediately by skipping the flush. Users who set this option should be aware that the flush
might or might not flush everything to disk, i.e. there is no transactional behavior of any sort.
Closes #20569
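A sketch of how to keep the non-blocking behavior: the existing `wait_if_ongoing` parameter can be set to `false`, in which case the flush is skipped and the call returns immediately rather than throwing `FlushNotAllowedException` as before.
```js
POST /my-index/_flush?wait_if_ongoing=false
```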
Adds a cat api endpoint: /_cat/templates and its more specific version, /_cat/templates/{name}.
It looks something like:
$ curl "localhost:9200/_cat/templates?v"
name template order version
sushi_california_roll *avocado* 1 1
pizza_hawaiian *pineapples* 1
pizza_pepperoni *pepperoni* 1
Filtering on a template name (only * globs are allowed) looks like:
$ curl "localhost:9200/_cat/templates/pizza*"
name template order version
pizza_hawaiian *pineapples* 1
pizza_pepperoni *pepperoni* 1
Partially specified columns:
$ curl "localhost:9200/_cat/templates/pizza*?v=true&h=name,template"
name template
pizza_hawaiian *pineapples*
pizza_pepperoni *pepperoni*
The help text:
$ curl "localhost:9200/_cat/templates/pizza*?help"
name | n | template name
template | t | template pattern string
order | o | template application order number
version | v | version
Closes #20467
We can now run templates using the `explain` and/or `profile` parameters.
This is useful when you have defined a complicated template and want to debug it in an easier way than running the full query again.
You can use `explain` parameter when running a template:
```js
GET /_search/template
{
"file": "my_template",
"params": {
"status": [ "pending", "published" ]
},
"explain": true
}
```
You can use `profile` parameter when running a template:
```js
GET /_search/template
{
"file": "my_template",
"params": {
"status": [ "pending", "published" ]
},
"profile": true
}
```
The command-line arguments for Elasticsearch must now be specified using
-E. This commit fixes the usage of command-line arguments in the REST
API spec README.
The only repository we can be sure is safe to clean is `fs` so we clean
any snapshots in those repositories after each test. Other repositories
like url and azure tend to throw exceptions rather than let us fetch
their contents during the REST test. So we clean what we can.
Closes #18159
The refresh description should indicate that the affected shards are
refreshed as opposed to the entire index.
This was raised as a discrepancy on
discuss (https://discuss.elastic.co/t/refresh-parameter-of-index-api/59008/2)
on the .NET client that originates from code generated from the rest api
spec. The description has been updated in master but should be updated
for the 2.4.0 release.
We put the rest api spec into a jar for upload to maven, so that we can
use within external rest tests. This change adds making a pom for maven
(as well as producing sources and javadoc jars, even though they will be
empty, because maven central requires them).
This change replaces the fields parameter with stored_fields when it makes sense.
This is dictated by the renaming we made in #18943 for the search API.
The following endpoints have been changed to use `stored_fields` instead of `fields`:
* get
* mget
* explain
The documentation and the rest API spec have been updated to cope with the changes for the following APIs:
* delete_by_query
* get
* mget
* explain
The `fields` parameter has been deprecated for the following APIs (it is replaced by _source filtering):
* update: the fields are extracted from the _source directly.
* bulk: the fields parameter is used but fields are extracted from the source directly so it is allowed to have non-stored fields.
Some APIs still have the `fields` parameter for various reasons:
* cat.fielddata: the fields parameter relates to the fielddata fields that should be printed.
* indices.clear_cache: used to indicate which fielddata fields should be cleared.
* indices.get_field_mapping: used to filter fields in the mapping.
* indices.stats: get stats on fields (stored or not stored).
* termvectors and mtermvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.
* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.
Fixes #20155
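For example, with the get API the request now looks like this (assuming `tags` and `counter` are stored fields):
```js
GET /twitter/tweet/1?stored_fields=tags,counter
```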
This commit adds a health status parameter to the cat indices API for
filtering on indices that match the specified status (green|yellow|red).
Relates #20393
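For example, to list only indices that are currently red:
```js
GET /_cat/indices?v&health=red
```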
Add docs to template support for _msearch
Relates to #10885
Relates to #15674
* Reference those docs from the rest api spec for _msearch/template support.
When the free or used memory amounts are much less than the total memory, the percentage
returned could be 0%. The yaml tests check that the free/used
percentage are valid values by asserting `is_true`, but it
turns out that `is_true` returns false if the value is
assigned but it is 0 or even the string "0". This commit
changes the assertion in the yaml test to ensure the value
is greater than or equal to 0 instead.
This was an error-prone version type that allowed overriding previous
version semantics. It could cause primaries and replicas to be out of
sync however, so it has been removed.
Resolves #19769
This adds a version field to Templates, which is itself unused by Elasticsearch but exists for users to better manage their own templates. Like description, it's optional.
Currently it does not because our parsers do not support big integers/decimals
(on purpose) but we do not have to ask our parser for the number type, we can
just ask the jackson parser for a number representation of the value with the
right type.
Note that I did not add similar tests for big decimals because Jackson seems to
never return big decimals, even for decimal values that are out of the range of
values that can be represented by doubles.
Closes #11508
The mem section in cluster stats was buggy and was removed. It is now added back with the same structure as in node stats, containing total memory, available memory, used memory and percentages. All the values are the sum across all the nodes in the cluster (or at least the ones that we were able to get the values from).
If Elasticsearch controls the ID values as well as the document
versions, we can optimize the code that adds / appends the documents
to the index. Essentially we can skip the version lookup for all
documents unless the same document is delivered more than once.
On the Lucene level we can simply call IndexWriter#addDocument instead
of #updateDocument, but on the Engine level we need to ensure that we deoptimize
the case once we see the same document more than once.
This is done as follows:
1. Mark every request with a timestamp. This is done once on the first node that
receives a request and is fixed for this request. This can be even the
machine local time (see why later). The important part is that retry
requests will have the same value as the original one.
2. In the engine we make sure we keep the highest seen time stamp of "retry" requests.
This is updated while the retry request has its doc id lock. Call this `maxUnsafeAutoIdTimestamp`
3. When an "optimized" request comes in, the engine compares its timestamp with the
current `maxUnsafeAutoIdTimestamp` (but doesn't update it). If the request
timestamp is higher, it is safe to execute it as optimized (no retry request with the same
timestamp has been run before). If not, we fall back to "non-optimized" mode and run the request
as a retry one, and update the `maxUnsafeAutoIdTimestamp` unless it has already been updated to a higher value.
Relates to #19813
* Params improvements to Cluster Health API wait for shards
Previously, the cluster health API used a strictly numeric value
for `wait_for_active_shards`. However, with the introduction of
ActiveShardCount and the removal of write consistency level for
replication operations, `wait_for_active_shards` is used for
write operations to represent values for ActiveShardCount. This
commit moves the cluster health API's usage of `wait_for_active_shards`
to be consistent with its usage in the write operation APIs.
This commit also changes `wait_for_relocating_shards` from a
numeric value to a simple boolean value `wait_for_no_relocating_shards`
to set whether the cluster health operation should wait for
all relocating shards to complete relocation.
* Addresses code review comments
* Don't be lenient if `wait_for_relocating_shards` is set
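A sketch of the updated parameters in use:
```js
GET /_cluster/health?wait_for_active_shards=all&wait_for_no_relocating_shards=true
```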
While removing an index isn't actually an alias action, if we add
an alias action that deletes an index then we can delete an index
and add an alias with the same name as the index atomically, in
the same cluster state update.
Closes #20064
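A sketch of the atomic swap this enables, using a hypothetical `logs` index that is replaced by an alias pointing at `logs-v2` in a single cluster state update:
```js
POST /_aliases
{
  "actions": [
    { "remove_index": { "index": "logs" } },
    { "add": { "index": "logs-v2", "alias": "logs" } }
  ]
}
```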
This commit adds support for exclusion filters to the response filtering (filter_path) feature. It changes the XContentBuilder APIs so that they now accept two types of filters: inclusive and exclusive. Filters are no longer String arrays but sets of Strings instead.
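For example, inclusive and exclusive filters can now be combined; this sketch keeps the state of every index but excludes any index whose name starts with `logstash-`:
```js
GET /_cluster/state?filter_path=metadata.indices.*.state,-metadata.indices.logstash-*
```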
Adds an explicit recoverySource field to ShardRouting that characterizes the type of recovery to perform:
- fresh empty shard copy
- existing local shard copy
- recover from peer (primary)
- recover from snapshot
- recover from other local shards on same node (shrink index action)
This makes GET operations more consistent with `_search` operations which expect
`(stored_)fields` to work on stored fields and source filtering to work on the
`_source` field. This is now possible thanks to the fact that GET operations
do not read from the translog anymore (#20102), and it also allows us to get rid of
`FieldMapper#isGenerated`.
The `_termvectors` API (and thus more_like_this too) was relying on the fact
that GET operations would extract fields from either stored fields or the source
so the logic to do this that used to exist in `ShardGetService` has been moved
to `TermVectorsService`. It would be nice if term vectors did not rely on this,
but that does not seem to be low-hanging fruit.
The network types in use on a cluster can be useful information to have,
so this commit adds aggregate metrics for the network types in use in a
cluster to the cluster stats.
Relates #20144
This change adds a special field named _none_ that allows disabling the retrieval of the stored fields in a search request or in a TopHitsAggregation.
To completely disable stored fields retrieval (including disabling metadata fields retrieval such as _id or _type) use _none_ like this:
```js
POST _search
{
"stored_fields": "_none_"
}
```
Today we do a lot of accounting inside the engine to maintain locations
of documents inside the transaction log. This is only needed to ensure
we can return a document's source from the engine if it hasn't been refreshed.
Aside from the added complexity of being able to read from the currently writing translog,
maintaining pointers into the translog also caused inconsistencies like different values
of the `_ttl` field depending on whether it was read from the tlog or not. TermVectors are totally different if
the document is fetched from the translog, since copy fields are ignored etc.
This change will simply call `refresh` if the document's latest version is not in the index. This
streamlines the semantics of the `_get` API and allows for more optimizations inside the engine
and on the transaction log. Note: `_refresh` is only called iff the requested document is not refreshed
yet but has recently been updated or added.
Relates to #19787
Adds ignoreUnavailable to the snapshot status API to be consistent
with the get snapshots API which has a similar parameter. If
ignoreUnavailable is set to true, then the snapshot status request
will ignore any snapshots that were not found in the repository,
instead of throwing a SnapshotMissingException.
Closes #18522
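For example (sketch, assuming a repository named `my_repository`):
```js
GET /_snapshot/my_repository/snapshot_1,snapshot_2/_status?ignore_unavailable=true
```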
Currently both `PUT` and `POST` can be used to create indices. This commit
removes support for `POST index_name` so that we can use it to index documents
with auto-generated ids once types are removed.
Relates #15613
Previously created indices that have analyzer aliases in their analysis settings will still work, but
any attempts to create an alias for analyzers in newly created indices
will result in an IllegalArgumentException.
As a result, the setting `index.analysis.analyzer.{analyzerName}.alias` is
no longer supported.
Closes #18244
The payload option was introduced with the new completion
suggester implementation in v5, as a stopgap solution
to return additional metadata with suggestions.
Now we can return associated documents with suggestions
(#19536) through fetch phase using stored field (_source).
The additional fetch phase ensures that we only fetch
the _source for the global top-N suggestions instead of
fetching _source of top results for each shard.
Adds `warnings` syntax to the yaml test that allows you to expect
a `Warning` header that looks like:
```
- do:
warnings:
- '[index] is deprecated'
- quotes are not required because yaml
- but this argument is always a list, never a single string
- no matter how many warnings you expect
get:
index: test
type: test
id: 1
```
These are accessible from the docs with:
```
// TEST[warning:some warning]
```
This should help to force you to update the docs if you deprecate
something. You *must* add the warnings marker to the docs or the build
will fail. While you are there you *should* update the docs to add
deprecation warnings visible in the rendered results.
Today, when listing thread pools via the cat thread pool API, thread
pools are listed in a column-delimited format. This is unfriendly to
command-line tools, and inconsistent with other cat APIs. Instead,
thread pools should be listed in a row-delimited format.
Additionally, the cat thread pool API is limited to a fixed list of
thread pools that excludes certain built-in thread pools as well as all
custom thread pools. These thread pools should be available via the cat
thread pool API.
This commit improves the cat thread pool API by listing all thread pools
(built-in or custom), and by listing them in a row-delimited
format. Finally, for each node, the output thread pools are sorted by
thread pool name.
Relates #19721
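For example, selecting a few columns per node and thread pool:
```js
GET /_cat/thread_pool?v&h=node_name,name,active,queue,rejected
```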
* Rename operation to result and reworking responses
* Rename DocWriteResponse.Operation enum to DocWriteResponse.Result
These are just easier to interpret names.
Closes #19664
Before this commit, when an index pattern was used to filter the cluster state, only the indices metadata were populated and the routing table was just empty. This commit aligns the filtering of the cluster state's routing table with the filtering of its metadata, so that coherent data are returned for both routing table and metadata when an index pattern is requested.
This adds an extra REST handler for "_ingest/pipeline" so that users do not need to supply "_ingest/pipeline/*" to get all of them.
- Also adds a teardown section to related REST-tests for ingest.
Performing the bulk request shown in #19267 now results in the following:
```
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"create","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":201}
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"noop","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":200}
```
When the request body is missing, all documents in the target index are counted.
As mentioned in #19422, the same should happen when the request body is an empty
json object. This is also the behaviour for the `_search` endpoint and the two
APIs should behave in the same way.
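Both of the following now count all documents in the index (sketch):
```js
GET /test/_count

GET /test/_count
{}
```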
With #19140 we started persisting the node ID across node restarts. Now that we have a "stable" anchor, we can use it to generate a stable default node name and make it easier to track nodes across restarts. Sadly, this means we will not have those random fun Marvel characters, but we feel this is the right tradeoff.
On the implementation side, this requires a bit of juggling because we now need to read the node ID from disk before we can log, as the node name is part of each log message. The PR moves the initialization of NodeEnvironment as high up in the starting sequence as possible, with only one log message before it to indicate we are initializing. Things now look like this:
```
[2016-07-15 19:38:39,742][INFO ][node ] [_unset_] initializing ...
[2016-07-15 19:38:39,826][INFO ][node ] [aAmiW40] node name set to [aAmiW40] by default. set the [node.name] settings to change it
[2016-07-15 19:38:39,829][INFO ][env ] [aAmiW40] using [1] data paths, mounts [[ /(/dev/disk1)]], net usable_space [5.5gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2016-07-15 19:38:39,830][INFO ][env ] [aAmiW40] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 19:38:39,837][INFO ][node ] [aAmiW40] version[5.0.0-alpha5-SNAPSHOT], pid[46048], build[473d3c0/2016-07-15T17:38:06.771Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2016-07-15 19:38:40,980][INFO ][plugins ] [aAmiW40] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy, transport-netty], plugins []
[2016-07-15 19:38:43,218][INFO ][node ] [aAmiW40] initialized
```
Needless to say, setting `node.name` explicitly still works as before.
The commit also contains some clean-ups to the relationship between Environment, Settings and Plugins. The previous code suggested the path-related settings could be changed after the initial Environment was created. This did not have any effect as the security manager had already locked things down.
This makes the test wait until all urgent requests are completed before
finishing, so that tear down can properly delete the created index
and clean up. Without this wait, it was possible that the test would
finish and the teardown deletion of indices would happen before the
index creation was even processed, causing the test to leave a created
index behind.
Due to the lack of tests, the analyzer.alias feature was pretty much not working
at all on current master. Issues like #19163 showed some serious problems for users
using this feature when upgrading to an alpha version.
This change fixes the processing order and allows aliases to be set for
existing analyzers like `default`. This change also ensures that if `default`
is aliased the correct analyzer is used for `default_search` etc.
Closes #19163
Add parser for anonymous char_filters/tokenizer/token_filters
Using Settings in AnalyzeRequest for anonymous definition
Add breaking changes document
Closed #8878
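For example, an anonymous token filter can now be defined inline in an `_analyze` request (a sketch following the 5.x `_analyze` syntax):
```js
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [ "lowercase", { "type": "stop", "stopwords": ["a", "is", "this"] } ],
  "text": "this is a test"
}
```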
Removes the wait for yellow cluster health after index
creation in the REST tests, as we no longer need it due
to index creation now waiting for active shard copies
before returning (by default, it waits for the primary of
each shard, which is the same as ensuring yellow health).
Relates #19450
Before returning, index creation now waits for the configured number
of shard copies to be started. In the past, a client would create an
index and then potentially have to check the cluster health to wait
to execute write operations. With the cluster health semantics changing
so that index creation does not cause the cluster health to go RED,
this change enables waiting for the desired number of active shards
to be active before returning from index creation.
Relates #9126
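For example, to wait for two active copies of each shard before the create call returns (sketch):
```js
PUT /twitter?wait_for_active_shards=2
```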
The Java API supports this; while it is mostly used for tests, it can also be useful in
production environments. For instance, if something is automated like a settings change
and we execute a health request right after it, the settings update might have consequences
like a reroute which hasn't been fully applied yet since its preconditions are not fulfilled.
For instance, if not all shards have started, the settings update is applied but the reroute won't move
currently initializing shards, like in the shrink API test. Sure, this could be done by waiting for
green beforehand, but if the cluster moves shards due to some side-effects, waiting for all events is
still useful. I also took the chance to add unit tests to Priority.java
Closes #19419
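For example, waiting until all pending cluster state tasks down to `languid` priority have been processed (the priority names come from Priority.java):
```js
GET /_cluster/health?wait_for_events=languid
```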
This change adds the type of the field in the fieldstats response.
It can be one of the following:
* "integer" for byte, short, integer and long
* "float" for float, half-float and double
* "date" for date
* "ip" for ip
* "text" for string, keyword and text.
Closes #17750
* master: (192 commits)
[TEST] Fix rare OBOE in AbstractBytesReferenceTestCase
Reindex from remote
Rename writeThrowable to writeException
Start transport client round-robin randomly
Reword Refresh API reference (#19270)
Update fielddata.asciidoc
Fix stored_fields message
Add missing footer notes in mapper size docs
Remote BucketStreams
Add doc values support to the _size field in the mapper-size plugin
Bump version to 5.0.0-alpha5.
Update refresh.asciidoc
Update shrink-index.asciidoc
Change Debian repository for Vagrant debian-8 box
[TEST] fix test to account for internal empyt reference optimization
Upgrade to netty 3.10.6.Final (#19235)
[TEST] fix histogram test when extended bounds overlaps data
Remove redundant modifier
Simplify TcpTransport interface by reducing send code to a single send method (#19223)
Fix style violation in InstallPluginCommand.java
...
Rename `fields` to `stored_fields` and add `docvalue_fields`
`stored_fields` parameter will no longer try to retrieve fields from the _source but will only return stored fields.
`fields` will throw an exception if the user uses it.
Add `docvalue_fields` as an adjunct to `fielddata_fields` which is deprecated. `docvalue_fields` will try to load the value from the docvalue and fallback to fielddata cache if docvalues are not enabled on that field.
Closes #18943
No need to match against yaml responses via regexes in REST tests, yaml responses can be properly parsed via ObjectPath instead. Few REST tests need to be updated accordingly.
This commit adds support for a `teardown` section that can be defined in REST tests to
clean up any items that may have been created by the test and are not cleaned up by
deletion of indices and templates.
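A minimal sketch of the new section, assuming a hypothetical `my_pipeline` created by the test:
```
---
teardown:
  - do:
      ingest.delete_pipeline:
        id: "my_pipeline"
```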
This commit fixes several NPEs caused by implicitly performing a get request for a document that exists with its _source disabled and then trying to access the source. Instead of causing an NPE, the following queries will throw an exception with a "source disabled" message (similar behavior as if the document does not exist):
- GeoShape query for pre-indexed shape (throws IllegalArgumentException)
- Percolate query for an existing document (throws IllegalArgumentException)
A Terms query with a lookup will ignore the document if the source does not exist (same as if the document does not exist).
GET and HEAD requests for the document _source will return a 404 if the source is disabled (even if the document exists).
* master: (416 commits)
docs: removed obsolete information, percolator queries are not longer loaded into jvm heap memory.
Upgrade JNA to 4.2.2 and remove optionality
[TEST] Increase timeouts for Rest test client (#19042)
Update migrate_5_0.asciidoc
Add ThreadLeakLingering option to Rest client tests
Add a MultiTermAwareComponent marker interface to analysis factories. #19028
Attempt at fixing IndexStatsIT.testFilterCacheStats.
Fix docs build.
Move templates out of the Search API, into lang-mustache module
revert - Inline reroute with process of node join/master election (#18938)
Build valid slices in SearchSourceBuilderTests
Docs: Convert aggs/misc to CONSOLE
Docs: migration notes for _timestamp and _ttl
Group client projects under :client
[TEST] Add client-test module and make client tests use randomized runner directly
Move upgrade test to upgrade from version 2.3.3
Tasks: Add completed to the mapping
Fail to start if plugin tries broken onModule
Remove duplicated read byte array methods
Rename `fields` to `stored_fields` and add `docvalue_fields`
...
This commit moves template support out of the Search API to its own dedicated Search Template API in the lang-mustache module. It provides a new SearchTemplateAction that can be used to render templates before the request gets delegated to the usual Search API. The current REST endpoints are identical, but the Render Search Template endpoint now uses the same Search Template API with a new "simulate" option. When this option is enabled, the Search Template API only renders the template and returns immediately, without executing the search.
Closes #17906
This adds a get task API that supports GET /_tasks/${taskId} and
removes that responsibility from the list tasks API. The get task
API supports wait_for_completion just as the list tasks API does
but doesn't support any of the list task API's filters. In exchange,
it supports falling back to the .results index when the task isn't
running any more. Like any good GET API it 404s when it doesn't
find the task.
Then we change reindex, update-by-query, and delete-by-query to
persist the task result when wait_for_completion=false. This leads
to the neat behavior that, once you start a reindex with
wait_for_completion=false, you can fetch the result of the task by
using the get task API and see the result when it has finished.
Also rename the .results index to .tasks.
By default the number of searches msearch executes is capped by the number of
nodes multiplied with the default size of the search threadpool. This default can be
overwritten by using the newly added `max_concurrent_searches` parameter.
Before, the msearch api would execute all searches concurrently. If many large
msearch requests were executed, this could lead to some searches being rejected
while other searches in the msearch request would succeed.
The goal of this change is to avoid this exhaustion of the search threadpool.
Closes #17926
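For example, capping a multi search at 16 concurrent searches (sketch):
```js
GET /_msearch?max_concurrent_searches=16
{"index": "test"}
{"query": {"match_all": {}}}
{"index": "test"}
{"query": {"match": {"title": "elasticsearch"}}}
```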
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests; these were already covered by the rest tests.
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved reindex ingest tests to the qa module.
Closes #18490
This adds support for setting the refresh request parameter to
`wait_for` in the `index`, `delete`, `update`, and `bulk` APIs. When
`refresh=wait_for` is set those APIs will not return until their
results have been made visible to search by a refresh.
Also it adds a `forced_refresh` field to the response of `index`,
`delete`, `update`, and to each item in a bulk response. This will
be true for requests with `?refresh` or `?refresh=true` and will be
true for some requests (see below) with `refresh=wait_for` but ought
to otherwise always be false.
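For example, this index call only returns once the document is visible to search (sketch):
```js
PUT /test/test/1?refresh=wait_for
{"foo": "bar"}
```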
`refresh=wait_for` is implemented as a list of
`Tuple<Translog.Location, Consumer<Boolean>>`s in the new `RefreshListeners`
class that is managed by `IndexShard`. The dynamic, index scoped
`index.max_refresh_listeners` setting controls a maximum number of
listeners allowed in any shard. If more than that many listeners
accumulate in the engine then a refresh will be forced, the thread that
adds the listener will be blocked until the refresh completes, and then the
listener will be called with a `forcedRefresh` flag so it knows that it was
the "straw that broke the camel's back". These listeners are only used by
`refresh=wait_for` and that flag manifests itself as `forced_refresh` being
`true` in the response.
About half of this change comes from piping async-ness down to the appropriate
layer in a way that is compatible with the ongoing work on sequence ids.
Closes #1063
You can look up the winding story of all the commits here:
https://github.com/elastic/elasticsearch/pull/17986
Here are the commit messages in case they are interesting to you:
commit 59a753b89109828d2b8f0de05cb104fc663cf95e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:18:23 2016 -0400
Replace a method reference with implementing an interface
Saves a single allocation and forces more commonality
between the WriteResults.
commit 31f7861a85b457fb7378a6f27fa0a0c171538f68
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:07:55 2016 -0400
Revert "Replace static method that takes consumer with delegate class that takes an interface"
This reverts commit 777e23a6592c75db0081a53458cc760f4db69507.
commit 777e23a6592c75db0081a53458cc760f4db69507
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 09:29:35 2016 -0400
Replace static method that takes consumer with delegate class that takes an interface
Same number of allocations, much less code duplication.
commit 9b49a480ca9587a0a16ebe941662849f38289644
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 08:25:38 2016 -0400
Patch from boaz
commit c2bc36524fda119fd0514415127e8901d94409c8
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:46:27 2016 -0400
Fix docs
After updating to master we are actually testing them.
commit 03975ac056e44954eb0a371149d410dcf303e212
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:20:11 2016 -0400
Cleanup after merge from master
commit 9c9a1deb002c5bebb2a997c89fa12b3d7978e02e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:09:14 2016 -0400
Breaking changes notes
commit 1c3e64ae06c07a85f7af80534fab88279adb30b4
Merge: 9e63ad6 f67e580
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:00:05 2016 -0400
Merge branch 'master' into block_until_refresh2
commit 9e63ad6de52d0b28f0b6d7203721baf1ebf6f56b
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 13:21:27 2016 -0400
Test for TransportWriteAction
commit 522ecb59d39b3c9e8df0d3b8df34b9e7aeaf0ce9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:30:18 2016 -0400
Document deprecation
commit 0cd67b947f58867e704a1f0e66928a6fb5a11f11
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:26:23 2016 -0400
Deprecate setRefresh(boolean)
Users should use `setRefresh(RefreshPolicy)` instead.
commit aeb1be3f2c501990b33fb1f8230d496035f498ef
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:12:27 2016 -0400
Remove checkstyle suppression
It is fixed
commit 00d09a9caa638b6f90f4896b5502dd98d8fad56e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:08:28 2016 -0400
Improve comment
commit 788164b898a6ee2878a273961230122b7386c3c9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:01:01 2016 -0400
S/ReplicatedWriteResponse/WriteResponse/
Now it lines up with WriteRequest.
commit b74cf3fe778352b140355afcaa08d3d4412d749d
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 18:27:52 2016 -0400
Preserve `?refresh` behavior
`?refresh` means the same things as `?refresh=true`.
commit 30f972bdaeaaa0de6fe67746cdb8628aa86f5a8c
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 17:39:05 2016 -0400
Handle hanging documents
If a document is added to the index during a refresh we weren't properly
firing its refresh listener. This happened because the way we detect
whether a refresh makes something visible or not is imperfect. It is
ok because it always errs on the side of thinking that something isn't
yet visible.
So when a document arrives during a refresh the refresh listeners
won't think it made it into a refresh when, often, it does. The way
we work around this is by telling Elasticsearch that it ought to
trigger a refresh if there are any pending refresh listeners even
if there aren't pending documents to update. Lucene short circuits
the refresh so it doesn't take that much effort, but the refresh
listeners still get the signal that a refresh has come in and they
still pick up the change and notify the listener.
This means that the time that a listener can wait is actually slightly
longer than the refresh interval.
commit d523b5702b60c7ba309fb0dcf3cd3a4798f11960
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:34:01 2016 -0400
Explain Integer.MAX_VALUE
commit 4ffb7c0e954343cc1c04b3d7be2ebad66d3a016b
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:27:39 2016 -0400
Fire all refresh listeners in a single thread
Rather than queueing a runnable each.
commit 19606ec3bbe612095df45eba734c5b7eb2709c01
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:09:52 2016 -0400
Assert translog ordering
commit 6bb4e5c75e850f4a42518f06fbc955f7ec76d245
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 13:17:44 2016 -0400
Support null RefreshListeners in InternalEngine
Just skip using it.
commit 74be1480d6e44af2b354ff9ea47c234d4870b6c2
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 18:02:03 2016 -0400
Move funny ShardInfo hack for bulk into bulk
This should make it easier to understand because it is closer to where it
matters....
commit 2b771f8dabd488e056cfdc9989608d18264ddfb0
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:39:46 2016 -0400
Pull listener out into an inner class with javadoc and stuff
commit 058481ad72019c0492b03a7a4ac32a48673697d3
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:33:42 2016 -0400
Fix javadoc links
commit d2123b1cabf29bce8ff561d4a4c1c1d5b42bccad
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:28:09 2016 -0400
Make more stuff final
commit 8453fc4f7850f6a02fb5971c17a942a3e3fd9f7b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:26:48 2016 -0400
Javadoc
commit fb16d2fc7016c1e8e1621d481e8781c7ef43326c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 16:14:48 2016 -0400
Rewrite refresh docs
commit 5797d1b1c4d233c0db918c0d08c21731ddccd05e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 15:02:34 2016 -0400
Fix forced_refresh flag
It wasn't being set.
commit 43ce50a1de250a9e073a2ca6cbf55c1b4c74b11b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 14:02:56 2016 -0400
Delay translog sync and flush until after refresh
The sync might have occurred for us during the refresh so we
have less work to do. Maybe.
commit bb2739202e084703baf02cfa58f09517598cf14e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 13:08:08 2016 -0400
Remove duplication in WritePrimaryResult and WriteReplicaResult
commit 2f579f89b4867a880396f2e7fcffc508449ff2de
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 12:19:05 2016 -0400
Clean up registration of RefreshListeners
commit 87ab6e60ca5ba945bf0fba84784b2bbe53506abf
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 11:28:30 2016 -0400
Shorten lock time in RefreshListeners
Also use null to represent no listeners rather than an empty list.
This saves allocating a new ArrayList every refresh cycle on every
index.
commit 0d49d9c5720dadfb67da3fa760397bf6d874601c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 10:46:18 2016 -0400
Flip relationship between RefreshListeners and Engine
Now RefreshListeners comes to Engine from EngineConfig.
commit b2704b8a39382953f8f91a9743e894ee289f7514
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:58 2016 -0400
Remove unused imports
Maybe I added them?
commit 04343a22647f19304d9dc716b3fac9b183227f63
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:52 2016 -0400
Javadoc
commit da1e765678890a02d61d8a29aa433274beb5e00c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:26:35 2016 -0400
Reply with non-null
Also move the fsync and flush to before the refresh listener stuff.
commit 5d8eecd0d904b497844b4c81c46477bd6178ed3a
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:58:47 2016 -0400
Remove funky synchronization in AsyncReplicaAction
commit 1ec71eea0f4e1228ae1497d982307be818ef4b65
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:01:14 2016 -0400
s/LinkedTransferQueue/ArrayList/
commit 7da36a4ceed2ccf7955138c3b005237fa41efcb4
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:46:38 2016 -0400
More cleanup for RefreshListeners
commit 957e9b77007c32ee75dde152c6622bab065d5993
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:34:13 2016 -0400
/Consumer<Runnable>/Executor/
commit 4d8bf5d4a70dcc56150c8d8d14165cd23d308b3c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:20:42 2016 -0400
explain
commit 15d948a348089bb2937eec5ac4e96f3ec67dbe32
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:17:59 2016 -0400
Better....
commit dc28951d02973fc03b4d51913b5f96de14b75607
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 21:09:20 2016 -0400
Javadocs and compromises
commit 8eebaa89c0a1ee74982fbe0d56d1485ca2ae09db
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 20:52:49 2016 -0400
Take boaz's changes to their logic conclusion and unbreak important stuff like bulk
commit 7056b96ea412f275005b93e3570bcff895859ed5
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:32 2016 -0400
Patch from boaz
commit 87be7eaed09a274cc6a99d1a3da81d2d7bf9dd64
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:13 2016 -0400
Revert "Move async parts of replica operation outside of the lock"
This reverts commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2.
commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 22:53:15 2016 -0400
Move async parts of replica operation outside of the lock
commit b8cadcef565908b276484f7f5f988fd58b38d8b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 16:17:20 2016 -0400
Docs
commit 91149e0580233bf79c2273b419fe9374ca746648
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:17:40 2016 -0400
Finally!
commit 1ff50c2faf56665d221f00a18d9ac88745904bf5
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:01:53 2016 -0400
Remove Translog#lastWriteLocation
I wasn't being careful enough with locks so it wasn't right anyway.
Instead this builds a synthetic Tranlog.Location when you call
getWriteLocation with much more relaxed equality guarantees. Rather
than being equal to the last Translog.Location returned it is
simply guaranteed to be greater than the last translog returned
and less than the next.
commit 55596ea68b5484490c3637fbad0d95564236478b
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 14:40:06 2016 -0400
Remove listener from shardOperationOnPrimary
Create instead asyncShardOperationOnPrimary which is called after
all of the replica operations are started to handle any async
operations.
commit 3322e26211bf681b37132274ee158ae330afc28b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 17:20:02 2016 -0400
Increase default maximum number of listeners to 1000
commit 88171a8322a424e624d48960fb4c98dd43e4d671
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:40:57 2016 -0400
Rename test
commit 179c27c4f829f2c6ded65967652cf85adaf2ae52
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:35:27 2016 -0400
Move refresh listeners into their own class
They still live at the IndexShard level but they live on their
own in RefreshListeners which interacts with IndexShard using a
couple of callbacks and a registration method. This lets us test
the listeners without standing up an entire IndexShard. We still
test the listeners against an InternalEngine, because the interplay
between InternalEngine, Translog, and RefreshListeners is complex
and important to get right.
commit d8926d5fc1d24b4da8ccff7e0f0907b98c583c41
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 11:02:38 2016 -0400
Move refresh listeners into IndexShard
commit df91cde398eb720143a85a8c6fa19bdc3a74e07d
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 16:01:03 2016 -0400
unused import
commit 066da45b08148b266e4173166662fc1b3f66ed53
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 15:54:11 2016 -0400
Remove RefreshListener interface
Just pass a Translog.Location and a Consumer<Boolean> when registering.
commit b971d6d3301c7522b2e7eb90d5d8dd96a77fa625
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:41:06 2016 -0400
Docs for setForcedRefresh
commit 6c43be821eaf61141d3ec520f988aad3a96a3941
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:34:39 2016 -0400
Rename refresh setter and getter
commit e61b7391f91263a4c4d6107bfbc2a828bbcc805c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 22:48:09 2016 -0400
Trigger listeners even when there is no refresh
Each refresh gives us an opportunity to pick up any listeners we may
have left behind.
commit 0c9b0477085c021f503db775640d25668e02f635
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 20:30:06 2016 -0400
REST
commit 8250343240de7e63118c663a230a7a314807a754
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 19:34:22 2016 -0400
Switch to estimated count
We don't need a linear time count of the number of listeners - a volatile
variable is good enough to guess. It probably undercounts more than it
overcounts but it isn't a huge problem.
commit bd531167fe54f1bde6f6d4ddb0a8de5a7bcc18a2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:21:02 2016 -0400
Don't try and set forced refresh on bulk items without a response
NullPointerExceptions are bad. If the entire request fails then the user
has worse problems then "did these force a refresh".
commit bcfded11515af5e0b3c3e36f3c2f73f5cd26512e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:14:20 2016 -0400
Replace LinkedList and synchronized with LinkedTransferQueue
commit 8a80cc70a76375a7593745884cb987535b37ca80
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 17:38:24 2016 -0400
Support for update
commit 1f36966742f851b7328015151ef6fc8f95299af2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:46:06 2016 -0400
Cleanup translog tests
commit 8d121bf35eb265b8a0aee9710afeb1b054a113d4
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:40:53 2016 -0400
Cleanup listener implementation
Much more testing too!
commit 2058f4a808762c4588309f21b13b677245832f2c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:45:55 2016 -0400
Pass back information about whether we refreshed
commit e445cb0cb91ebdbcfdbf566696edb2bf1c84a882
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:03:31 2016 -0400
Javadoc
commit 611cbeeaeb458f4b428bfc43a1ee6652adf4baff
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:01:40 2016 -0400
Move ReplicationResponse
now it is in the same package as its request
commit 9919758b644fd73895fb88cd6a4909a8387eb2e2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:00:14 2016 -0400
Oh boy that wasn't working
commit 247cb483c4459dea8e95e0e3bd2e4bf8d452c598
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:29:37 2016 -0400
Basic block_until_refresh exposed to java client
and basic "is it plugged in" style tests.
commit 46c855c9971cb2b748206d2afa6a2d88724be3ba
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:11:10 2016 -0400
Move test to own class
commit a5ffd892d0a352ae7e9757f2640fc2a1fa656bf2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 07:44:25 2016 -0400
WIP
commit 213bebb6ece11b85d17e44af9a54fc2e5e332d39
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:35:52 2016 -0400
Add refresh listeners
commit a2bc7f30e6d4857a1224ef5a89909b36c8f33731
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:11:55 2016 -0400
Return last written location from refresh
commit 85033a87551da89f36a23d4dfd5016db218e08ee
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 20:28:21 2016 -0400
Never reply to replica actions while you have the operation lock
This last thing was causing periodic test failures because we were
replying while we had the operation lock. Now, we probably could get
away with that in most cases but the tests don't like it and it isn't
a good idea to do network io while you have a lock anyway. So this
prevents it.
commit 1f25cf35e796835b3827b8a4110e09e5de61784c
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:56:18 2016 -0400
Cleanup
commit 52c5f7c3f04710901f503334239a611c0e21c85a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:33:00 2016 -0400
Add a listener to shard operations
commit 5b142dc331214c8eef90587144f4b3f959f9eced
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 18:03:52 2016 -0400
Cleanup
commit 3d22b2d7ceb473db339259452a7c4f117ce86069
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:59:55 2016 -0400
Push the listener into shardOperationOnPrimary
commit 34b378943b8185451acf6350f661c0ad33b5836d
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:48:47 2016 -0400
Doc
commit b42b8da968d42cc7414020c7b199606a5dcce50a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:45:40 2016 -0400
Don't finish early if the primary finishes early
We use a "fake" pending shard that we resolve when the replicas have
all started.
commit 0fc045b56e1e02a48c30383ac50a281d5af7e0b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:30:06 2016 -0400
Make performOnPrimary asyncS
Instead of returning Tuple<Response, ReplicaRequest> it returns
ReplicaRequest and takes a ActionListener<Response> as an argument.
We call the listener immediately to preserve backwards compatibility
for now.
commit 80119b9a26ede96a865af45904c3ac69d5b19b59
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:51:53 2016 -0400
Factor out common code in shardOperationOnPrimary
commit 0642083676702618f900fa842c08802a04c1a53e
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:32:29 2016 -0400
Factor out common code from shardOperationOnReplica
commit 8bdc415fedaaa9f2d0c555590a13ec4699a7c3f7
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:23:28 2016 -0400
Create ReplicatedMutationRequest
Superclass for index, delete, and bulkShard requests.
commit 0f8fa846a2822c4293df32fed18c9b99660b39ff
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:10:30 2016 -0400
Create TransportReplicatedMutationAction
It is the superclass of replication actions that mutate data: index, delete,
and shardBulk. shardFlush and shardRefresh are replication actions but they
do not extend TransportReplicatedMutationAction because they don't change
the data, only shuffle it around.
If this option is enabled on a processor, it silently catches any processor-related failure and continues executing the rest of the pipeline.
Closes #18493
This adds a low-level primitive operation to shrink an existing
index into a new index with a single shard. This primitive expects
all shards of the source index to be allocated on a single node. Once the target index is initializing on the shrink node, it takes a snapshot of the source index shards and copies all files into the target index's data folder. An [optimization](https://issues.apache.org/jira/browse/LUCENE-7300) coming in Lucene 6.1 will also allow for an optional constant-time copy if hard links are supported by the filesystem. All mappings are merged into the new index's metadata once the snapshots have been taken on the merge node.
To shrink an existing index, all shards must be moved to a single node (one instance of each shard) and the index must be read-only:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_settings' -d '{
"settings" : {
"index.routing.allocation.require._name" : "shrink_node_name",
"index.blocks.write" : true
}
}'
```
Once all shards are started on the shrink node, the new index can be created via:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_shrink/logs_single_shard' -d '{
"settings" : {
"index.codec" : "best_compression",
"index.number_of_replicas" : 1
}
}'
```
This API will perform all needed checks before the new index is created and selects the shrink node based on the allocation of the source index. This call returns immediately; to monitor shrink progress, the recovery API should be used, since all copy operations are reflected in the recovery API with byte-level copy progress etc.
The shrink operation does not modify the source index; if a shrink operation is canceled or fails, the target index can simply be deleted and
all resources are released.
Closed indices are already displayed when no indices are explicitly selected. This commit ensures that closed indices are also shown when wildcard filtering is used. It also addresses another issue that is caused by the fact that the cat action is based internally on 3 different cluster states (one when we query the cluster state to get all indices, one when we query cluster health, and one when we query indices stats). We currently fail the cat request when the user specifies a concrete index as parameter that does not exist. The implementation works as intended in that regard. It checks this not only for the first cluster state request, but also the subsequent indices stats one. This means that if the index is deleted before the cat action has queried the indices stats, it rightfully fails. In case the user provides wildcards (or no parameter at all), however, we fail the indices stats as we pass the resolved concrete indices to the indices stats request and fail to distinguish whether these indices have been resolved by wildcards or explicitly requested by the user. This means that if an index has been deleted before the indices stats request gets to execute, we fail the overall cat request. The fix is to let the indices stats request do the resolving again and not pass the concrete indices.
Closes #16419
Closes #17395
Significant changes:
* AbstractQueryTestCase has moved to the test framework module, in order to allow query builder tests in modules and plugins
* Added support to AbstractQueryTestCase to register plugins
* Lifted the restriction that only one percolator could be added per index. This validation existed in MapperService, but because the percolator moved to a module it could no longer exist there. Instead of bringing it back, it was removed. This validation existed since the percolator cache only supported one percolator query per document; since the percolator cache has been removed, this restriction could be removed as well.
* While moving percolator tests to the new module, also removed a couple of tests for the deprecated percolate and mpercolate APIs. These APIs are now sugar APIs for bwc and redirect to the search and msearch APIs. Some tests were still testing as if the percolate and mpercolate APIs did the percolation, but this is no longer the case, so these tests could be removed.
Today, if a shard fails during the initialization phase due to misconfiguration, broken disks,
missing analyzers, not-installed plugins etc., Elasticsearch keeps on trying to initialize
or rather allocate that shard. In the worst case this ends in an endless
allocation loop. To prevent this loop and all its side-effects, like spamming log files over
and over again, this commit adds an allocation decider that stops allocating a shard that
failed to allocate more than N times in a row. The number of retries can be configured via
`index.allocation.max_retry` and its default is set to `5`. Once the setting is updated,
shards with fewer failures than the configured number will be allowed to allocate again.
Internally we maintain a counter on the UnassignedInfo that is reset to `0` once the shard
has been started.
Relates to #18417
Before 5.0 it was required that percolator queries were cached in jvm heap as Lucene queries, for two reasons:
1) Performance. The percolator evaluated all percolator queries all the time. There was no pre-selection of queries that are likely to match like we have today.
2) Updates made to percolator queries were visible in realtime. Today these changes are visible in near realtime, so updating no longer requires the percolator to have the queries in jvm heap.
So having the percolator queries in jvm heap via the percolator cache is now less attractive, especially when there are many percolator queries, as these queries can consume many GBs of jvm heap.
Removing the percolator cache does make the percolate query slower compared to the execution time in 5.0.0-alpha1 and alpha2, but it is still faster compared to 2.x and before.
Sorts an array of values in ascending or descending order. If all elements are numeric, they will be sorted numerically. If values are strings, or mixtures of strings and numbers, the elements will be sorted lexicographically.
Currently terms on an ip address field try to put their binary representation in the
json response. With this commit, they return a formatted ip address:
```
"buckets": [
{
"key": "192.168.1.7",
"doc_count": 1
}
]
```
All other values are errors.
Add a java test for throttling. We had a REST test but it only ran against
one node so it didn't catch serialization errors.
Add a simple round-trip test for the rethrottle request.
* Add isSearchable and isAggregatable (collapsed to true if any of the instances of that field are searchable or aggregatable).
* Accept wildcards in field names.
* Add a section named conflicts for fields with the same name but with incompatible types (instead of throwing an exception).
The url that takes an id has a trailing forward slash. Not really an
error, but as it's the only url in the whole spec that does this it
triggered my OCD :)
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
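With these renames, a pipeline definition would look roughly like this. A sketch only; the pipeline id, field names, and date format are made up:
```
curl -XPUT 'localhost:9200/_ingest/pipeline/my-pipeline' -d '{
  "description": "illustrates the renamed options",
  "processors": [
    { "rename": { "field": "src", "target_field": "dest" } },
    { "date": { "field": "timestamp", "formats": ["ISO8601"] } }
  ]
}'
```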
Closes#17835
* Added an extra `field` parameter to the `percolator` query to indicate what percolator field should be used. This must be an existing field in the mapping of type `percolator`.
* The `.percolator` type is now forbidden. (just like any type that starts with a `.`)
This only applies to new indices created on 5.0 and later. For indices created on previous versions, the .percolator type is still allowed to exist.
The new `percolator` field type isn't active in such indices and the `PercolatorQueryCache` knows how to load queries from these legacy indices.
The `PercolatorQueryBuilder` will not enforce that the `field` parameter is of type `percolator`.
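A sketch of what a search with the new `field` parameter might look like; the index, document type, and document are made up, and parameter names other than `field` follow the percolator docs of this era and may differ:
```
curl -XGET 'localhost:9200/queries/_search' -d '{
  "query": {
    "percolator": {
      "field": "query",
      "document_type": "doc",
      "document": { "message": "hello world" }
    }
  }
}'
```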
We advertise in our documentation that byte units are like `kb`, `mb`... But we actually only support the simple notation `k` or `m`.
This commit adds support for the documented form and keeps the non documented options to avoid any breaking change.
It also adds support for `micros`, `nanos` and `d` as a time unit in `_cat` API.
Remove the support for `b` as a SizeValue unit. For raw numbers without a unit there is no text to add/parse after the number; for example, you don't write `10` as `10b`. We support options like `size=` in the `_cat` API, which mean that we want to display raw data without a unit (singles).
Documentation updated accordingly.
Add test for the empty size option.
Fix missing TimeValues options for some cat APIs
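For example, assuming `_cat/indices` keeps accepting the `bytes` parameter, the documented unit names now work:
```
curl -s 'localhost:9200/_cat/indices?v&bytes=kb'
```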
Running `gradle install` on the rest-api-spec fails because there is no available install
task. This change applies the nexus plugin so that we can install; it should also enable
publishing as part of the uploadArchives task.
If `ts=0`, cat health disables the epoch and timestamp columns.
Make timestamp and epoch constant strings.
Move timestamp and epoch to Table.
Add a rest-api test and a unit test.
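A sketch of disabling the two columns:
```
curl -s 'localhost:9200/_cat/health?v&ts=0'
```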
Closes#10109
By default, tasks are grouped by node. However, task execution in elasticsearch can be quite complex and an individual task that runs on a coordinating node can have many subtasks running on other nodes in the cluster. This commit makes it possible to list tasks grouped by common parents instead of by node. When this option is enabled, all subtasks are grouped under the coordinating node task that started all subtasks in the group. To group tasks by common parents, use the following syntax:
GET /_tasks?group_by=parents
This allows the user to update the reindex throttle on the fly, with changes
that speed up the throttling being applied immediately and changes that
slow down the throttling being applied during the next batch. This means
that if a user throttles reindex in such a way that it tries to sleep for
16 years and then realizes that they've done something wrong then they
can change the throttle and reindex will wake up again. We don't apply
slow downs immediately so we never get in danger of losing the scan context.
Also, if reindex is canceled while it is sleeping (which is how it honors throttling),
then it'll immediately wake up and cancel itself.
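A sketch of a rethrottle call; the task id is made up and the value syntax for `requests_per_second` follows the reindex docs of this era:
```
curl -XPOST 'localhost:9200/_reindex/node123:456/_rethrottle?requests_per_second=unlimited'
```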
`text` fields will have fielddata disabled by default. Fielddata can still be
enabled on an existing index by setting `fielddata=true` in the mappings.
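A sketch of re-enabling fielddata on an existing index; the index, type, and field names are made up:
```
curl -XPUT 'localhost:9200/my-index/_mapping/my-type' -d '{
  "properties": {
    "body": {
      "type": "text",
      "fielddata": true
    }
  }
}'
```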
This adds a new `/_cluster/allocation/explain` API that explains why a
shard can or cannot be allocated to nodes in the cluster. Additionally,
it will show where the master *desires* to put the shard, according to
the `ShardsAllocator`.
It looks like this:
```
GET /_cluster/allocation/explain?pretty
{
  "index": "only-foo",
  "shard": 0,
  "primary": false
}
```
Though, you can optionally send an empty body, which means "explain the
allocation for the first unassigned shard you find".
The output when a shard is unassigned looks like this:
```
{
  "shard" : {
    "index" : "only-foo",
    "index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
    "id" : 0,
    "primary" : false
  },
  "assigned" : false,
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2016-03-22T20:04:23.620Z"
  },
  "nodes" : {
    "V-Spi0AyRZ6ZvKbaI3691w" : {
      "node_name" : "Susan Storm",
      "node_attributes" : {
        "bar" : "baz"
      },
      "final_decision" : "NO",
      "weight" : 0.06666675,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    },
    "Qc6VL8c5RWaw1qXZ0Rg57g" : {
      "node_name" : "Slipstream",
      "node_attributes" : {
        "bar" : "baz",
        "foo" : "bar"
      },
      "final_decision" : "NO",
      "weight" : -1.3833332,
      "decisions" : [ {
        "decider" : "same_shard",
        "decision" : "NO",
        "explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
      } ]
    },
    "PzdyMZGXQdGhqTJHF_hGgA" : {
      "node_name" : "The Symbiote",
      "node_attributes" : { },
      "final_decision" : "NO",
      "weight" : 2.3166666,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    }
  }
}
```
And when the shard *is* assigned, the output looks like:
```
{
  "shard" : {
    "index" : "only-foo",
    "index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
    "id" : 0,
    "primary" : true
  },
  "assigned" : true,
  "assigned_node_id" : "Qc6VL8c5RWaw1qXZ0Rg57g",
  "nodes" : {
    "V-Spi0AyRZ6ZvKbaI3691w" : {
      "node_name" : "Susan Storm",
      "node_attributes" : {
        "bar" : "baz"
      },
      "final_decision" : "NO",
      "weight" : 1.4499999,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    },
    "Qc6VL8c5RWaw1qXZ0Rg57g" : {
      "node_name" : "Slipstream",
      "node_attributes" : {
        "bar" : "baz",
        "foo" : "bar"
      },
      "final_decision" : "CURRENTLY_ASSIGNED",
      "weight" : 0.0,
      "decisions" : [ {
        "decider" : "same_shard",
        "decision" : "NO",
        "explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
      } ]
    },
    "PzdyMZGXQdGhqTJHF_hGgA" : {
      "node_name" : "The Symbiote",
      "node_attributes" : { },
      "final_decision" : "NO",
      "weight" : 3.6999998,
      "decisions" : [ {
        "decider" : "filter",
        "decision" : "NO",
        "explanation" : "node does not match index include filters [foo:\"bar\"]"
      } ]
    }
  }
}
```
Only "NO" decisions are returned by default, but all decisions can be
shown by specifying the `?include_yes_decisions=true` parameter in the
request.
Resolves#14593
In #17198, we removed suggest transport action, which
used the `suggest` threadpool to execute requests. Now
`suggest` threadpool is unused and suggest requests are
executed on the `search` threadpool.
In 5.0 we don't allow index settings to be specified on the node level, i.e.
in yaml files or via command line arguments. This can cause problems during
upgrade if this was used extensively. For instance, if analyzers were
specified on a node level this might cause the index to be closed when
imported (see #17187). In such a case all indices relying on this
must be updated via `PUT /${index}/_settings`. Yet, this API has slightly
different semantics since it overrides existing settings. To make this less
painful, this change adds a `preserve_existing` parameter on that API to ensure
we have the same semantics as if the setting was applied on the node level.
This change also adds a better error message and a change to the migration guide
to ensure upgrades are smooth if index settings are specified on the node level.
If an index setting is detected, this change fails the node startup and prints a message
like this:
```
*************************************************************************************
Found index level settings on node level configuration.
Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments. In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgrade. Indices created in the future should use index templates
to set default values.
Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_shards" : "1",
"index.query.default_field" : "main_field",
"index.translog.durability" : "async",
"index.ttl.disable_purge" : "true"
}'
*************************************************************************************
```
Also replaced the PercolatorQueryRegistry with the new PercolatorQueryCache.
The PercolatorFieldMapper stores the rewritten form of each percolator query's xcontent
in a binary doc values field. This makes sure that the query rewrite happens only during
indexing (some queries, for example, fetch shapes or terms in remote indices) and
speeds up the loading of the queries in the percolator query cache.
Because the percolator now works inside the search infrastructure a number of features
(sorting fields, pagination, fetch features) are available out of the box.
The following feature requests are automatically implemented via this refactoring:
Closes#10741 Closes#7297 Closes#13176 Closes#13978 Closes#11264 Closes#4317
This change adds the infrastructure to run the rest tests on a multi-node
cluster that uses 2 different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released to ensure the infrastructure works.
Relates to #14406 Closes#17072
This commit adds fields bytes_recovered and files_recovered to the cat
recovery API. These fields, respectively, indicate the total number of
bytes and files recovered. Additionally, for consistency, some totals
fields and translog recovery fields have been renamed.
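A sketch of selecting the new columns, assuming they are exposed under these header names:
```
curl -s 'localhost:9200/_cat/recovery?v&h=index,shard,stage,bytes_recovered,files_recovered'
```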
Closes#17064
The ingest stats include the following statistics:
* `ingest.total.count` - The total number of documents ingested during the lifetime of this node
* `ingest.total.time_in_millis` - The total time spent on ingest preprocessing documents during the lifetime of this node
* `ingest.total.current` - The total number of documents currently being ingested
* `ingest.total.failed` - The total number of ingest preprocessing operations that failed during the lifetime of this node
Also these stats are returned on a per pipeline basis.
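Assuming the stats are exposed under the `ingest` metric of the node stats API, they can be fetched like this:
```
curl -s 'localhost:9200/_nodes/stats/ingest?pretty'
```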
_wait_for_completion defaults to false. If set to true then the API will
wait for all the tasks that it finds to stop running before returning. You
can use the timeout parameter to prevent it from waiting forever. If you
don't set a timeout parameter it'll default to 30 seconds.
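A sketch, assuming the parameter is exposed as `wait_for_completion` on the list tasks endpoint:
```
curl -s 'localhost:9200/_tasks?wait_for_completion=true&timeout=10s'
```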
Also adds a log message to rest tests if any tasks overrun the test. This
is just a log (instead of failing the test) because lots of tasks are run
by the cluster on its own and they shouldn't cause the test to fail. Things
like fetching disk usage from the other nodes, for example.
Switches the request to getter/setter style methods as we're going that
way in the Elasticsearch code base. Reindex is all getter/setter style.
Closes#16906
Move some test methods from AnalyzeActionIT to RestAnalyzeActionTest
Allow a string `explain` param if it can be parsed
Fix wrong param name in rest-api-spec
Closes#16925
Internally the put pipeline API uses this information from the node info API to validate that all specified processors in a pipeline exist on all nodes in the cluster.
The cluster stats api now returns counts for each node role. The `master_data`, `master_only`, `data_only` and `client` fields have been removed from the response in favour of `master`, `data`, `ingest` and `coordinating_only`. The same node can have multiple roles, hence contribute to multiple roles counts. Every node is implicitly a coordinating node, so whenever a node has no explicit roles, it will be counted as coordinating only.
_cat/nodes used to return `c` for client node or `d` for data node as part of the node.role column. This commit changes it to return `m` for master eligible, `d` for data and/or `i` for ingest. A node with no explicit roles will be a coordinating only node and marked with `-`. A node can obviously have multiple roles. The master column has been adapted to return only whether a node is the current master (`*`) or not (`-`).
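A sketch of inspecting the new columns (header names as described above):
```
curl -s 'localhost:9200/_cat/nodes?v&h=name,node.role,master'
```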
Elasticsearch 5.0 doesn't support indices with legacy checksums anymore.
The last time we wrote legacy checksums was in 1.3.0, which was already based
on lucene 4.9, which means that all files have CRC32 checksums.
All indices that Elasticsearch can read today must be written with
lucene version >= 4.8 anyway, so we can drop this layer of backwards
compatibility entirely.
Since we are close to upgrading to Lucene 6.0 we should get rid of this
in a more contained change than the lucene upgrade.
The `ingest_took` is separate from `took`, which keeps track of how much time is spent on indexing/deleting/updating.
The `ingest_took` is only visible in the rest response if at least one bulk item has ingest enabled.
`catch: param` is designed to catch errors generated by client-side validation logic when users don't supply valid parameters to an API request. This test though is testing the server-side validation of pipeline aggregations, and so a "param" catch is invalid. Instead we will just test for a parse_exception error type using a regex.
Expose http address in cat/nodes and cat/nodeattrs APIs
We expose a lot of information like IP address and port, but we never
expose the http address/ip:port in the CAT API. It's nice to have it
there too, since otherwise json parsing is required to get this information.
Elasticsearch should reject ids that are too long, to ensure a document
always remains retrievable for clients that impose a maximum URI length
Closes#16034
The `keyword` field is intended to replace `not_analyzed` string fields. It is
indexed and has doc values by default, and doesn't support enabling term
vectors.
Although it doesn't support setting an analyzer for now, there are plans for
it to support basic normalization in the future such as case folding.
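A sketch of mapping a `keyword` field; the index, type, and field names are made up:
```
curl -XPUT 'localhost:9200/my-index' -d '{
  "mappings": {
    "my-type": {
      "properties": {
        "tag": { "type": "keyword" }
      }
    }
  }
}'
```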
Only tasks that extend CancellableTask can be cancelled using this mechanism. If a cancellable task has children it can elect to cancel all child tasks as well. In this case a special ban parent request is sent to all nodes. This request does two things: 1) it prevents any tasks with the banned parent task from being started, and 2) it cancels all currently running tasks that have the banned task as a parent. The ban is lifted as soon as the coordinating node notifies all other nodes that the cancelled task has finished executing. If the coordinating node leaves the cluster before it has a chance to lift its bans, all bans set by this coordinating node are automatically removed.
As an option a task can elect to automatically cancel all child tasks if their parent task was running on a node that just left the cluster. This option makes sense for cancellable heavy tasks that have no side-effects and only return results to the coordinating node. With the coordinating node gone, it doesn't make sense to run such tasks any longer since their results will be most likely discarded.
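A sketch of cancelling tasks, assuming the cancel endpoint accepts an `actions` filter:
```
curl -XPOST 'localhost:9200/_tasks/_cancel?actions=*reindex'
```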
The cat API previously used the Content-Type header field for
determining the media type of the response. This is in opposition to the
HTTP spec which specifies the Accept header field for this purpose. This
commit replaces the use of the Content-Type header field with the Accept
header field in the cat API.
Closes#14421
This processor is useful when all elements of a json array need to be processed in the same way.
This avoids having to define a processor for each element in an array.
Also, it is very likely that the number of elements inside a json array is unknown.
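A sketch, assuming this is the `foreach` processor, that it takes a `field` plus a `processor` to apply to each element, and that `_value` refers to the current element; all of these names should be checked against the ingest docs:
```
curl -XPUT 'localhost:9200/_ingest/pipeline/array-pipeline' -d '{
  "processors": [
    {
      "foreach": {
        "field": "tags",
        "processor": {
          "uppercase": { "field": "_value" }
        }
      }
    }
  ]
}'
```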
Retrieving distributed DF for TermVectors is, besides its esoteric justification,
a very slow process and can cause serious load on the cluster. We also don't have nearly
enough testing for this stuff, and given the complexity we should remove it rather than carry it
around.
When an exception is thrown during pipeline creation within
REST calls (in put pipeline, and simulate), we now return a structured
error response to the user with details about which processor's
configuration is the cause of the issue, or which configuration property
is misconfigured, etc.
We used to have a disabled test around cluster put settings as it left cluster settings behind without a way to remove them. That has been fixed in the cluster put settings api, so the test can be re-enabled.
The search_after parameter provides a way to efficiently paginate from one page to the next. This parameter accepts an array of sort values; those values are then used by the searcher to sort the top hits, starting from the first document that is greater than the given sort values.
This parameter must be used in conjunction with the sort parameter; it must contain exactly the same number of values as the number of fields to sort on.
NOTE: A field with one unique value per document should be used as the last element of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined. The recommended way is to use the field `_uid`, which is certain to contain one unique value for each document.
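A sketch of paginating with search_after; the index, fields, and sort values are made up:
```
curl -XGET 'localhost:9200/my-index/_search' -d '{
  "size": 10,
  "sort": [
    { "date": "asc" },
    { "_uid": "desc" }
  ],
  "search_after": [1463538857, "doc#654323"]
}'
```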
Fixes#8192
Merge feature/ingest branch into master branch.
This adds the ingest feature to ES, which allows documents to be preprocessed before indexing on an ingest node.
By default a node is an ingest node. Documents are preprocessed via a pipeline. A pipeline consists
of one or more processors. Each processor makes one or more modifications to the document being processed.
There are many types of processors available out-of-the-box that are designed to make a specific change to a document being processed. In a cluster many pipelines can be configured via dedicated pipeline APIs. A new option on the bulk
and index APIs allows controlling which pipeline is picked for preprocessing. If no pipeline is specified then the ingest
feature is skipped and no preprocessing takes place.
Site plugins used to be used for things like kibana and marvel, but
there is no longer a need since kibana (and marvel as a kibana plugin)
uses node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
This change affects get alias, get aliases, as well as cat aliases. They all return closed indices too by default. get alias and get aliases also allow returning open indices only, through the `expand_wildcards` option (set it to `open`).
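A sketch, assuming the option is passed as a query string parameter on get alias:
```
curl -s 'localhost:9200/_alias/my-alias?expand_wildcards=open'
```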
Closes#14982
Warmers are now barely useful and will be removed in 3.0. Note that this only
removes the warmer API and query-based warmers. We still have warmers internally
for e.g. global ordinals.
Close#15607
* Added percolator field mapper that extracts the query terms and indexes these terms with the percolator query.
* At percolate time these extracted terms are used to query for the percolator queries that are likely to be evaluated. This can significantly cut down the time it takes to percolate, whereas before all percolator queries were evaluated against the document being percolated.
* Changes made to percolator queries are no longer immediately visible, a refresh needs to happen before the changes are visible.
* By default the percolate api only returns up to 10 matches instead of returning all matching percolator queries.
* Made percolate more modular, so that it is easier to add unit tests.
* Added unit tests for the percolator.
Closes#12664 Closes#13646
Adds a task manager class and enables all activities to register with the task manager. Currently, the immutable Transport*Action classes represent the activity itself, shared across all requests. This PR adds an additional structure, Task, that keeps track of currently running requests and can be used to communicate with these requests using TransportTaskAction.
Related to #15117
This adds the required changes/checks so that the build can run on
FreeBSD.
There are a few things that differ between FreeBSD and Linux:
- CPU probes return -1 for CPU usage
- `hot_threads` cannot be supported on FreeBSD
From OpenJDK's `os_bsd.cpp`:
```c++
bool os::is_thread_cpu_time_supported() {
#ifdef __APPLE__
return true;
#else
return false;
#endif
}
```
So this API now returns (for each FreeBSD node):
```
curl -s localhost:9200/_nodes/hot_threads
::: {Devil Hunter Gabriel}{q8OJnKCcQS6EB9fygU4R4g}{127.0.0.1}{127.0.0.1:9300}
hot_threads is not supported on FreeBSD
```
- multicast fails in native `join` method - known bug:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246
Which causes:
```
1> Caused by: java.net.SocketException: Invalid argument
1> at java.net.PlainDatagramSocketImpl.join(Native Method)
1> at java.net.AbstractPlainDatagramSocketImpl.join(AbstractPlainDatagramSocketImpl.java:179)
1> at java.net.MulticastSocket.joinGroup(MulticastSocket.java:323)
1> at org.elasticsearch.plugin.discovery.multicast.MulticastChannel$Plain.buildMulticastSocket(MulticastChannel.java:309)
```
So these tests are skipped on FreeBSD.
Resolves#15562
Do not load fields from _source when using the `fields` option.
Non-stored (non-existing) fields are ignored by the fields visitor when using the `fields` option.
Fixes#10783
Support * wildcard to retrieve stored fields when using the `fields` option.
Supported pattern styles are "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
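A sketch of the `fields` option with wildcard patterns; the index and field names are made up:
```
curl -XGET 'localhost:9200/my-index/_search' -d '{
  "fields": ["user*", "*_id"],
  "query": { "match_all": {} }
}'
```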
This adds support for arbitrary headers sent with each REST request, it
will allow us to test things like different xcontent-encoding (see
50_with_headers.yaml for what this looks like).
Headers are specified at the same level as `catch`, so a request would
look like:
```yaml
- do:
    headers:
        Content-Type: application/yaml
    get:
        index: test_1
        type: _all
        id: 1
```
This commit fixes a test bug in the cat shards REST test. In
particular, there was a race condition in the test that would cause the
test to sometimes fail. The race condition is that some of the shards
would go to state STARTED after the sync flush was issued. These shards
would (correctly) show up in the output as having state started but
without a sync_id. However, the expected output was written to only
look for shards that have state STARTED and a sync_id, or shards that
are still INITIALIZING or are UNASSIGNED and (of course) do not have a
sync_id. The best approach here is to just simplify the test.
The completion suggester provides auto-complete/search-as-you-type functionality.
This is a navigational feature to guide users to relevant results as they are typing, improving search precision.
It is not meant for spell correction or did-you-mean functionality like the term or phrase suggesters.
The completions are indexed as a weighted FST (finite state transducer) to provide fast Top N prefix-based
searches suitable for serving relevant results as a user types.
closes#10746
Similarly to what we did with the search api, we can now also move query parsing to the coordinating node for the validate query api. Given that the explain api is a single shard operation (compared to search, which is instead a broadcast operation), this doesn't change a lot in how the api works internally. The main benefit is that we can simplify the java api by requiring a structured query object to be provided rather than a bytes array that will get parsed on the data node. Previously if you specified a QueryBuilder it would be serialized in json format and would get reparsed on the data node, while now it doesn't go through parsing anymore (as expected), given that after the query-refactoring we are able to properly stream queries natively. Note that the WrapperQueryBuilder can be used from the java api to provide a query as a string; in that case the actual parsing of the inner query will happen on the data node.
Relates to #10217Closes#14384
This change removes the leftover pom files. A couple files were left for
reference, namely in qa tests that have not yet been migrated (vagrant
and multinode). The deb and rpm assemblies also still exist for
reference when finishing their setup in gradle.
See #13930
We have two types of parse methods for queries: one for the inner query, to be used once the parser is positioned within the query element, and one for the whole query source, including the query element that wraps the actual query.
With the search refactoring we ended up using the former in count, cat count and delete by query, whereas we should have used the latter. It ends up working properly given that we have a registered (deprecated) query called "query", which used to allow wrapping a filter into a query, but this has the following downsides:
1) prevents us from removing the deprecated "query" query
2) we end up supporting a top level query that is not wrapped within a query element (pre 1.0 syntax iirc that shouldn't be supported anymore)
This commit finally removes the "query" query and fixes the related parsing bugs. We also had some tests that were providing queries in the wrong format, those have been fixed too.
Closes#13326 Closes#14304
This adds an API for force merging lucene segments. The `/_optimize` API is now
deprecated and replaced by the `/_forcemerge` API, which has all the same flags
and action, just a different name.
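For example, assuming the familiar `max_num_segments` flag carries over unchanged:
```
curl -XPOST 'localhost:9200/my-index/_forcemerge?max_num_segments=1'
```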
Currently it's not possible to specify a timeout for nodes operations (such as node info, node stats, cluster stats and hot threads) via REST-based APIs.
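A sketch of what this change presumably enables, assuming the timeout is exposed as a `timeout` query parameter:
```
curl -s 'localhost:9200/_nodes/stats?timeout=10s'
```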
The `_create` API is handy way to specify an index operation should only be done if the document doesn't exist. This is currently implemented in explicit code paths all the way down to the engine. However, conceptually this is no different than any other versioned operation - instead of requiring a document is on a specific version, we require it to be deleted (or non-existent). This PR removes Engine.Create in favor of a slight extension in the VersionType logic.
There are however a couple of side effects:
- DocumentAlreadyExistsException is removed and VersionConflictException is used instead (with an improved error message)
- Update will reject version parameters if the upsert option is used (it doesn't compute anyway).
- Translog.Create is also removed in favor of Translog.Index (that's OK because their binary format was the same, so we can just read Translog.Index from the translog file)
Closes#13955
This commit removes all the opaque bytes for extra_source and template_source.
Instead, source and extra_source etc. are represented as a SearchSourceBuilder, which can
be modified in place and is updated with the content of the request parameters.
The template source is parsed and evaluated, which in turn replaces the actual source.
We moved a lot of repositories into elasticsearch, but in their new
location they retained their LICENSE.txt and NOTICE.txt files. These are
all the same, and having the license and notice at the root of the
repository should be sufficient.
Adds a node attribute to all test runs and uses the attribute to test
`_cat/nodeattrs`.
Note that it's quite possible to create an impressively slow regex while doing
this, so you have to be careful. See comment in commit for more if curious.
Closes#12558
detect_noop is pretty cheap and noop updates are comparatively expensive, so this
feels like a sensible default.
Also had to do some testing and documentation around how _ttl works with
detect_noop.
Closes#11282
In order to have consistent deploys across several repositories,
we should deploy to sonatype first, then mirror those contents,
and then upload to s3.
This means the aws wagon is not needed anymore.
Adds an explicit description to the RPM package so it doesn't inherit the description from the POM.
Closes#12550
Also, modified descriptions for deb and rpm packages to be the same and to reference the documentation rather than listing features that are out of date.
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, as well as a check preventing such indices from being restored from a snapshot into an already running cluster.
Closes#11857
This change was added recently; it uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimezone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
Store information reports on which nodes shard copies exist, the shard
copy version, indicating how recent they are, and any exceptions
encountered while opening the shard index or from earlier engine failure.
closes#10952
Today we have an intermediate hierarchy for shard and index exceptions
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`
This test asserts that the first recovery result is of type SNAPSHOT.
This assertion might not be true depending on the rendering order if
a replica is recovered quickly enough. This commit disables replicas and
their recovery since it's not the purpose of this test.
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usage of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch but to simplify things I wanna
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we created a 2.0 branch.
Closes#12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes#11980
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
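A sketch, assuming `fields` is accepted in the bulk update payload alongside `doc`; the index, type, and field names are made up:
```
curl -XPOST 'localhost:9200/_bulk' -d '
{"update": {"_index": "my-index", "_type": "doc", "_id": "1"}}
{"doc": {"counter": 1}, "fields": ["counter"]}
'
```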
Closes#11527