In #17198, we removed the suggest transport action, which
used the `suggest` threadpool to execute requests. Now the
`suggest` threadpool is unused and suggest requests are
executed on the `search` threadpool.
Switching from using a list of BytesReference to a real SortBuilder list in
SearchSourceBuilder, TopHitsAggregatorBuilder and TopHitsAggregatorFactory.
Removing SortParseElement and related sort parsers.
We can do a better job of validating `buffer_size` and `chunk_size` for S3 repositories.
For example, we know that:
* `buffer_size` should be more than `5mb`
* `chunk_size` should be no more than `5tb`
* `buffer_size` should be lower than `chunk_size`, otherwise setting `buffer_size` is useless
For the record:
`chunk_size` is a snapshot setting, regardless of the repository implementation.
`buffer_size` is an S3 implementation-specific setting.
Say you are snapshotting a 500mb file. If you set `chunk_size` to `200mb`, the snapshot service will ask the S3 repository to snapshot three files with the following sizes:
* `200mb`
* `200mb`
* `100mb`
If you set `buffer_size` to `100mb` (AWS maximum size recommendation), the first `200mb` file will be uploaded to S3 using the multipart feature in two chunks; the workflow is basically the following (a rough sketch of this splitting logic follows the list):
* create the multipart request and get back an `id` from AWS S3 platform
* upload part1: `100mb`
* upload part2: `100mb`
* "commit" the full upload using the `id`.
Closes #17244.
We lost some accounting code in the translog recovery code during refactoring,
which triggers a very rare assertion. If we fail on a recovery target with an
illegal mapping update (which can happen if the cluster state is behind), we
fail to roll back the number of processed ops in that batch, and once we resume
the batch we trip an assertion that the stats are off.
This commit brings back the code lost in 8bc2332d9a
and improves the comment that explains why we need this rollback logic.
Now that string has been split into text and keyword, we use text as the
dynamic type when encountering string fields in a JSON document. However
this does not play well with existing templates that look like
```
{
  "mapping": {
    "index": "not_analyzed",
    "type": "{dynamic_type}"
  },
  "match": "*"
}
```
Since we want existing templates to keep working as much as possible in 5.0,
this commit adds a hack to dynamic templates so that elasticsearch will create
a keyword field if the `index` property is set and is either `no` or
`not_analyzed`, similarly to what was done in #16991.
While this will make upgrades easier, we still need to figure out a way to
allow users to create keyword fields when using dynamic types.
The fielddata settings in mappings have been refactored so that:
- text and string have a `fielddata` (boolean) setting that tells whether it is ok to load in-memory fielddata. It is true by default for now but the plan is to make it default to false for text fields.
- text and string have a `fielddata_frequency_filter` which contains the same thing as `fielddata.filter.frequency` used to (but validated at parsing time instead of being unchecked settings)
- regex fielddata filtering is not supported anymore and will be dropped from mappings automatically on upgrade
- text, string and _parent fields have an `eager_global_ordinals` (boolean) setting that tells whether to load global ordinals eagerly on refresh
- in-memory fielddata is not supported on keyword fields anymore at all
- the `fielddata` setting is not supported on fields other than text and string and will be dropped when upgrading if specified
Currently, both Gsub and Grok parse regex strings during
Pipeline creation. Thrown parsing exceptions were leaking out; this
commit wraps those exceptions in ElasticsearchParseExceptions.
This commit mocks the value of rlimit infinity in the max size virtual
memory check test. This is to avoid attempting to load the native C
library during the test on Windows which would lead to a permissions
violation (the native C library needs to be loaded before the security
manager is setup).
Archive cluster level settings if unknown or broken
We already archive index level settings if we find an unknown or invalid/broken
value for a setting on node startup. The same could potentially happen for persistent
cluster level settings if we remove a setting or if we add validation to a setting that
didn't exist in the past. To ensure that only valid settings are recovered into the cluster
state, we archive them (prefix them with `archive.`) and log a warning. Tools that check the
cluster settings can then warn users that they have broken settings in their cluster state that
got archived.
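For illustration only (a sketch of the idea, not the actual implementation), the archiving boils down to moving anything unknown or invalid under an `archive.` prefix instead of dropping it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

public class ArchiveSettingsSketch {

    // Return a copy of the persistent settings where unknown or invalid entries
    // are kept, but under an "archive." prefix, so nothing is silently lost.
    static Map<String, String> archiveBroken(Map<String, String> persistent,
                                             Set<String> knownSettings,
                                             Predicate<Map.Entry<String, String>> isValid) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> entry : persistent.entrySet()) {
            if (knownSettings.contains(entry.getKey()) && isValid.test(entry)) {
                result.put(entry.getKey(), entry.getValue());
            } else {
                System.err.println("warning: archiving unknown or broken setting [" + entry.getKey() + "]");
                result.put("archive." + entry.getKey(), entry.getValue());
            }
        }
        return result;
    }
}
```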
This commit adds a bootstrap check on Linux and OS X for the max size of
virtual memory (address space) to the user running the Elasticsearch
process.
Closes #16935
Currently if you run an `exists` query on an object, it will resolve all sub
fields and create a disjunction for all those fields. However the `_field_names`
mapper indexes paths for objects so we could query object paths directly.
I also changed the query parser to reject `exists` queries if the `_field_names`
field is disabled since it would be a big performance trap.
This commit sets up the default filesystem used during install plugins
tests. A hack is needed to handle the temporary directory because the
system property "java.io.tmpdir" will have been initialized to a value
that is sensible for the default filesystem, but not necessarily to a
value that makes sense for the mock filesystem in use during the
tests. This property is restored after each test.
For the refactoring of SortBuilders related to #10217, each SortBuilder needs to get a build()
method that produces a SortField according to the SortBuilder parameters on the shard.
This commit fixes string formatting issues in the error handling and
provides a better error message if malformed input is detected.
This commit also adds tests for both situations.
Relates to #17212
This is the last step to remove node level services from IndexShard.
This means that tests can now more easily create an IndexShard instance
without starting a node, and it removes the dependency between IndexShard and Client/ScriptService.
When a master is elected, it reaches out to all master nodes for their cluster state, selecting the one with the highest version. At the moment, we do another round to select the index metadata with the highest version as well. This is not needed - the election of a cluster state is enough - we should just use whatever indices are in it.
Closes #17233
In #17187, we upgrade index state after upgrading
index folder structure. As we don't have to write
the upgraded state in the old index folder structure,
we can clean up how we write the upgraded index state.
We can do better than just throwing an error when we don't find a
setting. It's actually trivial to leverage Lucene's slow LD StringDistance
to find possible candidates for a setting, detect misspellings and suggest
a possible setting.
This commit adds error messages like:
* `unknown setting [index.numbe_of_replica] did you mean [index.number_of_replicas]?`
rather than just reporting the setting as unknown.
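A minimal sketch of the idea (using a plain Levenshtein distance here instead of Lucene's `StringDistance`; names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class DidYouMeanSketch {

    // Plain Levenshtein edit distance between two strings.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) {
            prev[j] = j;
        }
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] tmp = prev;
            prev = curr;
            curr = tmp;
        }
        return prev[b.length()];
    }

    // Suggest the closest registered setting if it is reasonably close to the unknown one.
    static Optional<String> suggest(String unknown, List<String> knownSettings) {
        return knownSettings.stream()
            .min(Comparator.comparingInt((String candidate) -> levenshtein(unknown, candidate)))
            .filter(best -> levenshtein(unknown, best) <= 3);
    }

    public static void main(String[] args) {
        List<String> known = Arrays.asList("index.number_of_replicas", "index.number_of_shards");
        // prints Optional[index.number_of_replicas]
        System.out.println(suggest("index.numbe_of_replica", known));
    }
}
```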
Today if something is wrong with the IndexMetaData we detect it very
late, and most of the time when that happens we have already allocated the index
and get endless loops and full log files on data nodes. This change tries
to verify IndexService creation during initial state recovery on the master,
and if the recovery fails the index is imported as `closed` and won't be allocated
at all.
Closes #17187
In 5.0 we don't allow index settings to be specified on the node level, i.e.
in yaml files or via command line arguments. This can cause problems during an
upgrade if this was used extensively. For instance, if analyzers were
specified on a node level this might cause the index to be closed when
imported (see #17187). In such a case all indices relying on this
must be updated via `PUT /${index}/_settings`. Yet, this API has slightly
different semantics since it overrides existing settings. To make this less
painful, this change adds a `preserve_existing` parameter on that API to ensure
we have the same semantics as if the setting was applied on the node level.
This change also adds a better error message and a change to the migration guide
to ensure upgrades are smooth if index settings are specified on the node level.
If an index setting is detected, this change fails node startup and prints a message
like this:
```
*************************************************************************************
Found index level settings on node level configuration.
Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments. In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgrade. Indices created in the future should use index templates
to set default values.
Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_shards" : "1",
"index.query.default_field" : "main_field",
"index.translog.durability" : "async",
"index.ttl.disable_purge" : "true"
}'
*************************************************************************************
```
This commit adds a hack to detect when Jimfs throws an IAE where it
should be throwing an UOE. Namely, the method
FileSystemProvider#createDirectory should be throwing an UOE if an
attempt is made to set attributes that the filesystem does not support,
but instead Jimfs violates this and throws an IAE.
There are some implementations of StreamInput that implement the available method
and there are others that do not. This change makes the
available method abstract in the StreamInput class and implements the method where
it was not previously implemented.
This commit refactors the unit tests for installing plugins to test
against mock filesystems (as well as the native filesystem) for better
test coverage. This commit also adds tests that cover the POSIX
attributes handling when installing plugins (e.g., ensuring that the
plugins directory has the right permissions, the bin directory has
execute permissions, and the config directory has the same owner and
group as its parent).
We current have a ClusterService interface, implemented by InternalClusterService and a couple of test classes. Since the decoupling of the transport service and the cluster service, one can construct a ClusterService fairly easily, so we don't need this extra indirection.
Closes #17183
During the aggregation refactoring, the default value for `keyed` in the `percentiles` and `percentile_ranks` aggregations was inadvertently changed from `true` to `false`. This change reverts the defaults to the old (correct) value.
Also replaced the PercolatorQueryRegistry with the new PercolatorQueryCache.
The PercolatorFieldMapper stores the rewritten form of each percolator query's xcontent
in a binary doc values field. This makes sure that the query rewrite happens only during
indexing (some queries, for example, fetch shapes or terms in remote indices) and
speeds up the loading of the queries in the percolator query cache.
Because the percolator now works inside the search infrastructure, a number of features
(sorting fields, pagination, fetch features) are available out of the box.
The following feature requests are automatically implemented via this refactoring:
Closes #10741, #7297, #13176, #13978, #11264, #4317
This commit fixes the line-length checkstyle violations in
InstallPluginCommand.java and removes this from the list of files for
which the line-length check is suppressed.
Also adding checks for median SortMode on non-numeric field types
to FieldSortBuilder, removing some unused code and switching
GeoDistanceSortBuilder to using ParseField.
Provide better error message when an incompatible node connects to a node
We should give a better exception message when an incompatible node connects
and we receive a message. This commit adds a clear exception based on the
protocol version received instead of throwing cryptic messages about a not fully read
buffer etc.
Relates to #17090
The previous method sorted first by _score, then _uid. In certain situations, this allowed
floating point errors to slightly alter the sort order, causing test failure.
We only sort on _uid now, which should be deterministic and allow comparison of ten
documents. Not quite as useful, but less fragile and we still check to make sure num hits
and max score are identical.
Closes #17164
If this query doesn't take the child type into account then it can match other
child document types pointing to the same parent type and that have the same id too.
Today we allow setting all kinds of index level settings on the node level, which
is error prone and difficult to get right in a consistent manner.
For instance, if some analyzers are set up in a yaml config file, some nodes might
not have these analyzers and then index creation fails.
Nevertheless, this change allows some selected settings to be specified on a node level,
for instance:
* `index.codec`, which is used in a hot/cold node architecture and whose value is really per node or per index
* `index.store.fs.fs_lock`, which is also dependent on the filesystem a node uses
All other index level settings must be specified on the index level. For existing clusters the index must be closed
and all settings must be updated via the API on each of the indices.
Closes #16799
This commit fixes an OOM error that happens when the XContentParser.readList() method is asked to parse a single value instead of an array. It fixes the UpdateRequest parsing as well as removes some leniency in the readList() method so that it expects to be in an array before parsing values.
Closes #15338
Now that we have #16870, we can enable the request cache by default. The caching can still be disabled on a per request basis and can still be disabled in the settings; only the default value has changed. For now this is done regardless of whether the shard is active or inactive.
Closes #17134
This change adds a rewrite phase to the queries on the shard before they are assessed for caching or executed. This allows the opportunity to rewrite queries as faster running simpler queries based on attributes known to only the shard itself. The first query to implement this is the RangeQueryBuilder which will rewrite to a MatchAllQueryBuilder if the range of terms on the shard is a subset of the query and rewrites to a MatchNoneQueryBuilder if the range of terms on the shard is completely outside the query.
Currently, global and index level state format type can be configured through gateway.format.
This commit removes the ability to configure format type for these states.
Now we always store these states in SMILE format and ensure we always write them
to disk in the most compact way.
Adds support for scheduling commands to run at a later time on another
thread pool in the current thread's context:
```java
Runnable someCommand = () -> {System.err.println("Demo");};
someCommand = threadPool.getThreadContext().preserveContext(someCommand);
threadPool.schedule(timeValueMinutes(1), Names.GENERAL, someCommand);
```
This happens automatically for calls to `threadPool.execute` but `schedule`
and `scheduleWithFixedDelay` don't do that, presumably because scheduled
tasks are usually context-less. Rather than preserve the current context
on all scheduled tasks this just makes it possible to preserve it using
the syntax above.
To make this all work, it moves the Runnables that wrap the commands from
EsThreadPoolExecutor into ThreadContext.
This, or something like it, is required to support reindex throttling.
For the current refactoring of SortBuilders related to #10217,
each SortBuilder should get a build() method that produces a
SortField according to the SortBuilder parameters on the shard.
This change also slightly refactors the current parse method in
SortParseElement to extract an internal parse method that returns
a list of sort fields and only needs a QueryShardContext as input
instead of a full SearchContext. This allows using this internal
parse method for testing.
Without this commit fetching the status of a reindex from a node that isn't
coordinating the reindex will fail. This commit properly registers reindex's
status so this doesn't happen. To do so it moves all task status registration
into NetworkModule and creates a method to register other statuses which the
reindex plugin calls.
When specifying more than one `filter` in a `constant_score`
query, the last one will be the only one that will be
executed, overwriting previous filters. It should rather
raise a ParseException to notify the user that only one
filter query is accepted.
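A toy sketch of the stricter behaviour (a generic field loop standing in for the real query parser; names are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class ConstantScoreParseSketch {

    static class ParsingException extends RuntimeException {
        ParsingException(String message) {
            super(message);
        }
    }

    // Walk the field names of a constant_score object in order; previously a second
    // "filter" silently replaced the first, now it is rejected.
    static String parseConstantScore(List<String> fieldNames) {
        String filter = null;
        for (String field : fieldNames) {
            if ("filter".equals(field)) {
                if (filter != null) {
                    throw new ParsingException("[constant_score] accepts only one [filter] element");
                }
                filter = field; // stand-in for the parsed inner query
            }
        }
        return filter;
    }

    public static void main(String[] args) {
        try {
            parseConstantScore(Arrays.asList("filter", "filter"));
        } catch (ParsingException e) {
            System.out.println(e.getMessage()); // [constant_score] accepts only one [filter] element
        }
    }
}
```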
Closes #17126
Fix a potential parsing problem in GeoDistanceSortParser.
For an input like `{ [...], "coerce" = true, "ignore_malformed" = false }` the parser will
fail to parse the ignore_malformed boolean flag and will fall through to the last
else-branch where the boolean flag will be parsed as geo-hash and `ignore_malformed`
treated as field name.
The build currently uses the old maven support in gradle. This commit
switches to use the newer maven-publish plugin. This will allow future
changes, for example, easily publishing to artifactory.
An additional part of this change makes publishing of build-tools part
of the normal publishing, instead of requiring a separate upload step
from within buildSrc. That also sets us up for a follow up to enable
precommit checks on the buildSrc code itself.
This commit removes some dead code that resulted from removing the
ability for a field to have different names (after enforcing that fields
have the same full and index name).
Closes #17127
Test revealed a potential problem in the current GeoDistanceSortParser.
For an input like `{ [...], "coerce" = true, "ignore_malformed" = false }`
the parser will fail to parse the `ignore_malformed` boolean flag and
will fall through to the last else-branch where the boolean flag will be
parsed as geo-hash and `ignore_malformed` treated as field name.
Adding fix and test that will fail with the old parser code.
Try to renew sync ID if `flush=true` on forceMerge
Today we do a force flush which wipes the sync ID if there is one, which
can cause the loss of all benefits of the sync ID, i.e. fast recovery.
This commit adds a check to renew the sync ID if possible. The flush call
is now also not forced, since the IW will show pending changes if the forceMerge added new segments.
If we keep using force we will wipe the sync ID even if no renew was actually needed.
Closes #17019
Adding methods and tests to ScriptSortBuilder that make it implement NamedWriteable and add a fromXContent() method needed to read itself from xContent. Also changing the sortMode() setters in
FieldSortBuilder, GeoDistanceSortBuilder and ScriptSortBuilder to accept an enum instead of a String
value.
Relates to #15178
IndicesStore checks for `allocated elsewhere` for every shard not allocated on the local node
On each cluster-state update we check on the local node if we can delete some shards' content.
For this we linearly walk all shards and check if they are allocated and started on another node
and if we can delete them locally. If we can delete them locally, we go and ask other nodes if we can
delete them and then, if the shard IS active elsewhere, issue a state update task to delete it. Yet,
there is a bug in IndicesService#canDeleteShardContent which returns `true` even if that shard's
datapath doesn't exist on the node, which causes tons of unnecessary node to node communication and
as many state update tasks to be issued. This can have a large impact on the cluster state processing
speed.
**NOTE:** This only happens for shards that have at least one shard allocated on the node, i.e. if an `IndexService` exists.
Closes #17106
Add infrastructure to run REST tests on a multi-version cluster
This change adds the infrastructure to run the REST tests on a multi-node
cluster that uses two different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released, to ensure the infrastructure works.
Given the amount of problems this change already found, I think it's worth having this run with our test suite by default. The structure of this infra will likely change over time, but for now it's a step in the right direction. We will likely want to split it up into integTests and integBwcTests etc. so each plugin can have its own bwc tests, but that's left for future refactoring.
After another round of input from @cbuescher this adds a few more sanity
checks to request parsing.
In addition adds (back) support for the reverse option.
Today index names are often resolved lazily, only when they are really
needed. This can be problematic, especially when it gets to mapping updates
etc.: when a node sends a mapping update to the master but the index changes
for whatever reason while the request is in-flight, we would still apply the update,
since we use the name of the index to identify the index in the cluster state.
The problem is that index names can be reused, which happens in practice and sometimes
even in an automated way, making this problem realistic.
In this change we resolve the index, including its UUID, as early as possible in places
where changes to the cluster state are possible. For instance, mapping updates on a node use a
concrete index rather than its name, and the master will fail the mapping update iff
the index can't be found by its <name, uuid> tuple.
Closes #17048
Changes:
- no more option to configure eager/lazy loading of the norms (useless now that norms are disk-based)
- only the `string`, `text` and `keyword` fields support the `norms` setting
- the `norms` setting takes a boolean that decides whether norms should be stored in the index, but old options are still supported to give users time to upgrade
- setting a `boost` no longer implicitly enables norms (for new indices only, this is still needed for old indices)
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to startup if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.
Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
To make APIs' output easier to read we are suppressing stack traces (#12991) unless explicitly requested by setting `error_trace=true` on the request. To compensate, we are logging the stacktrace into the logs so people can look it up even if `error_trace` wasn't enabled. Currently we do so using the `INFO` level, which can be verbose if an API is called repeatedly by some automation. For example, if someone tries to read from an index that doesn't exist we will respond with a 404 exception and log under INFO every time. We should reduce the level to `DEBUG` as we do with other API driven errors. Internal errors (rest codes >=500) are logged as WARN.
Closes #16627
This change adds the infrastructure to run the REST tests on a multi-node
cluster that uses two different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.
Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released, to ensure the infrastructure works.
Relates to #14406. Closes #17072
When unzipping a plugin zip, the zip entries are resolved relative to
the directory being unzipped into. However, there are currently no
checks that the entry name was not absolute, or relatively points
outside of the plugin dir. This change adds a check for those two cases.
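The guard itself is the usual check that a resolved entry stays inside the target directory; a hedged sketch in plain Java (the real code operates on the plugin zip; names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ZipEntryGuardSketch {

    // Resolve a zip entry name against the plugin directory and reject entries that
    // are absolute or escape the directory via "..".
    static Path resolveEntry(Path pluginDir, String entryName) throws IOException {
        Path resolved = pluginDir.resolve(entryName).normalize();
        if (!resolved.startsWith(pluginDir.normalize())) {
            throw new IOException("zip entry [" + entryName + "] resolves outside of the plugin directory");
        }
        return resolved;
    }

    public static void main(String[] args) throws IOException {
        Path pluginDir = Paths.get("/tmp/plugins/example");
        System.out.println(resolveEntry(pluginDir, "bin/script.sh")); // ok
        try {
            resolveEntry(pluginDir, "../../etc/passwd");
        } catch (IOException e) {
            System.out.println(e.getMessage()); // rejected
        }
    }
}
```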
This adds methods and tests to ScriptSortBuilder that
make it implement NamedWriteable and adds the fromXContent
method needed to read itself from xContent.
Test failures showed problems with passing down
the same collate parameter map reference from the
phrase suggestion builder to the context.
This changes the collate parameter setters to
make a shallow copy of the map passed in.
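The fix amounts to taking a defensive copy instead of holding on to the caller's map; roughly (a sketch, not the exact builder code):

```java
import java.util.HashMap;
import java.util.Map;

public class CollateParamsSketch {

    private Map<String, Object> collateParams;

    // Take a shallow copy so later modifications of the caller's map do not leak
    // into this builder, and modifications made downstream do not leak back out.
    public CollateParamsSketch collateParams(Map<String, Object> params) {
        this.collateParams = params == null ? null : new HashMap<>(params);
        return this;
    }

    public Map<String, Object> collateParams() {
        return collateParams;
    }
}
```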
This commit adds fields bytes_recovered and files_recovered to the cat
recovery API. These fields, respectively, indicate the total number of
bytes and files recovered. Additionally, for consistency, some totals
fields and translog recovery fields have been renamed.
Closes #17064
This tries to remove friction to upgrade to 5.0 that would be caused by mapping
changes:
- old ways to specify mapping settings (e.g. `store: yes` instead of `store: true`) will still work, but a deprecation warning will be logged (see the sketch below)
- string mappings that only use the most common options will be upgraded automatically to text/keyword
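For example, accepting the old `yes`/`no` style booleans while logging a deprecation warning could look roughly like this (a sketch under assumed names, not the actual mapper code):

```java
public class LenientBooleanSketch {

    // Accept old-style mapping booleans ("yes", "no", "0", "1", "off", ...) but warn,
    // so pre-5.0 mappings keep working while users migrate to true/false.
    static boolean lenientBooleanValue(String fieldName, Object value) {
        if (value instanceof Boolean) {
            return (Boolean) value;
        }
        String text = String.valueOf(value);
        if (text.equals("true") || text.equals("false")) {
            return Boolean.parseBoolean(text);
        }
        boolean result = !(text.equals("no") || text.equals("0") || text.equals("off"));
        System.err.println("deprecation: expected a boolean for [" + fieldName + "] but got [" + text + "]");
        return result;
    }

    public static void main(String[] args) {
        System.out.println(lenientBooleanValue("store", "yes")); // true, with a deprecation warning
    }
}
```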
The change to move dynamic mapping handling to the end of document
parsing has an edge case which can cause dynamic mappings to fail
document parsing. If field a.b is added as part of the root update,
followed by a.c.d, then we need to expand the mappers on the stack,
since a is hidden inside the root update which exists on the stack.
This change adds a test for this case, as well as tries to better
document how the logic works for building up the stack before adding a
dynamic mapper.
Sequence of events that lead to the NPE:
- avg metric returns NaN for buckets
- Movavg skips NaN or null buckets, and simply re-uses the existing bucket (e.g. doesn't add
a 'movavg' field)
- Derivative references Movavg, the bucket resolution returns null because Movavg wasn't added
to the bucket, NPE when trying to subtract null values
The ingest stats include the following statistics:
* `ingest.total.count` - The total number of documents ingested during the lifetime of this node
* `ingest.total.time_in_millis` - The total time spent on ingest preprocessing documents during the lifetime of this node
* `ingest.total.current` - The total number of documents currently being ingested
* `ingest.total.failed` - The total number of ingest preprocessing operations that failed during the lifetime of this node
These stats are also returned on a per pipeline basis.
Currently, the cluster service is tightly coupled to the transport service by both managing node connections and requiring the bound address in order to create the local disco node. This commit introduces a new NodeConnectionsService which is in charge of node connection management and makes it possible to remove all network related calls from the cluster service. The local DiscoveryNode is now created by DiscoveryNodeService and is set on both the cluster service and the transport service during node startup.
Closes #16788, #16872
Currently all SortBuilder implementations have their separate order
field. This PR moves this up to SortBuilder, together with setter
and getter and makes sure the default is set to SortOrder.ASC
except for `_score` sorting where the default is SortOrder.DESC.
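Conceptually the shared state looks roughly like this (a sketch, not the actual class hierarchy):

```java
public abstract class SortBuilderSketch<T extends SortBuilderSketch<T>> {

    public enum SortOrder { ASC, DESC }

    // Shared order field with a default of ASC; a score sort would flip this to DESC
    // in its own constructor.
    protected SortOrder order = SortOrder.ASC;

    @SuppressWarnings("unchecked")
    public T order(SortOrder order) {
        if (order == null) {
            throw new NullPointerException("sort order cannot be null");
        }
        this.order = order;
        return (T) this;
    }

    public SortOrder order() {
        return order;
    }
}
```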
This commit updates the documentation for GeoPointField by removing all references to the coerce and doc_values parameters. DocValues are enabled in lucene GeoPointField by default (required for boundary filtering). The QueryBuilders are updated to automatically normalize points (ignoring the coerce parameter) for any index created onOrAfter version 2.2.
Use index UUID to lookup indices on IndicesService
Today we use the index name to lookup index instances on the IndicesService,
which applies to search requests but also to index deletion etc. This commit
moves the interface to expect an Index instance, which is a <name, uuid> tuple,
and looks up the index by uuid rather than by name. This prevents accidental modification
of the wrong index if an index is recreated, or searching from the wrong index in such a case.
Accessing an index that has the same name but a different UUID will now result in an IndexNotFoundException.
Closes #17001
This commit adds a guard against an old cluster state that arrives out
of order from the last seen cluster state from the current master from
polluting the pending cluster states queue. Without this guard, such a
state can end up stuck in the pending states queue.
Occasionally the .geohash suffix in Geo{Distance|DistanceRange}Query would conflict with a mapping that defines a sub-field by the same name. This occurs often with nested and multi-fields, where a mapping defines a geo_point sub-field using the field name "geohash". Since the QueryParser already handles parsing geohash encoded geopoints without requiring the ".geohash" suffix, the suffix parsing can be removed altogether.
This commit removes the .geohash suffix parsing, adds explicit test coverage for the nested query use-case, and adds random distance queries to the nested query test suite.
In particular, this test ensures we don't restart the master node until
we know the index deletion has taken effect on master and the master
eligible nodes.
Closes #16917, #16890
This change makes ScoreSortBuilder implement NamedWriteable, adds
equals() and hashCode() and also implements parsing ScoreSortBuilder
back from xContent. This is needed for the ongoing Search refactoring.
This was lost in refactoring, even on the 2.x branch. The slow-log
is now per index, not per shard anymore, such that we don't add the
shard ID as the logger prefix. This commit adds back the index
name as part of the logging message, not as a prefix on the logger,
for better testability.
Closes #17025
_wait_for_completion defaults to false. If set to true then the API will
wait for all the tasks that it finds to stop running before returning. You
can use the timeout parameter to prevent it from waiting forever. If you
don't set a timeout parameter it'll default to 30 seconds.
Also adds a log message to rest tests if any tasks overrun the test. This
is just a log (instead of failing the test) because lots of tasks are run
by the cluster on its own and they shouldn't cause the test to fail. Things
like fetching disk usage from the other nodes, for example.
Switches the request to getter/setter style methods as we're going that
way in the Elasticsearch code base. Reindex is all getter/setter style.
Closes #16906
Currently the message stays in the `UnassignedInfo` for the shard,
however, it would be very useful to know the exact point (time-wise)
that the cancellation happened when diagnosing an issue.
Relates to debugging #16357
* master: (350 commits)
Note to configuration docs on number of threads
Reduce maximum number of threads in boostrap check
Limit generic thread pool
Remove NodeService injection to Discovery
Prevent closing index during snapshot restore
[TEST] Fix newline issue in PluginCliTests on Windows
ParseFieldMatcher should log when using deprecated settings. #16988
fix checkstyle error
Add test for the index_options on a keyword field. #16990
Analysis : Allow string explain param in JSON
Analysis : Allow string explain param in JSON
fix typo
Remove SNAPSHOT from versions in plugin descriptors
Add support for alpha versions
Enable unmap hack for java 9
Simplify mock scripts
Adding `time_zone` parameter to daterange-aggregation docs
Adding tests for `time_zone` parameter for date range aggregation
Added ingest info to node info API, which contains a list of available processors.
Remove bw compat from size mapper
...
Apparently lucene6 is way more picky with respect to corrupting files
that are not fsynced, which is why this test sometimes failed after the lucene6
upgrade.
This commit reduces the maximum number of threads required in the
bootstrap check. This limit can be reduced since the generic thread pool
is no longer unbounded.
Relates #17003
The generic thread pool was previously configured to be able to create
an unlimited number of threads. The thinking is that tasks that are
submitted to its work queue must execute and should not block waiting
for a worker. However, in cases of heavy load, this can lead to an
explosion in the number of threads; this can even lead to a feedback
loop that exacerbates the problem. What is more, this can even bump into
OS limits on the number of threads that can be created.
This commit limits the number of threads in the generic thread pool to
four times the bounded number of processors.
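In plain `java.util.concurrent` terms the difference is roughly the following (illustrative only; the real generic pool scales between a core and a maximum size, a fixed pool is just the simplest bounded stand-in):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GenericPoolSketch {

    public static void main(String[] args) {
        // The new cap: four times the (bounded) number of processors.
        int maxThreads = 4 * Runtime.getRuntime().availableProcessors();
        ExecutorService generic = Executors.newFixedThreadPool(maxThreads);

        // Excess tasks now wait in the queue instead of spawning ever more threads.
        for (int i = 0; i < 1000; i++) {
            generic.execute(() -> { /* simulated generic task */ });
        }
        generic.shutdown();
    }
}
```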
Relates #17003
This merge commit merges #16682, which adds the ability to define an index-global
default similarity but at the same time prevents overriding built-in similarities
for new indices. Old indices were able to do this in the past and should not
be punished, even though this is not possible anymore.
Closes #16594, #16682
This removes the need for accessing the SearchContext when parsing Sort elements
to queries. After applying the patch only a QueryShardContext is needed.
Relates to #15178
Move some test methods from AnalylzeActionIT to RestAnalyzeActionTest
Allow string explain param if it can parse
Fix wrong param name in rest-api-spec
Closes #16925
We removed leniency from version parsing, which caught problems with
-SNAPSHOT suffixes on plugin properties. This commit removes the -SNAPSHOT
from both the es and the extension version, and adds tests to ensure we can
parse older versions that allowed -SNAPSHOT in a BWC way.
Elasticsearch 5.0 will come with alpha versions, which is not supported
in the current version scheme. This commit adds support for alpha versions starting
with es 5.0.0 in a backwards compatible way.
Date range aggregations used to be unable to use a `time_zone` parameter, e.g. to be applied
in date math roundings like in `now/d` (see #10130 as an example). After the aggregation
refactoring, the time_zone parameter has been pulled up to ValuesSourceAggregatorBuilder
and can now be used in date range aggregations as well.
This change adds randomized time zone settings to the existing IT tests to verify that the
`time_zone` parameter is honored when calculating the bucket boundaries. Also moving
the DateRangeTests from module-groovy/messy back to core as DateRangeIT, sharing
common script mocks with DateHistogramIT and adding documentation for the
`time_zone` parameter in the date range aggregation docs.
Closes #10130
Internally the put pipeline API uses this information in the node info API to validate that all specified processors in a pipeline exist on all nodes in the cluster.
Closes #16964
Squashed commit of the following:
commit a23f9d2d29220991aa498214530753d7a5a148c6
Merge: eec9c4e 0b0a251
Author: Robert Muir <rmuir@apache.org>
Date: Mon Mar 7 04:12:02 2016 -0500
Merge branch 'master' into lucene6
commit eec9c4e5cd11e9c3e0b426f04894bb2a6dae4f21
Merge: bc67205 675d940
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 13:45:00 2016 -0500
Merge branch 'master' into lucene6
commit bc67205bdfe1526eae277ab7856fc050ecbdb7b2
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 09:56:31 2016 -0500
fix test bug
commit a60723b007ff12d97b1810cef473bd7b553a0327
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:35:35 2016 +0100
Fix SimpleValidateQueryIT to put braces around boosted terms
commit ae3a49d7ba7ced448d2a5262e5d8ec98671a9090
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:27:25 2016 +0100
fix multimatchquery
commit ae23fdb88a8f6d3fb7ba60fd1aaf3fd72d899aa5
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 15:20:49 2016 +0100
Rewrite DecayFunctionScoreIT to be independent of the similarity used
This test relied a lot on the term scoring and compared scores
that are dependent on the similarity. This commit changes the base query
to be a predictable constant score query.
commit 366c2d518c35d31251033f1b6f6a93f6e2ae327d
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 14:06:14 2016 +0100
Fix scoring in tests due to changes to idf calculation.
Lucene 6 uses a different default similarity as well as a different
way to calculate IDF. In contrast to older version lucene 6 uses docCount per field
to calculate the IDF not the # of docs in the index to overcome the sparse field
cases.
commit dac99fd64ac2fa71b8d8d106fe68825e574c49f8
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 08:21:57 2016 -0500
don't hardcoded expected termquery score
commit 6e9f340ba49ab10eed512df86d52a121aa775b0f
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 08:04:45 2016 -0500
suppress deprecation warning until migrated to points
commit 3ac8908424b3fdad44a90a4f7bdb3eff7efd077d
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:21:43 2016 -0500
Remove invalid test: all commits have IDs, and its illegal to do this.
commit c12976288124ad1a26467e7e848fb810548e7eab
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:06:14 2016 -0500
don't test with unsupported back compat
commit 18bbfe76128570bc70883bf91ff4c44c82d27817
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 07:02:18 2016 -0500
remove now invalid lucene 4 backcompat test
commit 7e730e572886f0ef2d3faba712e4256216ff01ec
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:58:52 2016 -0500
remove now invalid lucene 4 backwards test
commit 244d2ab6868ba5ac9e0bcde3c2833743751a25ec
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:47:23 2016 -0500
use 6.0 codec
commit 5f64d4a431a6fdaa1234adca23f154c2a1de8284
Author: Robert Muir <rmuir@apache.org>
Date: Fri Mar 4 06:43:08 2016 -0500
compile, javadocs, forbidden-apis, etc
commit 1f273cd62a7fe9ca8f8944acbbfc5cbdd3d81ccb
Merge: cd33921 29e3443
Author: Simon Willnauer <simonw@apache.org>
Date: Fri Mar 4 10:45:29 2016 +0100
Merge branch 'master' into lucene6
commit cd33921ac742ef9fb351012eff35f3c7dbda7264
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:58:37 2016 -0500
fix hunspell dictionary loading
commit c7fdbd837b01f7defe9cb1c24e2ec65604b0dc96
Merge: 4d4190f d8948ba
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:41:53 2016 -0500
Merge branch 'master' into lucene6
commit 4d4190fd82601aaafac6b8254ccb3edf218faa34
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:39:14 2016 -0500
remove nocommit
commit 77ca69e288b1a41aa9595c921ed166c272a00ea8
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:38:24 2016 -0500
clean up numericutils vs legacynumericutils
commit a466d696fbaad04b647ffbc0857a9439b583d0bf
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:32:43 2016 -0500
upgrade spatial4j
commit 5412c747a8cfe638bacedbc8233163cb75cc3dc5
Author: Robert Muir <rmuir@apache.org>
Date: Thu Mar 3 23:19:28 2016 -0500
move to 6.0.0-snapshot-8eada27
commit b32bfe924626b87e540692375ece09e7c2edb189
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:30:09 2016 +0100
Fix some test compile errors.
commit 6ccde35e9840b03c68d1a2cd47c7923a06edf64a
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:25:51 2016 +0100
Current Lucene version is 6.0.0.
commit f62e1015d931b4cc04c778298a8fa1ba65e97ad9
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 11:20:48 2016 +0100
Fix compile errors in NGramTokenFilterFactory.
commit 6837c6eabf96075f743649da9b9b52dd39611c58
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:50:59 2016 +0100
Fix the edge ngram tokenizer/filter.
commit ccd7f070de5efcdfbeb34b9555c65c4990bf1ba6
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:42:44 2016 +0100
The missing value is now accessible through a getter.
commit bd3b77f9b28e5b05daa3d49683a9922a6baf2963
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:41:51 2016 +0100
Remove IndexCacheableQuery.
commit 05f3091c347aeae80eeb16349ac51d2b53cf86f7
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:39:43 2016 +0100
Fix compilation of function_score queries.
commit 81cda79a2431ac78f56b0cc5a5765387f662d801
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Mar 3 10:35:02 2016 +0100
Fix compile errors in BlendedTermQuery.
commit 70994ce8dd1eca0b995870974a38e20f26f96a7b
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 23:33:03 2016 -0500
add bug ID
commit 29d4f1a71f36f646b5a6060bed3db019564a279d
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 21:02:32 2016 -0500
easy .store changes
commit 5e1a1e6fd665fa455e88d3a8987362fad5f44bb1
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 20:47:24 2016 -0500
cleanups mostly around boosting
commit 333a669ec6c305ada5645d13ed1da0e19ec1d053
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 20:27:56 2016 -0500
more simple fixes
commit bd5cd98a1e089c866b6b4a5e159400b110140ce6
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 19:49:38 2016 -0500
more easy fixes and removal of ancient cruft
commit a68f419ee47da5f9c9ce5b372f01d707e902474c
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 19:35:02 2016 -0500
cutover numerics
commit 4ca5dc1fa47dd5892db00899032133318fff3116
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:34:18 2016 -0500
fix some constants
commit 88710a17817086e477c6c021ec346d0534b7fb88
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:14:25 2016 -0500
Add spatial-extras jar as a core dependency
commit c8cd6726583e5ce3f546ed355d4eca037164a30d
Author: Robert Muir <rmuir@apache.org>
Date: Wed Mar 2 18:03:33 2016 -0500
update to lucene 6 jars
This commit simplifies and consolidates the two different
implementations of terminals used in tests. There is now a single
MockTerminal which captures output, and allows accessing as one large
string (with unix style \n as newlines), as well as configuring
input.
If the recovery throws an exception, we fail to notify the recovery
listener and to bubble up the uncaught exception. This commit uses
an AbstractRunnable that also catches rejected execution exceptions
etc. and notifies the listener accordingly.
ClusterModuleTests tests what ShardsAllocatorModuleIT tests without starting
a cluster. Unit tests should be preferred over IT tests anyway, and the instantiation
of the balanced shards allocator is tested with every other integration test.
This seems to be an error introduced in the refactoring around #16821,
where we now wait 30 seconds by default if the node already joined
a cluster and got a master. This can slow down tests dramatically,
especially on slow boxes and notebooks.
Closes #16956
No changes were really needed in our test infra as it didn't use `node.client`. Yet it didn't take into account ingest nodes: what we used to call client nodes in InternalTestCluster were actually ingest only nodes, which now become coordinating only nodes.
Also renamed some methods to get rid of the node client terminology as much as possible in favour of coordinating only node.
The cluster stats api now returns counts for each node role. The `master_data`, `master_only`, `data_only` and `client` fields have been removed from the response in favour of `master`, `data`, `ingest` and `coordinating_only`. The same node can have multiple roles, hence contribute to multiple roles counts. Every node is implicitly a coordinating node, so whenever a node has no explicit roles, it will be counted as coordinating only.
_cat/nodes used to return `c` for client node or `d` for data node as part of the node.role column. This commit changes it to return `m` for master eligible, `d` for data and/or `i` for ingest. A node with no explicit roles will be a coordinating only node and marked with `-`. A node can obviously have multiple roles. The master column has been adapted to return only whether a node is the current master (`*`) or not (`-`).
A node can now have roles; Role is an enum made of master, data and ingest. A node with no roles is implicitly a coordinating only node. Roles are resolved once at construction time based on node attributes and never serialized. Moving DiscoveryNode to Writeable helps cleaning up the code, and making fields final allows to easily see where roles need to be initialized and do it in one single place.
As discussed in #16565, the node.client setting is an unnecessary shortcut to node.data: false and node.master: false. We have places where we treat nodes with node.client set to true differently compared to master false and data false, which is not correct. Also, with the addition of node.ingest or potentially new roles, it becomes confusing to figure out if a node client should support ingestion or not.
This commit removes the node.client setting in favour being explicit using node.master, node.data and node.ingest instead.
Decommissioning a node or applying a filter inclusion / exclusion can potentially lead to many shards that need to be moved to other nodes. This commit reuses the model across all
shard movements in an allocation round: It calculates the shard model once and simulates the application of all shards that can be moved on this model.
Closes #16926
Today we might run into a rejected execution exception when
we shut down the node while handling a transport exception. The
exception is run in a separate thread, but that thread might
not be able to execute due to the shutdown. Today we barf and fill
the logs with a large exception. This commit catches this exception
and logs it as debug logging instead.
Elasticsearch 5.0 doesn't support indices with legacy checksums anymore.
The last time we wrote legacy checksums was in 1.3.0, which was based
on lucene 4.9 already, which means that all files have CRC32 checksums.
All indices that Elasticsearch can read today must be written with
lucene version >= 4.8 anyway, so we can drop this layer of backwards
compatibility entirely.
Since we are close to upgrading to Lucene 6.0 we should get rid of this
in a more contained change than the lucene upgrade.
On shared FS / shadow replicas we rely on a lock retry if the lock has
not yet been released on a relocated primary. This commit adds this `hack`
for shared filesystems only.
Closes #16936
The field name is a required argument for all suggesters, but
it was specified via a field() setter in SuggestionBuilder so far.
This changes field name to being a mandatory constructor argument
and lets suggestion builders throw an error if field name is missing
or the empty string.
This commit modifies TransportBulkAction to use relative time instead of
absolute time when measuring how long a bulk request took to be
processed, and adds tests for this functionality.
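The underlying point is to measure the duration with a monotonic clock; roughly (a sketch, not the actual TransportBulkAction code):

```java
import java.util.concurrent.TimeUnit;

public class RelativeTimeSketch {

    public static void main(String[] args) throws InterruptedException {
        // System.currentTimeMillis() can jump (NTP adjustments, manual clock changes),
        // so "end - start" can even come out negative. System.nanoTime() is monotonic
        // and therefore safe for measuring how long the bulk request took.
        long startNanos = System.nanoTime();
        Thread.sleep(25); // stand-in for executing the bulk request
        long tookMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
        System.out.println("took [" + tookMillis + "ms]");
    }
}
```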
Closes #16916
`writeLockTimeout` has been removed in Lucene 6 completely and since we have
the shard locking mechanism now for quite a while we don't need this anymore.
Shards should only be allocated once all resources are released such that there
can't be any other shard holding the lock to that index in any sane situation.