This test should demonstrate that a single (larger)
request is processed but that one of multiple large concurrent requests
is rejected. This test broke too early under some circumstances in
network mode as the limit is quite low.
With this commit we reduce the size of the individual large
requests but issue more concurrent ones thus increasing stability
of this test.
We advertise in our documentation that byte units are written like `kb`, `mb`... But we actually only support the short notation `k` or `m`.
This commit adds support for the documented form and keeps the undocumented options to avoid any breaking change.
It also adds support for `micros`, `nanos` and `d` as time units in the `_cat` API.
Remove the support for `b` as a SizeValue unit. For raw numbers there is no unit suffix to parse after the number; for example, you don't write `10` as `10b`. We support options like `size=` in the `_cat` API, which means that we want to display raw data without a unit (singles).
Documentation updated accordingly.
Add test for the empty size option.
Fix missing TimeValues options for some cat APIs
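As a rough illustration of the byte-unit behaviour described above, here is a minimal sketch; it assumes `ByteSizeValue.parseBytesSizeValue` is the parsing path behind the `_cat` byte parameters, and the second argument is only a label used in error messages.
```java
// Both the documented form ("kb") and the legacy short form ("k") should parse
// to the same value after this change.
ByteSizeValue documented = ByteSizeValue.parseBytesSizeValue("10kb", "cat.bytes");
ByteSizeValue legacy = ByteSizeValue.parseBytesSizeValue("10k", "cat.bytes");
assert documented.equals(legacy); // 10 * 1024 bytes either way
```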
The queryShardContext we create during setup was sometimes
accessed directly and sometimes through a copy made by
the createShardContext() helper. The latter should be the default.
Also make sure that strict parsing is switched on via
IndexSettings in the test setup.
This change cleans up a few things in QueryParseContext and QueryShardContext
that make it hard to reason about the state of these context objects and are
thus error prone and should be simplified.
Currently the parser that is used in QueryParseContext can be set and reset at any
time from the outside, which makes reasoning about it hard. This change makes
the parser a mandatory constructor argument and removes the ability to later set a
different ParseFieldMatcher. If none is provided at construction time, the
one set inside the parser is used. If a ParseFieldMatcher is specified at
construction time, it is implicitly set on the parser that is being used.
The ParseFieldMatcher is only kept inside the parser.
The QueryShardContext also historically holds an inner QueryParseContext
(in the super class QueryRewriteContext) that is mainly used to hold the parser
and parseFieldMatcher. For that reason, the parser can also be reset, which leads
to the same problems as above. This change removes the QueryParseContext from
QueryRewriteContext and removes the ability to reset or retrieve it from the
QueryShardContext. Instead, `QueryRewriteContext#newParseContext(parser)` can be
used to create new parse contexts with the given parser from a shard context
when needed.
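A minimal sketch of the new flow; `newParseContext(parser)` is taken from the description above, while the parser creation, the `source` string and the `parseInnerQueryBuilder()` call are illustrative and their exact signatures may differ.
```java
// The parser is supplied up front and can no longer be swapped out later.
try (XContentParser parser = XContentFactory.xContent(source).createParser(source)) {
    QueryParseContext parseContext = queryShardContext.newParseContext(parser);
    QueryBuilder query = parseContext.parseInnerQueryBuilder();
    // ... use the parsed query builder
}
```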
Currently we are able to set a ParseFieldMatcher on XContentParsers,
mainly to conveniently carry it around to be available where the
actual parsing happens. This was just recently introduced together
with ObjectParser so that ObjectParser can make use of deprecation
logging and throwing errors while parsing.
This however is trappy because we create parsers in so many places in
the code and it is easy to forget setting the right ParseFieldMatcher.
Instead we should hold the ParseFieldMatcher only in the parse contexts
(e.g. QueryParseContext).
This PR removes the ParseFieldMatcher from XContentParser. ObjectParser
can still make use of it because we can make the otherwise unbounded
`context` type extend an interface that makes sure contexts used in
ObjectParser can supply a ParseFieldMatcher. Contexts in ObjectParser
are now no longer optional, but it is sufficient to pass in a small
lambda expression in places where no other context is available.
Relates to #17417
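A sketch of the resulting pattern, assuming the supplier interface looks roughly like this (`MyThing` is a hypothetical value class):
```java
// Contexts used with ObjectParser now have to be able to hand out a ParseFieldMatcher.
interface ParseFieldMatcherSupplier {
    ParseFieldMatcher getParseFieldMatcher();
}

ObjectParser<MyThing, ParseFieldMatcherSupplier> PARSER =
        new ObjectParser<>("my_thing", MyThing::new);

// Where no richer context is available, a small lambda is sufficient:
MyThing parsed = PARSER.parse(parser, () -> ParseFieldMatcher.STRICT);
```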
and remove its PROTOTYPE. This is the first aggregation builder that
serializes its targetValueType so ValuesSourceAggregatorBuilder had to
grow support for that.
Relates to #17085
The current api allows for choosing which "case" response json keys are
written in. This has the options of camelCase or underscore. camelCase
is going to be deprecated from the query apis. However, with the case
api, it is not necessary to deprecate, as users who were using it in 2.x
can transition completely on 2.x before upgrading by simply using
the underscore option.
This change removes the 'case' option from rest apis.
see #8988
During the bulk action a hierarchy of tasks gets created: bulk -> bulk[s] (coord node) -> bulk[s] (primary shard node) -> bulk[s][p] and bulk[s][r]. Due to a bug the first bulk[s] task didn't have the bulk task's id set as its parent id. This commit fixes this bug.
Handling of the current path when parsing a document is very sensitive.
This fixes a subtle bug in array parsing, where the path that was added
by parsing an array would not be cleared. It also adds a hard state
check at the end of parsing to ensure we ended with a clean path.
The doc parser uses a context object to store the state of parsing,
namely the existing mappings, new mappings, and the parsed document.
Currently this uses a threadlocal which is "reset" for each doc parsed.
However, the thread local doesn't actually save anything, since
resetting is constructing new objects. This change removes the thread
local, which also simplifies the mapper service as it now does not need
to be closeable.
In 2.0 we began restricting fields to not contain dots in their names.
This change adds back partial support for dots in field names. Specifically,
it allows indexing documents that contain dots in the field names, when
the correct corresponding mappers exist. For example, if mappings
contain an object field `foo`, and a subfield `bar`, then indexing a
document with `foo.bar` will work.
see #15951
This commit really reverts the inadvertent removal of allowing duplicate
calls to Node#start to be a no-op (which was mistakenly restored to
Node#stop in ddfa3a661510f25c2ce431dfd6fb86ac11eb8888).
This makes all numeric fields including `date`, `ip` and `token_count` use
points instead of the inverted index as a lookup structure. This is expected
to perform worse for exact queries, but faster for range queries. It also
requires less storage.
Notes about how the change works:
- Numeric mappers have been split into a legacy version that is essentially
the current mapper, and a new version that uses points, eg.
LegacyDateFieldMapper and DateFieldMapper.
- Since new and old fields have the same names, the decision about which one
to use is made based on the index creation version.
- If you try to force using a legacy field on a new index or a field that uses
points on an old index, you will get an exception.
- IP addresses now support IPv6 via Lucene's InetAddressPoint and store them
in SORTED_SET doc values using the same encoding (fixed length of 16 bytes
and sortable).
- The internal MappedFieldType that is stored by the new mappers does not have
any of the points-related properties set. Instead, it keeps setting the index
options when parsing the `index` property of mappings and does
`if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`
when parsing documents.
Known issues that won't fix:
- You can't use numeric fields in significant terms aggregations anymore since
this requires document frequencies, which points do not record.
- Term queries on numeric fields will now return constant scores instead of
giving better scores to the rare values.
Known issues that we could work around (in follow-up PRs, this one is too large
already):
- Range queries on `ip` addresses only work if both the lower and upper bounds
are inclusive (exclusive bounds are not exposed in Lucene). We could either
decide to implement it, or drop range support entirely and tell users to
query subnets using the CIDR notation instead.
- Since IP addresses now use a different representation for doc values,
aggregations will fail when running a terms aggregation on an ip field on a
list of indices that contains both pre-5.0 and 5.0 indices.
- The ip range aggregation does not work on the new ip field. We need to either
implement range aggs for SORTED_SET doc values or drop support for ip ranges
and tell users to use filters instead. #17700
Closes#16751
Closes#17007
Closes#11513
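For background on the points-based lookup described above, here is a sketch of the Lucene 6 query building that replaces inverted-index term ranges; the field names and bounds are made up, and the `InetAddressPoint` range bounds are inclusive (which is why exclusive `ip` ranges are listed as a known limitation).
```java
// Numeric ranges now resolve against the points index rather than the inverted index.
Query recent = LongPoint.newRangeQuery("timestamp", minMillis, maxMillis);

// IPv6-capable range over the 16-byte InetAddressPoint encoding (inclusive bounds only).
Query subnet = InetAddressPoint.newRangeQuery("ip",
        InetAddress.getByName("2001:db8::"),
        InetAddress.getByName("2001:db8::ffff"));
```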
This commit contains the following improvements/fixes:
1. Renaming method names and variables to better reflect the purpose
of the method and the semantics of the variable.
2. For deleting indexes, replace the closed parameter passed to the
delete index/store methods with obtaining the index's state from the
IndexSettings that is already passed in.
3. Added tests to the IndexWithShadowReplicaIT suite, some of which
show issues in the shadow replica delete process that are captured in
Github issue 17695.
Closes#17638
When we pass both XContentParser and QueryParseContext to a method this can be trappy because
we cannot make sure that the parser contained in the context and the parser passed as an argument
are the same.
This removes the parser argument from methods where we currently have both the parser and the parse
context as arguments and instead retrieves the parser from the context inside the method.
The change adds a new option to the geo_* queries: ignore_unmapped. If this option is set to false, the toQuery method on the QueryBuilder will throw an exception if the field specified in the query is unmapped. If the option is set to true, the toQuery method on the QueryBuilder will return a MatchNoDocsQuery. The default value is false so the queries work how they do today (throwing an exception on unmapped field)
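A hedged sketch of what using the new option might look like on one of the geo query builders; the field name and values are made up, and the setter name simply follows the option name so it may differ.
```java
QueryBuilder query = QueryBuilders.geoDistanceQuery("pin.location")
        .point(40.7, -74.0)
        .distance("200km")
        .ignoreUnmapped(true); // return MatchNoDocsQuery instead of failing on an unmapped field
```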
Its use tempted the creation of PROTOTYPEs. The only classes that
legitimately implement a readFrom method are those extending from
Diffable - such behavior is part of cluster state management and
out of scope for the PROTOTYPE cleanup.
When we pass down both parser and QueryParseContext to a method, we cannot
make sure that the parser contained in the context and the parser that is
passed as an argument have the same state. This removes the parser argument
from methods where we currently have both the parser and the parse context
as arguments and instead retrieves the parser from the context inside the
method.
Since OpenJDK virtual machines have G1 GC but do not have a java.vm.name
that contains HotSpot, this test fails on OpenJDK. Instead, the
java.vm.name condition should be expanded to include OpenJDK virtual
machines.
* Inner hits can now only be provided and prepared via setter in the nested, has_child and has_parent query.
* Also made `score_mode` a required constructor parameter.
* Moved has_child's min_child/max_children validation from doToQuery(...) to a setter.
This commit adds a simple test that JvmInfo is correctly able to
determine whether or not G1 GC is running. As the JvmInfo G1 GC logic
only applies to HotSpot, the test is constructed to do the same. The
test determines whether or not G1 GC is running by inspecting the test
JVM argument line.
The change adds a new option to the `nested`, `has_parent`, `has_child` and `parent_id` queries: `ignore_unmapped`. If this option is set to false, the `toQuery` method on the QueryBuilder will throw an exception if the type/path specified in the query is unmapped. If the option is set to true, the `toQuery` method on the QueryBuilder will return a MatchNoDocsQuery. The default value is `false` so the queries work how they do today (throwing an exception on unmapped paths/types)
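A similar sketch for the join queries named above; the `ScoreMode` constructor argument follows the inner-hits change earlier in this list, and the path/type names are made up, so the exact builder signatures may differ slightly.
```java
QueryBuilder nested = QueryBuilders.nestedQuery("comments", QueryBuilders.matchAllQuery(), ScoreMode.Avg)
        .ignoreUnmapped(true);
QueryBuilder children = QueryBuilders.hasChildQuery("comment", QueryBuilders.matchAllQuery(), ScoreMode.None)
        .ignoreUnmapped(true);
```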
With the previous breaker limit spurious failures on transport level
can happen in the test. With the new limit we ensure that the breaker
breaks due to a too large HTTP request instead.
Previously the sigma variable in the `extended_stats_bucket` pipeline aggregation was not being serialised in `ExtendedStatsBucketPipelineAggregator`. This PR fixes that.
It also corrects the initial value of sumOfSquares to be 0.
Closes#17701
#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.
#14259 was done to address #14057 where people reported that using allocation filtering caused many shards to be moved at once. This is however a non-issue - with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limiting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries we already had).
This commit removes the last remaining uses of JAVA_OPTS. Now searching
the codebase for the regex '(?<!ES_)JAVA_OPTS' only shows the uses
warning of its removal and the note about it in the migration docs.
* Tokens in the same position are grouped into a SynonymQuery.
* The default operator is applied on tokens in different positions.
* The wildcard is applied to the terms in the last position only.
Fixes#2183
This commit modifies the bootstrap checks to output all failing checks
instead of early-failing on the first check and then possibly failing
again after the user resolves the first failing check.
Closes#17474
This commit moves the execution of the bootstrap checks to after network
services are started. This gives us the flexibility to not merely check
if any of the network settings are set, but instead be smarter about it
and check if the network settings are set in a way that means that the
node will be communicating over an external network (either directly, or
via a proxy). As a bonus, executing the bootstrap checks in this way
enables us to have the node name in the logs!
Closes#17570
If ts=0, cat health disables the epoch and timestamp columns
Make timestamp and epoch constant Strings
Move timestamp and epoch to Table
Add rest-api test and tests
Closes#10109
With this commit we limit the size of all in-flight requests at the
HTTP level. The size is guarded by the same circuit breaker that
is also used at the transport level. Similarly, the size that is used
is the HTTP content length.
Relates #16011
With this commit we limit the size of all in-flight requests at the
transport level. The size is guarded by a circuit breaker and is
based on the content size of each request.
By default we use 100% of the available heap, meaning that the parent
circuit breaker will limit the maximum available size. This value
can be changed by adjusting the setting
`network.breaker.inflight_requests.limit`.
Relates #16011
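For illustration, a minimal sketch of overriding the new limit via node settings (the 25% value is arbitrary):
```java
Settings settings = Settings.builder()
        .put("network.breaker.inflight_requests.limit", "25%") // default is 100% of the heap
        .build();
```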
The cluster reroute API had a copy of NamedWriteableRegistry's behavior
inside it in the form of AllocationCommands#registerFactory and
AllocationCommands#lookupFactorySafe. There isn't a reason to duplicate
that effort. So this replaces all of AllocationCommand#Factory with
query-like registration in NetworkModule. Why NetworkModule? Because
the transport client needs it.
Makes Writeable not depend on StreamableReader. Keeps the default readFrom
implementation for backwards compatibility during the PROTOTYPE removal
but that'll go when those are gone.
Makes Diffable not extend StreamableReader. Instead Diffable has a readFrom
method. The PROTOTYPE removal will not get to cluster state for a long time
so that method will stay.
Now only a few things implement StreamableReader. They will be addressed
individually and then we'll remove StreamableReader.
When an index is recovered from disk its metadata is imported first and the master reaches out to the nodes looking for shards of that index. Sometimes those requests reach other nodes before the cluster state is processed by them. At the moment, that situation disables the checking of the store, which requires the metadata (indices with a custom path need to know where the data is). When corruption hits this means we may assign a shard to a node with a corrupted store, which will be caught later on but causes confusion. Instead we can try loading the metadata from disk in those cases.
Relates to #17630
When it comes to query parsing, either a field is tokenized and goes
through analysis with its search_analyzer, or it is not tokenized and the
raw string should be passed to termQuery(). Since numeric fields are not
tokenized but also declare a search analyzer, values would currently go through
analysis twice...
We have a couple places in the code base that assume that search is always done
on the inverted index. However with the new points API in Lucene 6, this is not
true anymore. This commit makes MappedFieldType.indexedValueForSearch protected
and fixes call sites to keep working for field types that use the inverted
index and either work differently or throw an exception otherwise. For instance,
it will still be possible to run cross_fields multi match queries on numeric
fields, but the score contributions will not be blended as well as before, and
significant terms aggregations on long terms will not be possible anymore since
points do not record document frequencies.
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueForSearch`. `valueForSearch` was used to post-process
values that come from stored fields (eg. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source but only in the case of GET calls: it would
not be called when performing source filtering on search requests.
`valueForSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
In both cases, what elasticsearch is really interested in is whether the field
is an analyzed string field. So it can just check `tokenized()` instead.
* upgrades numerics to new Point format
* updates geo api changes
* adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
* cuts over to ES GeoHashUtils
TL;DR This commit should not have any impact on terms aggs, it will just make
supporting ipv6 easier.
Currently only the numeric terms aggs propagate the DocValueFormat instance since
we also use numerics to represent dates or ip addresses. Since string terms aggs
are only used for text/keyword/string fields, they do not use the format and just
call toUtf8String(). However when we support ipv6, ip addresses as well will be
encoded in sorted doc values (just like strings) so we will need to use the
DocValueFormat to format the keys.
CBOR is natively supported in Elasticsearch and allows for byte arrays.
This means that by using CBOR the user can prevent base64 conversions
for the data being sent back and forth.
This PR adds support to extract data from a byte array in addition to
a string. This also required adding a ByteArrayValueSource class.
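A small sketch of the kind of usage this enables, assuming the standard XContent CBOR builder; the payload bytes are made up.
```java
byte[] payload = new byte[] { 0x01, 0x02, 0x03 };

// CBOR can carry the bytes natively, so no base64 round-trip is needed.
XContentBuilder doc = XContentFactory.cborBuilder()
        .startObject()
            .field("data", payload)
        .endObject();
```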
In #17133 we introduce request size limit handling and need a custom
channel implementation. In order to ensure we delegate all methods
it is better to have this channel implement an interface instead of
an abstract base class (so changes on the interface turn into
compile errors).
Relates #17133
Sometimes we get a test failure caused by search contexts left open.
The tests include a stack trace of the call that opened the context
but nothing else about the context. This adds more information about
the context that has been left open like what query it was running,
what shard it targeted, and whether or not it was a scroll.
Relates to #17582
When you implement an ingest factory which implements `Closeable`:
```java
public static final class Factory extends AbstractProcessorFactory<MyProcessor> implements Closeable {
@Override
public void close() throws IOException {
logger.debug("closing my processor factory");
}
}
```
The `close()` method is never called, which could lead to leaked threads when we close a node.
The `ProcessorsRegistry#close()` method exists though and seems to do the right job:
```java
@Override
public void close() throws IOException {
List<Closeable> closeables = new ArrayList<>();
for (Processor.Factory factory : processorFactories.values()) {
if (factory instanceof Closeable) {
closeables.add((Closeable) factory);
}
}
IOUtils.close(closeables);
}
```
But apparently this method is never called in `Node#stop()`.
Closes#17625.
We have both `Settings.settingsBuilder` and `Settings.builder` that do exactly
the same thing, so we should keep only one. I kept `Settings.builder` since it
has my preference but also it is the one that we use in examples of the Java API.
* Create one AllField field per field eligible for _all.
* Add a positionIncrementGap (with a size of 100, not configurable) between
each entry in order to distinguish fields when doing phrase query on _all.
This removes the inconsistent output of IP addresses. The format was parsing-unfriendly and made it hard
to reason about API responses, such as to _nodes.
With this change in place, it will never print the hostname as part of the default format, which has the
added benefit that it can be used consistently for URIs, which was not the case when the hostname might
appear at the front with "hostname/ip:port".
CORS headers and methods config parameters must be read as arrays. This
commit fixes the issue. It affects http.cors.allow-methods and
http.cors.allow-headers.
Fixes#17483
Aggregations need to perform instanceof calls on MappedFieldType instances in
order to know how they should be parsed or formatted. Instead, we should let
the field types provide a formatter/parser that can be used.
This change makes the root (/) rest api delegate to a transport action to get the
data for the response. This aligns this rest api with all of the other apis, which
delegate to one or more actions.
In doing this, unit tests were added to provide coverage of the RestMainAction
and the associated classes.
Because sigma is also used at reduce time, it should be passed to empty aggs.
Otherwise it causes bugs when an empty aggregation is used to perform the reduction,
as it would assume a sigma of zero.
Closes#17362
Today we fail if you try to add a field when an object from another type already
has the same name. However, we do NOT fail if you insert the field first and the
object afterwards. This leads to bad bugs since mappings are not necessarily
parsed in the same order at recovery time, so a mapping update could succeed and
then you would fail to reopen the index.
This commit removes a superfluous check when validating incoming cluster
states. The check in question prevents out-of-order cluster states from
the same master from entering the queue. However, such out-of-order
cluster states will be cleaned from the queue when a commit message for
that cluster state arrives or a commit message for any higher-versioned
cluster state arrives.
This removes PROTOTYPEs from ScoreFunctionsBuilders. To do so we rework
registration so it doesn't need PROTOTYPEs and lines up with the recent
changes to query registration.
This PR fixes a bug where an NPE was thrown when parsing a moving average pipeline aggregation request which did not specify a window size.
Closes#17516
This adds shuffling of xContent similar to #17521 to the aggregation and pipeline aggregation base test.
The additional shuffling uncovered that some aggregation builders internally store some properties in a
way that made the equals() testing fail when the xContent is shuffled.
For TopHitsAggregatorBuilder, the internal scriptFields parameter was changed to a set because the order
they appear in the xContent should not matter. For FiltersAggregatorBuilder, the internal list of KeyedFilters
is sorted by key now. As a side effect, the keys in the aggregation response are now not always in the same
order as the filters in the query, but sorted by key as well (unless they are anonymous).
DiscoveryNode is immutable yet we rebuild DiscoveryNode#toString on
every invocation. Most importantly, this just leads to unnecessary
allocations. This is most germane to ZenDiscovery and the processing of
cluster states where DiscoveryNode#toString is invoked when submitting
update tasks and processing cluster state updates.
Closes#17543
This fix ensures the filter and filters aggregation will not throw an NPE when `{}` is passed in as a filter. Instead `{}` is interpreted as a MatchAllDocsQuery.
Closes#17518
This commit adds a guard preventing old cluster states from entering
into the pending queue when we are following a master, and cleans old
cluster states from the pending queue when processing a commit.
* [TEST] check registered queries one by one in SearchModuleTests
* Switch to using ParseField to parse query names
If we have a deprecated query name, at the moment we don't have a way to log any deprecation warning nor fail when we are in strict mode. With this change we use ParseField, which will take care of the camel casing that we currently do manually (so that one day we can remove it more easily). This also means, that each query will have a unique preferred name, and all the other names are deprecated.
Terms query "in" synonym is now formally deprecated, as well as fuzzy_match, match_fuzzy, match_phrase and match_phrase_prefix for match query, mlt for more_like_this and geo_bbox for geo_bounding_box. All these will be removed in 6.0.
Every QueryParser holds now a ParseField constant called QUERY_NAME_FIELD that holds the name for it. The first name is the preferred one, all the others are deprecated. The first name is taken from the NAME constant already present in each query builder object, so that we somehow keep the serialization constant separated from ParseField. This change also allowed us to remove the names method from the QueryParser interface.
The `phrase` and `phrase_prefix` options in the `MatchQueryBuilder` have been deprecated in favour of using the new `MatchPhraseQueryBuilder` and `MatchPhrasePrefixQueryBuilder`. This is not a breaking change since `MatchQueryBuilder` still supports `phrase` and `phrase_prefix` but this option will be removed from the `MatchQueryBuilder` in the future (probably in 6.0)
Relates to https://github.com/elastic/elasticsearch/pull/17458#discussion_r58351998
Currently if thread cpu time is not supported (for instance, on
operating systems such as FreeBSD), an `IllegalStateException` is thrown
in `HotThreads#innerDetect()` that causes the API to return a useless
response.
This changes the check to be earlier, substituting a message for the
hot_threads output (in case some nodes *do* support it).
Additionally, if an exception is thrown during the hot_threads
generation it is now logged and the best effort output is returned.
When a shard is delayed, we now show output like:
```json
{
"shard" : {
"index" : "i",
"index_uuid" : "QzoKda9aQCG_hCaZQ18GEg",
"id" : 3,
"primary" : false
},
"assigned" : false,
"unassigned_info" : {
"reason" : "NODE_LEFT",
"at" : "2016-04-04T16:44:47.520Z",
"details" : "node_left[HyRLmMLxR5m_f58RKURApQ]"
},
"allocation_delay" : "59.9s",
"allocation_delay_ms" : 59910,
"remaining_delay" : "38.9s",
"remaining_delay_ms" : 38991,
"nodes" : {
"jKiyQcWFTkyp3htyyjxoCw" : {
"node_name" : "Landslide",
"node_attributes" : { },
"final_decision" : "YES",
"weight" : 1.0,
"decisions" : [ ]
},
"9bzF0SgoQh-G0F0sRW_qew" : {
"node_name" : "Caretaker",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 2.0,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [9bzF0SgoQh-G0F0sRW_qew] on which it already exists"
} ]
}
}
}
```
Where the new addition is this section:
```
"allocation_delay" : "59.9s",
"allocation_delay_ms" : 59910,
"remaining_delay" : "38.9s",
"remaining_delay_ms" : 38991,
```
Which shows the configured delay as well as the remaining delay until
the shard can be considered "assignable". This data is only shown if the
shard is unassigned.
Relates to #17372
Otherwise, when trying to calculate the amount of disk usage *after* the
shard has been allocated, it has incorrectly subtracted the shadow
replica size.
Resolves#17460
* master: (156 commits)
Make JNA calls optional
Added RPM metadata
Remove PROTOTYPE from MLT.Item
Remove PROTOTYPE from VersionType
Fix mistake in TopHits change
Remove PROTOTYPEs from highlighting
Clean up some log messages
Command line arguments with comma must be quoted on windows
Cluster Health should run on applied states, even if waitFor=0 #17440
ingest: make concrete processor impl final, like all other processor concrete impls.
Improve some test method comments.
Document task id's as string in the rest spec
Replace FieldStatsProvider with a method on MappedFieldType. #17334
cleanup test
Remove MathUtils. #17454
Addressing review comments
fix javadocs
Make TranslogConfig immutable and pass TranslogGeneration as a ctor arg to Translog
[reindex] Don't get rejected
Remove redundant commit - #openTranslog() already commits in that case
...
The introduction of max number of processes and max size virtual memory
checks inadvertently made JNA non-optional on OS X and Linux. This
commit wraps these calls in a check to see if JNA is available so that
JNA remains optional.
Closes#17492
Make TranslogConfig immutable and pass TranslogGeneration as a ctor arg to Translog
This mutable state is confusing and is easily missed. By default this is null and
wipes all translog. This commit makes the TranslogGeneration mandatory on the Translog
constructor and removes the mutable state.
We already protect against making decisions based on an inflight cluster state if someone asks for a waitFor rule (like wait for green). We should do the same for normal health checks as well (unless timeout is set to 0) as it would be trappy to debug failures when health says the cluster is in a certain state, but that state wasn't applied yet.
Closes#17440
FieldStatsProvider had to perform instanceof calls to properly handle dates or
ip addresses. By moving the logic to MappedFieldType, each field type can check
whether all values are within bounds in its own way.
Note that this commit only keeps rewriting support for dates, which are the only
field for which the rewriting mechanism is likely to help (because of time-based
indices).
This mutable state is confusing and is easily missed. By default this is null and
wipes all translog. This commit makes the TranslogGeneration mandatory on the Translog
constructor and removes the mutable state.
Move translog recovery outside of the engine
We changed the way we manage engine memory buffers to an
open model where each shard can essentially have infinite memory.
The indexing memory controller is responsible for moving memory to disk
when it's needed. Yet, this doesn't work today when we recover from store/translog
since the engine is not fully initialized such that IMC has no access to the engine,
neither to its memory buffer nor can it move data to disk.
The biggest issue here is that translog recovery happens inside the Engine constructor
which is problematic by itself since it might take minutes and uses a not yet fully
initialized engine to perform write operations on.
This change detaches the translog recovery and makes it the responsibility of the caller
to run it once the engine is fully constructed or skip it if not necessary.
Changes QueryParser into a @FunctionalInterface and provides a way to
register queries using that. Cuts match and function_score queries over
to that registration method as a proof of concept.
Once all queries have been cut over we can remove their PROTOTYPES.
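A hypothetical sketch of what functional-style registration can look like; the registration method's name and exact signature are assumptions based on the description above, and `QUERY_NAME_FIELD` is the ParseField constant mentioned later in these notes.
```java
// The query's stream constructor and fromXContent parser are registered as method references.
module.registerQuery(MatchQueryBuilder::new, MatchQueryBuilder::fromXContent,
        MatchQueryBuilder.QUERY_NAME_FIELD);
```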
Currently our testing of parsing query builders is limited to the
default order of the parameters that each builder's toXContent()
method produces. To better test real queries where the order of
parameters can be different, this change adds a helper
method to ESTestCase that takes a XContentBuilder and randomly
shuffles the order of the fields inside an object. This is
used in AbstractQueryTestCase, but it can be used in other similar
places in the future.
The refactoring of RescoreBuilder and QueryRescoreBuilder moved parsing
previously in RescoreParseElement into the builders. This removes the
left over parse element.
By default, tasks are grouped by node. However, task execution in elasticsearch can be quite complex and an individual task that runs on a coordinating node can have many subtasks running on other nodes in the cluster. This commit makes it possible to list tasks grouped by common parents instead of by node. When this option is enabled all subtasks are grouped under the coordinating node task that started all subtasks in the group. To group tasks by common parents, use the following syntax:
GET /tasks?group_by=parents
We changed the way we manage engine memory buffers to an
open model where each shard can essentially have infinite memory.
The indexing memory controller is responsible for moving memory to disk
when it's needed. Yet, this doesn't work today when we recover from store/translog
since the engine is not fully initialized such that IMC has no access to the engine,
neither to its memory buffer nor can it move data to disk.
The biggest issue here is that translog recovery happens inside the Engine constructor
which is problematic by itself since it might take minutes and uses a not yet fully
initialized engine to perform write operations on.
This change detaches the translog recovery and makes it the responsibility of the caller
to run it once the engine is fully constructed or skip it if not necessary.
This allows the user to update the reindex throttle on the fly, with changes
that speed up the throttling being applied immediately and changes that
slow down the throttling being applied during the next batch. This means
that if a user throttles reindex in such a way that it tries to sleep for
16 years and then realizes that they've done something wrong, they
can change the throttle and reindex will wake up again. We don't apply
slow downs immediately so we never get in danger of losing the scan context.
Also, if reindex is canceled while it is sleeping (which is how it honors throttling)
then it'll immediately wake up and cancel itself.
The old HighlightParseElement was only left because it was still
used in tests and some places in InnerHits. This PR removes it
and replaces the tests that checked that the original parse element
and the refactored highlighter code produce the same output with
new tests that compare builder input to the SearchContextHighlight
that is created.
This commit adds the new `action.search.shard_count.limit` setting which
configures the maximum number of shards that can be queried in a single search
request. It has a default value of 1000.
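A sketch of adjusting the new setting, assuming it is dynamically updatable at the cluster level (the value is arbitrary):
```java
client.admin().cluster().prepareUpdateSettings()
        .setTransientSettings(Settings.builder()
                .put("action.search.shard_count.limit", 2000))
        .get();
```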
Today we don't invoke `IndexingOperationListeners` when we are running
a recovery from store or replaying translog from remote. This is problematic since
the actual code path for indexing is different between normal indexing and recovery.
An important detail is left out on recovery: since we implemented the `IndexingMemoryController`
as an `IndexingOperationListener`, we might never flush the `IndexWriter` of a recovering shard,
which can lead to `OOMs` on node startup / recovery.
Both top level and inline inner hits are now covered by InnerHitBuilder.
Although there are differences between top level and inline inner hits,
they now make use of the same builder logic.
The parsing of top level inner hits slightly changed to be more readable.
Before the nested path or parent/child type had to be specified as an encapsulating
json object, now these settings are simple fields. Before this was required
to allow streaming parsing of inner hits without missing contextual information.
Once some issues are fixed with inline inner hits (around multi level hierarchy of inner hits),
top level inner hits will be deprecated and removed in the next major version.
This commit introduces SearchOperationListeners which allow hooking
into the search operation lifecycle and executing operations like slow-logs
and statistic collection in a transparent way. SearchOperationListeners
can be registered on the IndexModule just like IndexingOperationListeners.
The main consumers (slow log) have already been moved out of IndexService
into IndexModule which reduces the dependency on IndexService as well as
IndexShard and makes slowlogging transparent.
Closes#17398
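A minimal sketch of the hook described above; the callback shown is one plausible listener method and its exact name and signature may differ.
```java
indexModule.addSearchOperationListener(new SearchOperationListener() {
    @Override
    public void onQueryPhase(SearchContext searchContext, long tookInNanos) {
        // slow-log style reporting or statistics collection would go here
    }
});
```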
`text` fields will have fielddata disabled by default. Fielddata can still be
enabled on an existing index by setting `fielddata=true` in the mappings.
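For illustration, a mapping sketch (built with the Java XContent helpers) that opts a `text` field back in to fielddata; the field name is made up.
```java
XContentBuilder mapping = XContentFactory.jsonBuilder()
        .startObject()
            .startObject("properties")
                .startObject("body")
                    .field("type", "text")
                    .field("fielddata", true) // explicitly re-enable fielddata
                .endObject()
            .endObject()
        .endObject();
```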
Remove ability to specify arbitrary node attributes with `node.` prefix
Today the basic node settings like `node.data` and `node.master` can't really be fully validated
since we allow specifying custom user attributes on the node level. We have to, in order to
support that, add a wildcard setting for `node.*` to let these settings pass validation.
Instead we should require a more constrained prefix like `node.attr.` that defines a namespace
that is reserved for user attributes.
This commit adds a new namespace for attributes in `node.attr`.
Closes#17280
Today the basic node settings like `node.data` and `node.master` can't really be fully validated
since we allow specifying custom user attributes on the node level. We have to, in order to
support that, add a wildcard setting for `node.*` to let these settings pass validation.
Instead we should require a more constrained prefix like `node.attr.` that defines a namespace
that is reserved for user attributes.
This commit adds a new namespace for attributes in `node.attr`.
Closes#17280
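A small settings sketch of the new namespace (the attribute name and value are made up):
```java
Settings nodeSettings = Settings.builder()
        .put("node.data", true)            // built-in node setting, now fully validated
        .put("node.attr.rack", "rack-17")  // user attribute under the reserved node.attr.* namespace
        .build();
```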
This adds a `created` flag to `IndexingOperationListener#postIndex` to
easily differentiate between updates and creates on the listener level.
Closes#17333
Merges #17340
With the restriction on the total number of fields introduced in #17357 this test can fail if a large number of records is randomly selected for indexing.
This is to prevent mapping explosion when dynamic keys such as UUIDs are used as field names. `index.mapping.total_fields.limit` specifies the total number of fields an index can have. An exception will be thrown when the limit is reached. The default limit is 1000. A value of 0 means no limit. This setting is adjustable at runtime.
Closes#11443
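A sketch of raising the limit for an index (the value is arbitrary; the default is 1000 and 0 means no limit):
```java
Settings indexSettings = Settings.builder()
        .put("index.mapping.total_fields.limit", 2000)
        .build();
```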
The transport client was replacing the addresses of the nodes it connects to with the ones received from the liveness api rather than keeping the originally listed nodes. Added a test for that.
* master: (25 commits)
Replication operation that try to perform the primary phase on a replica should be retried
split long line in ConvertProcessorTests
add type conversion support to ConvertProcessor
percolator: Make explain use the two phase iterator
test: make sure we don't flush during indexing the percolator queries
Added experimental annotation to the update-by-query and reindex docs
Fixed bad YAML in reindex REST test: 50_routing.yaml
Update-by-query rest tests: fixed bad yaml and deleted a client-dependent test
Prevents exception being raised when ordering by an aggregation which wasn't collected
The reindex body is now required, which changes the exception thrown by the REST test
Docs: Included Nodes Task API and tidied reindex/update-by-query
Rename update-by-query REST tests to update_by_query
REST: The body is required in the reindex API
The source parameter should not be defined in the delete-by-query REST spec
Renamed update-by-query REST spec to update_by_query
Fix test bug in TypeQueryBuilderTests.
Add comment why it is safe to check the number of nested fields in MapperService.merge.
Automatically add a sub keyword field to string dynamic mappings. #17188
Type filters should not have a performance impact when there is a single type. #17350
Add API to explain why a shard is or isn't assigned
...
In extreme cases a local primary shard can be replaced with a replica while a replication request is in flight and the primary action is applied to the shard (via `acquirePrimaryOperationLock()`). #17044 changed the exception used in that method to something that isn't recognized as `TransportActions.isShardNotAvailableException`, causing the operation to fail immediately instead of retrying. This commit fixes this by checking the primary flag before
acquiring the lock. This is safe to do as an IndexShard will never be demoted once it has been a primary.
Closes#17358
If a terms aggregation was ordered by a metric nested in a single bucket aggregator which did not collect any documents (e.g. a filters aggregation which did not match in that term bucket) an ArrayIndexOutOfBoundsException would be thrown when the ordering code tried to retrieve the value for the metric. This change fixes all numeric metric aggregators so they return their default value when a bucket ordinal is requested which was not collected.
Closes#17225
For geo distance sort parsing: Disallow anything but
VALUE_STRING as geo hash, disallow resetting field
name for geo fields.
Also make error message for wrong lat/lon values more
verbose by including the affected field name.
If you add a string field to a document, it will have the following default
mapping:
```
{
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
```
Today, if you call /index/type/_search instead of /index/_search, elasticsearch
will automatically insert a type filter to only match documents of the given
type. This commit uses a new TypeQuery instead of a TermQuery for this filter,
which rewrites to a MatchAllDocsQuery when all documents of a shard match the
filtered type. This is helpful because BooleanQuery has a special rewrite rule
to remove MatchAllDocsQuery as FILTER clauses. So for instance if your query is
`+body:"quick fox" #_type:my_type`, it will be rewritten to
`+body:"quick fox" #*:*` which is then rewritten to `body:"quick fox"`.
This adds a new `/_cluster/allocation/explain` API that explains why a
shard can or cannot be allocated to nodes in the cluster. Additionally,
it will show where the master *desires* to put the shard, according to
the `ShardsAllocator`.
It looks like this:
```
GET /_cluster/allocation/explain?pretty
{
"index": "only-foo",
"shard": 0,
"primary": false
}
```
Though, you can optionally send an empty body, which means "explain the
allocation for the first unassigned shard you find".
The output when a shard is unassigned looks like this:
```
{
"shard" : {
"index" : "only-foo",
"index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
"id" : 0,
"primary" : false
},
"assigned" : false,
"unassigned_info" : {
"reason" : "INDEX_CREATED",
"at" : "2016-03-22T20:04:23.620Z"
},
"nodes" : {
"V-Spi0AyRZ6ZvKbaI3691w" : {
"node_name" : "Susan Storm",
"node_attributes" : {
"bar" : "baz"
},
"final_decision" : "NO",
"weight" : 0.06666675,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
},
"Qc6VL8c5RWaw1qXZ0Rg57g" : {
"node_name" : "Slipstream",
"node_attributes" : {
"bar" : "baz",
"foo" : "bar"
},
"final_decision" : "NO",
"weight" : -1.3833332,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
} ]
},
"PzdyMZGXQdGhqTJHF_hGgA" : {
"node_name" : "The Symbiote",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 2.3166666,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
}
}
}
```
And when the shard *is* assigned, the output looks like:
```
{
"shard" : {
"index" : "only-foo",
"index_uuid" : "KnW0-zELRs6PK84l0r38ZA",
"id" : 0,
"primary" : true
},
"assigned" : true,
"assigned_node_id" : "Qc6VL8c5RWaw1qXZ0Rg57g",
"nodes" : {
"V-Spi0AyRZ6ZvKbaI3691w" : {
"node_name" : "Susan Storm",
"node_attributes" : {
"bar" : "baz"
},
"final_decision" : "NO",
"weight" : 1.4499999,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
},
"Qc6VL8c5RWaw1qXZ0Rg57g" : {
"node_name" : "Slipstream",
"node_attributes" : {
"bar" : "baz",
"foo" : "bar"
},
"final_decision" : "CURRENTLY_ASSIGNED",
"weight" : 0.0,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [Qc6VL8c5RWaw1qXZ0Rg57g] on which it already exists"
} ]
},
"PzdyMZGXQdGhqTJHF_hGgA" : {
"node_name" : "The Symbiote",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 3.6999998,
"decisions" : [ {
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index include filters [foo:\"bar\"]"
} ]
}
}
}
```
Only "NO" decisions are returned by default, but all decisions can be
shown by specifying the `?include_yes_decisions=true` parameter in the
request.
Resolves#14593
Today we might run into a rejected execution exception when we shut down the node while handling a transport exception. The exception is run in a separate thread but that thread might not be able to execute due to the shutdown. Today we barf and fill the logs with a large exception. This commit catches this exception and logs it as debug logging instead.
Extends changes made in 8652cd8