Indexing ids in binary form should help with indexing speed, since we would
have to compare fewer bytes when sorting; it should help with memory usage of
the live version map, since keys will be shorter; and it might help with disk
usage, depending on how efficient the terms dictionary is at compressing
terms.
Since we can only expect base64 ids in the auto-generated case, this PR tries
to use an encoding that makes the binary id equal to the base64-decoded id in
the majority of cases (253 out of 256). It also specializes numeric ids, since
this seems to be common when content stored in Elasticsearch comes
from another database that uses e.g. auto-increment ids.
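A hedged sketch of the dispatch, for illustration only: the real encoding reserves a few marker bytes so the three cases can be told apart on read and verifies that the base64 decoding round-trips; `encodeId` and its bounds are made up here.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class IdEncodingSketch {
    // Illustrative only: numeric ids pack into a binary long,
    // auto-generated ids are URL-safe base64 and decode to fewer bytes,
    // and everything else falls back to raw UTF-8 bytes. The real
    // encoding also tags each case with a marker byte.
    static byte[] encodeId(String id) {
        if (id.isEmpty() == false && id.length() <= 18 && id.chars().allMatch(Character::isDigit)) {
            return ByteBuffer.allocate(Long.BYTES).putLong(Long.parseLong(id)).array();
        }
        try {
            return Base64.getUrlDecoder().decode(id);
        } catch (IllegalArgumentException notBase64) {
            return id.getBytes(StandardCharsets.UTF_8);
        }
    }
}
```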
Another option could be to require base64 ids all the time. It would make things
simpler but I'm not sure users would welcome this requirement.
This PR should bring some benefits, but I expect it to be mostly useful when
coupled with something like #24615.
Closes #18154
Adds a unit test that checks the TermSuggestionContext that results
from TermSuggestionBuilder#build against the values the original builder contains.
Transport profiles have unfortunately never been validated. Yet, it's very
easy to make a mistake when configuring profiles, and it will most likely stay
undetected since we don't validate the settings but allow almost everything
based on the wildcard in `transport.profiles.*`. This change removes the
settings-subset-based parsing of profiles and instead uses concrete affix settings
for the profiles, which makes it easier to fall back to higher-level settings since
the fallback settings are present when the profile setting is parsed. Previously, it was
unclear in the code which setting was used, i.e. whether the profile settings (with removed
prefixes) or the global node settings. There is no distinction anymore since we don't pull
prefix-based settings.
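To illustrate the difference, a minimal sketch of the validation idea in plain Java (not the actual Setting infrastructure; the suffix list and message are made up):

```java
import java.util.Map;
import java.util.Set;

// Validate profile settings against a fixed set of known keys instead of
// accepting anything under the transport.profiles.* wildcard.
class ProfileSettingsValidator {
    private static final Set<String> KNOWN_SUFFIXES =
        Set.of("port", "bind_host", "publish_host", "tcp.no_delay");

    static void validate(Map<String, String> settings) {
        for (String key : settings.keySet()) {
            if (key.startsWith("transport.profiles.") == false) {
                continue;
            }
            // key layout: transport.profiles.<name>.<suffix>
            String rest = key.substring("transport.profiles.".length());
            int dot = rest.indexOf('.');
            String suffix = dot < 0 ? "" : rest.substring(dot + 1);
            if (KNOWN_SUFFIXES.contains(suffix) == false) {
                throw new IllegalArgumentException("unknown profile setting [" + key + "]");
            }
        }
    }
}
```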
This change adds validation to the RemoteClusterConnection to ensure
we always use seed nodes from the same cluster. While we still allow an
arbitrary cluster alias, once we have connected to a cluster for the first time,
we always check against that initial cluster name when we execute a seed node handshake.
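A minimal sketch of the check, with assumed names (the real code lives in the handshake path of RemoteClusterConnection):

```java
import java.util.concurrent.atomic.AtomicReference;

class ClusterNameValidator {
    // the cluster name seen on the first successful seed-node handshake
    private final AtomicReference<String> remoteClusterName = new AtomicReference<>();

    void validate(String handshakeClusterName) {
        String expected = remoteClusterName.updateAndGet(
            current -> current == null ? handshakeClusterName : current);
        if (expected.equals(handshakeClusterName) == false) {
            throw new IllegalStateException("handshake failed: expected cluster name ["
                + expected + "] but got [" + handshakeClusterName + "]");
        }
    }
}
```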
sequence number data in Lucene commit points. Instead, the test
retrieves the _seq_no value from the commit point directly and converts
it to a Long value.
This change adds a basic unit test for the SuggestionSearchContext that is
created as output of SuggestionBuilder#build. The current test only adds checks
for the common fields (like text, prefix, fieldName, etc.).
Relates to #17118
* Refactor PathTrie and RestController to use a single trie for all methods
This changes `PathTrie` and `RestController` to use a single `PathTrie` for all
endpoints; it also allows retrieving an endpoint's supported HTTP methods more
easily. A sketch of the resulting handler holder follows the list of follow-up changes below.
This is a spin-off and prerequisite of #24437
* Use EnumSet instead of multiple if conditions
* Make MethodHandlers package-private and final
* Remove duplicate registerHandler method
* Remove public modifier
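The handler holder sketched with assumed shapes (Runnable stands in for the real RestHandler type):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.Set;

// One trie leaf holds the handlers for every HTTP method registered on a
// path, so a single PathTrie lookup answers both dispatch and "which
// methods are supported".
final class MethodHandlers {
    enum Method { GET, POST, PUT, DELETE, HEAD, OPTIONS }

    private final Map<Method, Runnable> handlers = new EnumMap<>(Method.class);

    MethodHandlers add(Method method, Runnable handler) {
        handlers.put(method, handler);
        return this;
    }

    Runnable handler(Method method) {
        return handlers.get(method);
    }

    Set<Method> supportedMethods() {
        return handlers.keySet();
    }
}
```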
Today, when we run out of disk, all kinds of crazy things can happen
and nodes become hard to maintain once out-of-disk is hit.
While we try to move shards away when we hit watermarks, this might not
be possible in many situations. Based on the discussion in #24299,
this change monitors disk utilization and adds a flood-stage watermark
that causes all indices allocated on a node hitting the flood-stage
mark to be switched read-only (with the option to be deleted). This allows users to react
to the low-disk situation while subsequent write requests are rejected. Users can switch
individual indices back to read-write once the situation is sorted out. There is no
automatic read-write switch once the node has enough space again; this requires
user interaction.
The flood-stage watermark is set to `95%` utilization by default.
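A minimal sketch of the threshold check (names assumed; the actual monitoring code also manages the per-index read-only block):

```java
class DiskThresholdSketch {
    // 95% used is the default flood-stage watermark.
    static final double FLOOD_STAGE = 0.95;

    // True once a node's disk usage crosses the flood stage, at which
    // point every index with a shard on that node is switched read-only.
    static boolean exceedsFloodStage(long totalBytes, long freeBytes) {
        double usedFraction = 1.0 - (double) freeBytes / (double) totalBytes;
        return usedFraction >= FLOOD_STAGE;
    }
}
```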
Closes #24299
This commit causes a replica to throw back its local checkpoint to the
global checkpoint when learning of a new primary through a replica
operation.
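In sketch form, with assumed names (the real logic sits in the replica's sequence-number bookkeeping):

```java
class CheckpointSketch {
    private long localCheckpoint;

    // On learning of a new primary through a replica operation, reset the
    // local checkpoint to the global checkpoint, the highest point the
    // new primary is known to agree on.
    synchronized void throwBackLocalCheckpoint(long globalCheckpoint) {
        assert globalCheckpoint <= localCheckpoint
            : "global checkpoint must not be ahead of the local checkpoint";
        localCheckpoint = globalCheckpoint;
    }
}
```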
Relates #25452
In 6.x we prevent multiple types and default to `index.mapping.single_type: true`.
This change removes the registered setting and ensures that it is preserved for
5.x indices.
Relates to #24961
All query builders are written as self-contained xContent objects, so we should mark
them accordingly using ToXContentObject. This also makes it possible to use
things like XContentHelper#toXContent to render query builders in tests.
* Adds rewrite phase to aggregations
This change adds aggregations to the rewrite performed by the `SearchSourceBuilder`. This means that `AggregationBuilder`s are able to implement a `rewrite()` method where they can return a new `AggregationBuilder` which is functionally the same but in a more primitive form. This is exactly analogous to the rewrite done by the `QueryBuilder`s; a sketch of the contract follows the list of follow-up items below.
The first aggregation to implement the rewrite are the filter and filters aggregations so they can rewrite the filters they contain.
Closes #17676
* Removes rewrite from PipelineAggregationBuilder
Rewrite is based on shard-level information. Since pipeline aggregations are run in the reduce phase, it doesn't make sense to rewrite them on the shards. In fact, eventually we shouldn't be transporting them to the shards at all and should retain them on the coordinating node for execution in the reduce phase.
* Addresses review comments
* addresses more review comments
* Fixed imports
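The rewrite contract referenced above, sketched with stub types rather than the real Elasticsearch classes:

```java
import java.io.IOException;

interface QueryRewriteContext {}

interface QueryBuilder {
    QueryBuilder rewrite(QueryRewriteContext context) throws IOException;
}

abstract class AggregationBuilder {
    // Default: nothing to rewrite; returning "this" signals no change.
    AggregationBuilder rewrite(QueryRewriteContext context) throws IOException {
        return this;
    }
}

class FilterAggregationBuilder extends AggregationBuilder {
    private final QueryBuilder filter;

    FilterAggregationBuilder(QueryBuilder filter) {
        this.filter = filter;
    }

    @Override
    AggregationBuilder rewrite(QueryRewriteContext context) throws IOException {
        QueryBuilder rewritten = filter.rewrite(context);
        // only allocate a new builder if the inner filter actually changed
        return rewritten == filter ? this : new FilterAggregationBuilder(rewritten);
    }
}
```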
The constructor using `types` has been deprecated for a while now (starting with
ES 5.1). It can be removed in the next major version. Since types are optional,
they should be added with the #types() setter.
* Adds check for negative search request size
This change adds a check to `SearchSourceBuilder` to throw an exception if the size set on it is negative; the check is sketched after the list of follow-up items below.
Closes #22530
* fix error in reindex
* update re-index tests
* Addresses review comment
* Fixed tests
* Added random negative size test
* Fixes test
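The size check referenced above, sketched against a stubbed class (the exact exception message is an assumption):

```java
class SearchSourceBuilder {
    private int size = -1; // -1 standing in for "not set", an assumption here

    // Reject a negative size up front instead of failing deep inside the
    // search phase.
    SearchSourceBuilder size(int size) {
        if (size < 0) {
            throw new IllegalArgumentException(
                "[size] parameter cannot be negative, found [" + size + "]");
        }
        this.size = size;
        return this;
    }
}
```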
QueryParseContext is currently only used as a wrapper for an XContentParser, so
this change removes it entirely and changes the appropriate APIs that use it so
far to only accept a parser instead.
We have two ways to filter XContent:
- The first method is to parse the XContent as a map and use
XContentMapValues.filter(). This method filters the content of the map
using an automaton. It is used for source filtering, both at search and
indexing time. It performs well but can generate a lot of objects and
garbage collection when large XContent is filtered. It also returns
empty objects (see f2710c16eb) when all
the sub-fields have been filtered out, and handles dots in field names as
if they were sub-fields. A simplified sketch of this approach follows the list.
- The second method is to parse the XContent and copy the XContentParser
structure to an XContentBuilder initialized with include/exclude
filters. This method uses the Jackson streaming filter feature. It is
used by the Response Filtering ('filter_path') feature. It does not
generate a lot of objects, does not return empty objects, and also
does not handle dots in field names explicitly.
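The sketch promised above, a much simplified version of the map-based approach (the real XContentMapValues code matches includes and excludes with an automaton and also handles arrays):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class SourceFilterSketch {
    // Recursively copy the entries whose dotted path falls under a single
    // include prefix; call as filter(source, "obj", "").
    static Map<String, Object> filter(Map<String, Object> source, String include, String path) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            String fieldPath = path.isEmpty() ? entry.getKey() : path + "." + entry.getKey();
            if (entry.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child =
                    filter((Map<String, Object>) entry.getValue(), include, fieldPath);
                if (child.isEmpty() == false) {
                    out.put(entry.getKey(), child); // the real code may emit empty objects here
                }
            } else if (fieldPath.startsWith(include)) {
                out.put(entry.getKey(), entry.getValue());
            }
        }
        return out;
    }
}
```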
Both methods have similar goals but different tests. This commit changes
the current XContentBuilder test class into a more generic testing class
so that we can now ensure that both filtering methods generate the same results.
It also removes some tests from the XContentMapValuesTests class that
belong in XContentParserTests.
The significance aggregations return Lucene index-level statistics that, when merged, are assumed to come from different shards. The Aggregator unit tests assume segments can be treated as shards, which breaks the significance statistics and introduces double-counting of background document frequencies. This change addresses the problem by ensuring test indexes have only one shard.
Closes #25429
If all nodes get disconnected before we can send the request, we might
try to reconnect, and that will fail with an ISE instead of a transport
exception.
Closes #25301
ensureYellow ensures at least yellow.
Also, since we only have one replica, we don't need to index anything for it to know about the primary term promotion
Closes #25287
This commit makes the use of the global network settings explicit instead
of implicit within NetworkService. It cleans up several places where we fell
back to the global settings when we should have used the TCP or HTTP ones.
In addition, this change also removes unnecessary settings classes.
The replica replication response object has an extra allocationId field that contains the allocation id of the replica on which the request was executed. As we are sending the allocation id with the actual replica replication request, and check when executing the replica replication action that the allocation id of the replica shard is what we expect, there is no need to communicate back the allocation id as part of the response object.
When a user requests a cluster allocation explain in a situation where
it does not make sense (for example, there are no unassigned shards), we
should consider this a bad request instead of a server error. Yet, today
by throwing an illegal state exception, these are treated as server
errors. This commit adjusts these so that they throw illegal argument
exceptions and are treated as bad requests.
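In sketch form (generic types and message text are assumptions):

```java
class ExplainSketch {
    // Treat an explain request that matches no shard as a client error
    // (bad request) rather than a server error.
    static <T> T shardToExplain(Object request, T candidate) {
        if (candidate == null) {
            throw new IllegalArgumentException(
                "unable to find any shards to explain [" + request + "] in the routing table");
        }
        return candidate;
    }
}
```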
Relates #25503
This commit adds a test for a scenario where a replica receives an extra
document that the promoted replica does not receive, misses the
primary/replica re-sync, and then recovers from the newly-promoted
primary.
Relates #25493
Failing to do so can cause other errors later on during query execution.
For example, if `WrapperQueryBuilder` wraps a `GeoShapeQueryBuilder` that fetches the shape from an index, then it will skip the shape fetching
and fail later with the error that no shapes have been fetched.
This commit adds an LRU set used to determine whether a keyed deprecation
message should be written to the deprecation logs, or only added to the
response headers on the thread context.
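A minimal sketch of such an LRU set built from standard collections (not the actual implementation):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

class LruKeySet {
    // An access-ordered LinkedHashMap evicting the eldest entry keeps the
    // set bounded; add(key) returns true only if the key was not recently
    // seen, which is when the message should also hit the deprecation log.
    static <T> Set<T> create(int maxSize) {
        return Collections.newSetFromMap(new LinkedHashMap<T, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<T, Boolean> eldest) {
                return size() > maxSize;
            }
        });
    }
}
```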
Relates #25474
We have various assertions that check we never block on transport
threads. This commit adds the thread names for the NioTransport to
these assertions.
With this change I had to fix two places where we were calling blocking
methods from the transport threads.
Currently QueryParseContext is only a thin wrapper around an XContentParser that
adds little functionality of its own. It provides helpers for long-deprecated
field names, which can be removed, and two helper methods that can be made static
and moved to other classes. This is a first step towards removing
QueryParseContext entirely.
* Promote replica on the highest version node
This changes the replica selection to prefer replicas on the highest-version
node when choosing a replacement to promote after the primary shard fails;
a sketch of the selection follows the list of follow-up items below.
Consider this situation:
- A replica on a 5.6 node
- Another replica on a 6.0 node
- The primary on a 6.0 node
The primary shard is sending sequence numbers to the replica on the 6.0 node and
skipping them for the 5.6 node. Now assume that the primary shard fails
and (prior to this change) the replica on the 5.6 node gets promoted to primary:
it has no knowledge of sequence numbers, and the replica on the 6.0 node will
expect sequence numbers but never receive them.
Relates to #10708
* Switch from map of node to version to retrieving the version from the node
* Remove unneeded null check
* You can pretend you're a functional language Java, but you're not fooling me.
* Randomize node versions
* Add test with random cluster state with multiple versions that fails shards
* Re-add comment and remove extra import
* Remove unneeded stuff, randomly start replicas a few more times
* Move test into FailedNodeRoutingTests
* Make assertions actually test replica version promotion
* Rewrite test, taking Yannick's feedback into account
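The selection criterion boils down to something like the following sketch (types are stand-ins for the routing and node classes):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

record Replica(String allocationId, long nodeVersionId) {}

class ReplicaPromotion {
    // Prefer the active replica on the node with the highest version, so
    // a sequence-number-aware copy wins over one from an older node.
    static Optional<Replica> promotionCandidate(List<Replica> activeReplicas) {
        return activeReplicas.stream().max(Comparator.comparingLong(Replica::nodeVersionId));
    }
}
```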
When a setting is deprecated, if that setting is used repeatedly we
currently emit a deprecation warning every time the setting is used. In
cases like hitting settings endpoints over and over against a node with
a lot of deprecated settings, this can lead to excessive deprecation
warnings which can crush a node. This commit ensures that a given
setting only sees deprecation logging at most once.
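A minimal sketch of the once-only guard (names assumed):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class SettingDeprecationLimiter {
    private static final Set<String> ALREADY_WARNED = ConcurrentHashMap.newKeySet();

    // True only for the first caller per setting key, so a deprecated
    // setting is logged at most once no matter how often it is read.
    static boolean shouldLog(String settingKey) {
        return ALREADY_WARNED.add(settingKey);
    }
}
```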
Relates #25457
In #24477, a less verbose option was added to retrieve snapshot info via
GET /_snapshot/{repo}/{snapshots}. The motivation was that, if the
repository is a cloud-based one and there are many snapshots for which
the snapshot info needs to be retrieved, each snapshot would otherwise
require reading a separate snapshot metadata file to pull out the
necessary information. This can be costly (in performance and cost) on
cloud-based repositories, so a less verbose option was added that only
retrieves very basic information about each snapshot, all of which is
available in the index-N blob, requiring only one read!
In order to display this less verbose snapshot info appropriately, logic
was added to not display the fields that could not be populated.
However, this broke integrators (e.g. ECE) that required these fields to
be present, even if empty. This commit returns these fields in the
response, even if empty, when the verbose option is set.
This commit fixes a race condition in the node supplier used by the RemoteClusterConnection. The
node supplier stores an iterator over a set backed by a ConcurrentHashMap, but the get operation
of the supplier uses multiple methods of the iterator and is susceptible to a race between the
calls to hasNext() and next(). The test in this commit fails under the old implementation with a
NoSuchElementException. This commit adds a wrapper object over a set and an iterator, with all methods
synchronized to avoid races. Modifications to the set result in the iterator being set to null,
and the next retrieval creates a new iterator.
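A sketch of the synchronized wrapper described above (the node type is replaced by a type parameter, and names are assumed):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

final class ConnectedNodes<T> {
    private final Set<T> nodes = new HashSet<>();
    private Iterator<T> iterator; // lazily (re)created under the lock

    synchronized void add(T node) {
        nodes.add(node);
        iterator = null; // invalidate: the next get() starts a fresh iteration
    }

    synchronized void remove(T node) {
        nodes.remove(node);
        iterator = null;
    }

    // hasNext()/next() happen atomically under the same lock, so the
    // NoSuchElementException race of the old supplier cannot occur.
    synchronized T get() {
        if (iterator == null || iterator.hasNext() == false) {
            if (nodes.isEmpty()) {
                throw new IllegalStateException("no connected nodes");
            }
            iterator = nodes.iterator();
        }
        return iterator.next();
    }
}
```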