Previously when trying to listen on virtual interfaces during
bootstrap the application would stop working - the interface
couldn't be found by the NetworkUtils class.
The NetworkUtils class uses the underlying JDK NetworkInterface
class which, when asked to look up an interface by name, only takes
physical interfaces into account and fails on virtual (sub-)interfaces,
returning null.
Note that when iterating over all interfaces, both physical and
virtual ones are taken into account.
This changeset asks for all known interfaces, iterates over them
and matches on the given name as part of the loop, allowing it
to catch both physical and virtual interfaces.
As a result, elasticsearch can now also serve on virtual
interfaces.
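As a rough, self-contained sketch of that lookup pattern (illustrative code using only the JDK API, not the actual NetworkUtils implementation):

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.Enumeration;

// Sketch of the lookup pattern described above: ask the JDK for every known
// interface - physical and virtual - and match on the requested name, instead
// of relying on NetworkInterface.getByName(name).
public final class InterfaceLookup {

    public static NetworkInterface findByName(String name) throws SocketException {
        Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
        if (interfaces == null) {
            return null;
        }
        for (NetworkInterface nic : Collections.list(interfaces)) {
            if (nic.getName().equals(name)) {
                return nic;
            }
        }
        return null; // the caller decides how to report an unknown interface
    }
}
```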
A test case has been added which at least makes sure that all
iterable interfaces can be found by their respective name. (It's
not easily possible in a unit test to "fake" virtual interfaces).
Relates #19537
Fixes CORS handling so that it uses the defaults for http.cors.allow-methods
and http.cors.allow-headers if none are specified in the config.
Closes #19520
#19096 introduced a generic TCPTransport base class so we can have multiple TCP-based transport implementations. These implementations can vary in how they respond internally to situations where we concurrently send, receive and handle disconnects, and can throw different exceptions. However, disconnects are important events for the rest of the code base and should be distinguished from other errors (for example, a disconnect signals TransportMasterAction that it needs to retry and wait for a (new) master to come back). Therefore, we should make sure that all the implementations do the proper translation from their internal exceptions into ConnectTransportException, which is used externally.
Similarly, we should make sure that the transport implementations properly recognize errors that were caused by a disconnect as such and deal with them correctly. This was, for example, the source of a build failure at https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-intake/1080, where a concurrency issue caused a SocketException to bubble out of MockTcpTransport.
This PR adds a test which concurrently simulates connects, disconnects, sending and receiving, and makes sure the above holds. It also fixes everything (not much!) that it found.
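As a rough, self-contained illustration of that translation rule (the class names below are stand-ins, not the real TcpTransport or ConnectTransportException types):

```java
import java.io.IOException;

// Stand-in types, not the real transport classes: the point is that
// implementation-specific I/O failures caused by a disconnect must surface as a
// single, well-known exception type that callers can recognize and retry on.
class NodeDisconnectedException extends RuntimeException {  // stand-in for ConnectTransportException
    NodeDisconnectedException(String node, Throwable cause) {
        super("disconnected from node [" + node + "]", cause);
    }
}

class IllustrativeTcpTransport {

    void sendMessage(String node, byte[] payload) {
        try {
            writeToChannel(node, payload);
        } catch (IOException e) {
            // translate the internal failure so higher layers can treat it as a
            // disconnect rather than an arbitrary error
            throw new NodeDisconnectedException(node, e);
        }
    }

    private void writeToChannel(String node, byte[] payload) throws IOException {
        // hypothetical channel write that fails when the connection drops
        throw new IOException("connection reset by peer");
    }
}
```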
creation timeout so they process the index creation cluster state update
before the test finishes and attempts to cleanup. Otherwise, the index
creation cluster state update could be processed after the test finishes
and cleans up, thereby leaking an index in the cluster state that could
cause issues for other tests that wouldn't expect the index to exist.
Closes #19530
Due to the lack of tests, the analyzer.alias feature was pretty much not working
at all on current master. Issues like #19163 showed some serious problems for users
of this feature when upgrading to an alpha version.
This change fixes the processing order and allows aliases to be set for
existing analyzers like `default`. This change also ensures that if `default`
is aliased the correct analyzer is used for `default_search` etc.
Closes #19163
Add parser for anonymous char_filters/tokenizer/token_filters
Use Settings in AnalyzeRequest for anonymous definitions
Add breaking changes document
Closes #8878
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
When index creation is not acknowledged (due to a very low request timeout) it is possible that the index is still created.
If a subsequent index-exists request completes before the cluster state of the index creation has been fully applied, it
might miss the newly created index.
* Remove outdated aggregation registration method
* Remove AggregationStreams
* Adds StreamInput#readNamedWriteableList and
StreamOutput#writeNamedWriteableList convenience methods. We strive to
make the reading and writing from the streams terse so they are easier
to scan visually.
* Remove PipelineAggregatorStreams
* Remove stream info from InternalAggregation.Type
* Remove InternalAggregation#type
* Remove Streamable from PipelineAggregator
* Remove Streamable from MultiBucketsAggregation.Bucket
During query refactoring the query string query parameter
'auto_generate_phrase_queries' was accidentally renamed
to 'auto_generated_phrase_queries'.
With this commit we restore the old name.
Closes #19512
Currently if a string field is not_analyzed, but a
position_increment_gap is set, it will look up the default analyzer and
set it, along with the position_increment_gap, before the code which
handles setting the keyword analyzer for not_analyzed fields has a
chance to run. This change adds a parsing check and test for that case.
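A hedged sketch of what such a check might look like (the names and the error message are assumptions for illustration, not the actual mapper code):

```java
// Illustrative parsing check: a position_increment_gap only makes sense for
// analyzed fields, so reject it when the string field is not_analyzed.
// Names and message are assumptions, not the actual StringFieldMapper code.
final class StringFieldSettingsCheck {

    static void checkPositionIncrementGap(String indexOption, Integer positionIncrementGap) {
        if (positionIncrementGap != null && "not_analyzed".equals(indexOption)) {
            throw new IllegalArgumentException(
                    "Cannot set position_increment_gap on a not_analyzed string field");
        }
    }
}
```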
ThreadPool#schedule can throw a rejected execution exception. Yet, the
rejected execution exception that it throws comes from the EsAbortPolicy
which throws an EsRejectedExecutionException. This exception does not
inherit from RejectedExecutionException, so we must catch
EsRejectedExecutionException rather than RejectedExecutionException.
A self-rescheduling runnable can hit a rejected execution exception but
this exception goes uncaught. Instead, this exception should be caught
and passed to the onRejected handler. Not catching this rejected
execution exception can lead to test failures. Namely, a race
condition can arise between the shutting down of the thread pool and
cancelling of the rescheduling of the task. If another reschedule fires
right as the thread pool is being terminated, the rescheduled task will
be rejected leading to an uncaught exception which will cause a test
failure. This commit addresses these issues.
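A self-contained illustration of the hierarchy point (stand-in names, not the real EsAbortPolicy or EsRejectedExecutionException): an exception that does not extend RejectedExecutionException slips past a catch clause written for it.

```java
import java.util.concurrent.RejectedExecutionException;

// Stand-in for a pool-specific rejection exception that deliberately does NOT
// extend java.util.concurrent.RejectedExecutionException.
class StandInRejectedException extends RuntimeException {
    StandInRejectedException(String msg) { super(msg); }
}

public class CatchDemo {
    public static void main(String[] args) {
        try {
            schedule();
        } catch (RejectedExecutionException e) {
            // never reached: StandInRejectedException is not a RejectedExecutionException
            System.out.println("caught as RejectedExecutionException");
        } catch (StandInRejectedException e) {
            // this is the clause that actually fires; conceptually, catch the pool's
            // own exception type and pass it to the onRejected handler
            System.out.println("caught the pool's own rejection exception: " + e.getMessage());
        }
    }

    static void schedule() {
        throw new StandInRejectedException("rejected: pool is shutting down");
    }
}
```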
Relates #19505
The ThreadPool#scheduleWithFixedDelay method does not make it clear that all scheduled runnable instances
will be run on the scheduler thread. This becomes problematic if the actions being performed include
blocking operations since there is a single thread and tasks may not get executed due to a blocking task.
This change includes a few different aspects around trying to prevent this situation. The first is that
the scheduleWithFixedDelay method now requires the name of the executor that should be used to execute
the runnable. All existing calls were updated to use Names.SAME to preserve the existing behavior.
The second aspect is the removal of using ScheduledThreadPoolExecutor#scheduleWithFixedDelay in favor of
a custom runnable, ReschedulingRunnable. This runnable encapsulates the logic to deal with rescheduling a
runnable with a fixed delay and mimics the behavior of executing using a ScheduledThreadPoolExecutor and
provides a ScheduledFuture implementation that also mimics the one returned by a
ScheduledThreadPoolExecutor.
Finally, an assertion was added to BaseFuture to detect blocking calls that are being made on the scheduler
thread.
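A minimal, self-contained sketch of that rescheduling pattern (not the actual ReschedulingRunnable; names and details here are illustrative):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative rescheduling runnable: the scheduler thread only hands the task
// off to a separate executor, and the next run is scheduled once the current run
// finishes, which mimics fixed-delay semantics without blocking the scheduler.
class ReschedulingSketch implements Runnable {

    private final ScheduledExecutorService scheduler;
    private final Executor executor;      // where the task actually runs
    private final Runnable task;
    private final long delayMillis;
    private volatile boolean cancelled;

    ReschedulingSketch(ScheduledExecutorService scheduler, Executor executor,
                       Runnable task, long delayMillis) {
        this.scheduler = scheduler;
        this.executor = executor;
        this.task = task;
        this.delayMillis = delayMillis;
    }

    void start() {
        scheduler.schedule(this, delayMillis, TimeUnit.MILLISECONDS);
    }

    void cancel() {
        cancelled = true;
    }

    @Override
    public void run() {
        // runs on the scheduler thread: just dispatch, never block here
        executor.execute(() -> {
            try {
                task.run();
            } finally {
                if (!cancelled) {
                    // reschedule only after the task has completed -> fixed delay
                    scheduler.schedule(this, delayMillis, TimeUnit.MILLISECONDS);
                }
            }
        });
    }
}
```

With an executor equivalent to Names.SAME (a direct, same-thread executor), the dispatch step degenerates to running on the scheduler thread, which is how the existing behavior is preserved.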
When looking at the logstash template, I noticed that it has definitions for
dynamic templates with `match_mapping_type` equal to `byte` for instance.
However elasticsearch never tries to find templates that match the byte type
(only long or double as far as numbers are concerned). This commit changes
template parsing in order to ignore bad values of `match_mapping_type` (given
how popular the logstash template is, this would break many upgrades
otherwise). Then I hope to fail the parsing on bad values in 6.0.
We used to mutate it as part of building the aggregation. That
caused assertVersionSerializable to fail because it assumes that
requests aren't mutated after they are sent.
Closes #19481
Primary relocation violates two invariants that ensure proper interaction between document replication and peer recoveries, ultimately leading to documents not being properly replicated.
Invariant 1: Document writes must be replicated based on the routing table of a cluster state that includes all shards which have ongoing or finished recoveries. This is ensured by the fact that we do not start a recovery that is not reflected by the cluster state available on the primary node, and that we always sample a fresh cluster state before starting to replicate write operations.
Invariant 2: Every operation that is not part of the snapshot taken for phase 2 must be successfully indexed on the target replica (pending shard-level errors which will cause the target shard to be failed). To ensure this, we start replicating to the target shard as soon as the recovery starts and open its engine before we take the snapshot. All operations that are indexed after the snapshot was taken are guaranteed to arrive at the shard when it is ready to index them. Note that this also means that replication doesn't fail a shard if it is not yet ready to receive operations - that is a normal part of a recovering shard.
With primary relocations, both invariants can be violated. Let's consider a primary relocating while there is another replica shard recovering from the primary shard.
Invariant 1 can be violated if the target of the primary relocation is so lagging on cluster state processing that it doesn't even know about the new initializing replica. This is very rare in practice as replica recoveries take time to copy all the index files but it is a theoretical gap that surfaces in testing scenarios.
Invariant 2 can be violated even if the target primary knows about the initializing replica. This can happen if the target primary replicates an operation to the initializing shard and that operation arrives at the initializing shard before it opens its engine, but arrives at the primary source after it has taken the snapshot of the translog. Such operations are currently missed on the new initializing replica.
The fix to reestablish invariant 1 is to ensure that the primary relocation target has a cluster state with all replica recoveries that were successfully started on the primary relocation source. The fix to reestablish invariant 2 is to check, after opening the engine on the replica, whether the primary has been relocated in the meanwhile, and if so fail the recovery.
Closes #19248
RecoveryTarget increments a reference on the store once it's
created. If we fail to return the instance from the reset method
we leak a reference, causing shard locks to not be released. This
change creates the reference in the return statement to ensure no
references are leaked.
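A small, self-contained sketch of that pattern (stand-in types, not the real RecoveryTarget/Store classes):

```java
// Stand-in types for illustration: the constructor acquires a reference on the
// store, so constructing the copy directly in the return statement leaves no
// window in which an acquired reference can fail to reach the caller, who is
// responsible for releasing it.
class StoreSketch {
    private int refCount = 1;
    synchronized void incRef() { refCount++; }
    synchronized void decRef() { refCount--; }
}

class RecoveryTargetSketch {

    private final StoreSketch store;

    RecoveryTargetSketch(StoreSketch store) {
        this.store = store;
        store.incRef();   // reference acquired as part of creation
    }

    RecoveryTargetSketch resetRecovery() {
        // Creating the copy earlier and returning it later risks leaking the
        // reference if the instance is never handed back; acquiring it as part
        // of the return statement avoids that.
        return new RecoveryTargetSketch(store);
    }
}
```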
An initializing replica shard might not have an UnassignedInfo object, for example when it is a relocation target. The method allocatedPostIndexCreate does not account for this situation.
Today, when we reset a recovery because the source is not ready or
the shard is being removed on the source (for whatever reason),
we wipe all temp files and reset the recovery without respecting any
reference counting or locking: all streams are closed and files are
wiped. This is problematic since we assert that certain files are on disk
when we finish writing a file, and these assertions no longer hold if we
concurrently wipe the tmp files.
This change moves the logic out of RecoveryTarget into RecoveriesCollection, which
instead clones the RecoveryTarget on reset, allowing in-flight operations
to finish gracefully. This means we now have a single path for cleanups in RecoveryTarget
and can safely use assertions in the class, since files won't be removed unless the recovery
is canceled, failed or finished.
Closes #19473
Today they don't because the create index request that is implicitly created
adds an empty mapping for the type of the document. So to Elasticsearch it
looks like this type was explicitly created and `index.mapper.dynamic` is not
checked.
Closes #17592
The test would previously catch Throwable and then decide whether it was a critical exception or not. As the catch block was changed from Throwable to Exception, this made the test fail for non-critical exceptions. This commit changes the test so that exceptions are only thrown when they're unexpected.
This is the first pipeline aggregation that doesn't have its own
bucket type that needs serializing. It uses InternalHistogram instead.
So that required reworking the new-style `registerAggregation` method
to not require bucket readers. So I built `PipelineAggregationSpec` to
mirror `AggregationSpec`. It allows registering any number of bucket
readers or result readers.
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes #19412
Previously if the size of the search request was greater than zero we would not cache the request in the request cache.
This change retains the default behaviour of not caching requests with size > 0 but also allows the `request_cache=true` query parameter
to enable the cache for requests with size > 0.
This is an attempt to revive #15939, motivated by elastic/beats#1941.
Half-floats are a pretty bad option for storing percentages. They would likely
require 2 bytes all the time while they don't need more than one byte.
So this PR exposes a new `scaled_float` type that requires a `scaling_factor`
and internally indexes `value*scaling_factor` in a long field. Compared to the
original PR it exposes a lower-level API so that the trade-offs are clearer and
avoids any reference to fixed precision that might imply that this type is more
accurate (actually it is *less* accurate).
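As a rough worked example of the encoding (the numbers and the rounding step are made up for illustration, not taken from the PR):

```java
// scaled_float-style encoding: index value * scaling_factor stored as a long
// (rounded here for illustration), decode by dividing again. With a
// scaling_factor of 100, 0.153 is stored as 15 and decodes back to 0.15 -
// about two decimal digits of precision in one compact long value.
public class ScaledFloatSketch {
    public static void main(String[] args) {
        double scalingFactor = 100.0;   // e.g. percentages with two decimal places
        double value = 0.153;

        long encoded = Math.round(value * scalingFactor);   // 15
        double decoded = encoded / scalingFactor;           // 0.15

        System.out.println("encoded=" + encoded + " decoded=" + decoded);
    }
}
```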
In addition to being more space-efficient for some use-cases that beats is
interested in, this is also faster than `half_float` unless we can improve the
efficiency of decoding half-float bits (which is currently done using software)
or until Java gets first-class support for half-floats.
Today the default precision for the cardinality aggregation depends on how many
parent bucket aggregations it had. The reasoning was that the more parent bucket
aggregations, the more buckets the cardinality had to be computed on. And this
number could be huge depending on what the parent aggregations actually are.
However now that we run terms aggregations in breadth-first mode by default when
there are sub aggregations, it is less likely that we have to run the cardinality
aggregation on a huge number of buckets. So we could use a static default, which will
be less confusing to users.
We currently have a concurrency issue between the static methods on the Store class and store changes that are done via a valid open store. An example of this is the async shard fetch, which can reach out to a node while a local shard copy is shutting down (the fetch does check if we have an open shard and tries to use that first, but if the shard is shutting down, it will not be available from IndexService).
Specifically, async shard fetching tries to read metadata from the store while, concurrently, the shard that is shutting down commits to Lucene, changing the segments_N file. This causes a file-not-found exception on the shard fetching side, which in turn makes the master think the shard is unusable. In tests this can cause the shard assignment to be delayed (up to 1m), which fails tests. See https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+java9-periodic/570 for details.
This is one of the things #18938 caused to bubble up.
* Removed `Template` class and unified script & template parsing logic. Templates are scripts, so they should be defined as a script. Unless there will be separate template infrastructure, templates should share as much code as possible with scripts.
* Removed ScriptParseException in favour of ElasticsearchParseException
* Moved TemplateQueryBuilder to lang-mustache module because this query is hard coded to work with mustache only
Removes ensureYellow() calls from integration tests, as they are no longer needed with
index creation now waiting for shards to be started before
returning from the index creation call (by default, it waits
for the primary of each shard to be started before returning,
which is what ensureYellow() was ensuring anyway).
Closes #19452
Relates #19450
This commit adds a log message when bootstrap checks are enforced,
informing the user that the checks are enforced because the node is bound to an
external network interface. We also log if bootstrap checks are being
enforced but system checks are being ignored.
Relates #19451
The unit tests for IndicesClusterStateService currently inject random failures upon shard creation / routing update / mapping update etc. This commit makes injecting failures optional so that stronger assertions can be made about the local indices / shard state in case of no failures.
Before returning, index creation now waits for the configured number
of shard copies to be started. In the past, a client would create an
index and then potentially have to check the cluster health to wait
to execute write operations. With the cluster health semantics changing
so that index creation does not cause the cluster health to go RED,
this change enables waiting for the desired number of active shards
before returning from index creation.
Relates #9126
For historical reasons, the value associated with Priority.IMMEDIATE is
-1. Yet, with a full-cluster restart required on major version upgrades,
we can reset these values so they are conceptually simpler. This commit
resets the values associated with Priority instances.