For historical reasons, the value associated with Priority.IMMEDIATE is
-1. Yet, with a full-cluster restart required on major version upgrades,
we can reset these values so they are conceptually simpler. This commit
resets the values associated with Priority instances.
Today we have an abstraction Priority for representing
priorities. Ideally, these values would be a fixed set of constants with a
well-defined ordering, which is exactly what an enum provides. This commit
changes Priority from a class to an enum.
In Priority there is a field named values that holds a list of all
priorities, ordered by priority. Yet, this collection is modifiable and
it is exposed via the public API. This means that consumers can modify
the list, potentially leading to complete chaos. This commit makes this
field unmodifiable, documents that the returned collection is
unmodifiable, and returns total order to the world. We also punish the
bad consumer here by making them make a copy of the returned collection,
with which they can then do as they please. This fixes a puzzling test
failure which only arises if the two tests
(PrioritizedExecutorsTests#testPriorityQueue and
PriorityTests#testCompareTo) run in the same JVM, in the right order.
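As a rough sketch of the pattern in the enum form described above (the constant set and the accessor name here are illustrative, not the exact Elasticsearch code), wrapping the shared list forces consumers to copy before mutating:
```
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public enum Priority {
    IMMEDIATE, URGENT, HIGH, NORMAL, LOW, LANGUID;

    // The shared, priority-ordered list is wrapped so callers cannot mutate it;
    // anyone who wants to reorder or filter must copy it first, e.g.
    // new ArrayList<>(Priority.orderedValues()).
    private static final List<Priority> ORDERED =
            Collections.unmodifiableList(Arrays.asList(values()));

    public static List<Priority> orderedValues() {
        return ORDERED;
    }
}
```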
Relates #19447
Also introduced a `Processor.Parameters` class that is a holder for several services processors rely on;
the IngestPlugin#getProcessors(...) method has been changed to accept `Processor.Parameters` instead
of each service separately.
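A minimal sketch of what a plugin's registration might look like under the new signature (the plugin class is made up, and the return type of a map from processor type to factory is assumed here):
```
import java.util.Collections;
import java.util.Map;

import org.elasticsearch.ingest.Processor;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyIngestPlugin extends Plugin implements IngestPlugin {

    @Override
    public Map<String, Processor.Factory> getProcessors(Processor.Parameters parameters) {
        // parameters bundles the services processors rely on (environment, script
        // service, etc.), so adding a new service no longer requires changing the
        // method signature. A real plugin would build its processor factories here
        // and hand them whatever they need from parameters; this sketch registers
        // nothing.
        return Collections.emptyMap();
    }
}
```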
Today we log all loaded modules and installed plugins in a single
line. The number of modules has grown, and when plugins are installed a
single log line containing the loaded modules and plugins is
lengthy. With this commit, we log a single module or plugin per line,
log these in sorted order, and also log if no modules or no plugins were
loaded.
Relates #19441
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.
Relates #19439
Currently custom headers that should be passed through rest requests are
registered by depending on the RestController in guice and calling a
registration method. This change moves that registration to a getter for
plugins, and makes the RestController take the set of headers on
construction.
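A hedged sketch of the pull-based registration, assuming a plugin getter along the lines of `ActionPlugin#getRestHeaders()` (the plugin class and header name are invented):
```
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyAuthPlugin extends Plugin implements ActionPlugin {

    // Headers listed here are collected when the RestController is constructed
    // and copied through from REST requests, instead of the plugin injecting the
    // RestController and calling a registration method on it.
    @Override
    public Collection<String> getRestHeaders() {
        return Collections.singleton("X-My-Auth-Token");
    }
}
```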
Invocation counts can be used to help judge the selectivity of individual query components in the context of the entire query. E.g. a query may not look selective when run by itself (it matches most of the index), but when run in the context of a full search request it is evaluated only rarely due to execution order.
Since this is modifying the base timing class, it will enrich both query and agg profiles (as well as future profile results).
Today `node.mode` and `node.local` serve almost the same purpose: they
are a shortcut for `discovery.type` and `transport.type`. If `node.local: true`
or `node.mode: local` is set, elasticsearch starts in _local_ mode, which means
only nodes within the same JVM are discovered and a non-network based transport
is used. The _local_ mode is only really used in tests or if nodes are embedded.
For both embedding and tests, explicit configuration via `discovery.type` and `transport.type`
should be preferred.
This change removes all usage of these settings and by default doesn't
configure a default transport implementation since netty is now a module. Yet, to keep
the user experience seamless, plugins or modules can set `http.type.default` and
`transport.type.default`. Plugins set this via `PluginService#additionalSettings()`,
which enforces _set-once_ semantics and prevents node startup if the setting is set
multiple times. This means that our distributions will just start up with the netty
transport, since it's packaged as a module, unless `transport.type` or
`http.transport.type` is explicitly set.
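A minimal sketch of how a transport module might claim the defaults via `additionalSettings()` (the plugin class and implementation names are invented; the settings keys are the ones mentioned above):
```
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;

public class MyTransportPlugin extends Plugin {

    @Override
    public Settings additionalSettings() {
        // Register this module's implementations as the defaults. Since these are
        // set-once settings, a second plugin trying to claim the default fails
        // node startup instead of silently winning.
        return Settings.builder()
                .put("transport.type.default", "my-transport")
                .put("http.type.default", "my-http-transport")
                .build();
    }
}
```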
This change also uncovered a bunch of bugs where several NamedWriteables were not registered if a
transport client is used. Now that we don't rely on the inherited `node.mode` leniency
and instead use explicit settings, `TransportClient` uses `AssertingLocalTransport`, which detects these problems since it serializes all messages.
Closes #16234
The Java API supports this; while it is mostly used in tests, it can also be useful in
production environments. For instance, if something like a settings change is automated
and we execute a health request right after it, the settings update might have consequences,
like a reroute, that haven't been fully applied yet because the preconditions are not fulfilled.
For instance, if not all shards have started, the settings update is applied but the reroute won't move
currently initializing shards, as in the shrink API test. Sure, this could be done by waiting for
green before, but if the cluster moves shards due to some side effects, waiting for all events is
still useful. I also took the chance to add unit tests to Priority.java.
Closes #19419
This adds an extra method, registerWithDeprecatedHandler, to register both a normal handler and a deprecated handler at the same time. This helps with renaming methods, as opposed to _just_ deprecating methods.
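A rough sketch of how a renamed endpoint might use this; the parameter order and the logger argument are assumptions, not the exact signature, and the paths are made up:
```
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.rest.RestRequest;

final class EndpointRegistration {

    private static final DeprecationLogger DEPRECATION_LOGGER =
            new DeprecationLogger(Loggers.getLogger(EndpointRegistration.class));

    // Registers the same handler under its new path and, as a deprecated alias,
    // under its old path, so a rename keeps the old route working with a warning.
    static void register(RestController controller, RestHandler handler) {
        controller.registerWithDeprecatedHandler(
                RestRequest.Method.GET, "/_new_endpoint", handler,
                RestRequest.Method.GET, "/_old_endpoint", DEPRECATION_LOGGER);
    }
}
```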
Today we assert that the tmp files are present, but if the recovery
was canceled this might not be the case even though the state is still valid.
This change only throws the AssertionError if the recovery is still active.
This moves all netty related code into modules/transport-netty. The module is built as a zip file as well as a JAR to serve as a dependency for the transport client. For the time being this is required, otherwise we have no network based impl. for transport client users. This might be subject to change as we move forward with an http client.
* Clean up the generics around significant terms aggregation results
* Reduce code duplicated between `SignificantLongTerms` and
`SignificantStringTerms` by creating `InternalMappedSignificantTerms`
and moving common things there where possible.
* Migrate to `NamedWriteable`
* Line length fixes while I was there
This commit removes support for properties syntax and config files:
- removed support for elasticsearch.properties
- removed support for logging.properties
- removed support for properties content detection in REST APIs
- removed support for properties content detection in Java API
Relates #19398
Some tests still start http implicitly or miss configuring the transport clients correctly.
This commit fixes all remaining tests and adds a dependency on `transport-netty` to
`qa/smoke-test-http` and `modules/reindex` since they need an http server running on the nodes.
This also moves all required permissions for netty into its module and out of core.
If a nested, has_child or has_parent query's inner query gets rewritten, then the InnerHitBuilder should use that rewritten form too, otherwise this can cause exceptions in a later phase.
Also fixes a bug where HasChildQueryBuilder's rewrite method overwrote max_children with the min_children value.
Closes #19353
That exception is currently serialized as its base class IllegalStateException, which confuses code that is supposed to deal with the stepping down of a master. This is an important exception and we should be able to serialize it correctly. This commit fixes it by making the exception inherit from ElasticsearchException and properly registering it.
As a bonus I adapted CapturingTransport to properly simulate serialized exceptions.
Switches most search behavior extensions from push (`onModule(SearchModule)`)
to pull (`implements SearchPlugin`). This effort in general gives plugin
authors a much cleaner view of how to extend Elasticsearch and starts to
set up portions of Elasticsearch as "the plugin API". This commit in
particular does that for search-time behavior like customized suggesters,
highlighters, score functions, and significance heuristics.
It also switches most such customization to being done at search module
construction time which is much, much easier to reason about from a testing
perspective. It also helps significantly in the process of de-guice-ing
Elasticsearch's startup.
There are at least two major search time extensions that aren't covered in
this commit that will simply have to wait for the next commit on the topic
because this one has already grown large: custom aggregations and custom
queries. These will likely live in the same SearchPlugin interface as well.
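A bare-bones sketch of the pull model from the plugin author's side (the plugin class is invented, and it assumes SearchPlugin's getters have empty defaults so nothing needs to be overridden in the sketch):
```
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SearchPlugin;

public class MySearchExtensionsPlugin extends Plugin implements SearchPlugin {
    // Instead of mutating the module via onModule(SearchModule), the plugin
    // overrides whichever SearchPlugin getters it needs (per the text above:
    // suggesters, highlighters, score functions, significance heuristics) and
    // returns its implementations; SearchModule pulls them in at construction
    // time.
}
```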
This change adds a createComponents() method to Plugin implementations
which they can use to return already constructed components/services.
Eventually this should be just services ("components" don't really do
anything), but for now it allows any object so that instances
preconstructed by plugins can still be bound to guice. Over time we should
add basic services as arguments to this method, but for now I have left
it empty so as to not presume what is a necessary service.
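A small sketch under the no-argument shape described above (both the plugin and the service are made-up names, and the return type of a plain object collection is assumed):
```
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.plugins.Plugin;

public class MyPlugin extends Plugin {

    // A hypothetical service the plugin constructs itself rather than having
    // guice wire it together.
    public static class MyService {
    }

    @Override
    public Collection<Object> createComponents() {
        // Anything returned here is preconstructed by the plugin; for now the
        // returned objects are still bound in guice so existing consumers can
        // inject them.
        return Collections.<Object>singletonList(new MyService());
    }
}
```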
If the allocation decision for a primary shard was NO, this should
cause the cluster health for the shard to go RED, even if the shard
belongs to a newly created index or is part of cluster recovery.
Relates #9126
Previously, index creation would momentarily cause the cluster health to
go RED, because the primaries were still being assigned and activated.
This commit ensures that when an index is created or an index is being
recovered during cluster recovery and it does not have any active
allocation ids, then the cluster health status will not go RED, but
instead be YELLOW.
Relates #9126
This commit moves most of the http related integ tests out into a separate
`qa/smoke-test-http` project, where most of the tests can run against the external cluster.
Today when a node is removed the cluster (it leaves or it fails), we
submit a cluster state update task. These cluster state update tasks are
processed serially on the master. When nodes are removed en masse (e.g.,
a rack is taken down or otherwise becomes unavailable), the master will
be slow to process these failures because of the resulting reroutes and
publishing of each subsequent cluster state. We improve this in this
commit by processing the node removals using the cluster state update
task batch processing framework.
Relates #19289
Today we have a bunch of tests that use the netty transport for several reasons;
mostly, these tests use it because they need to run some tcp based transport. Yet, this
couples our tests tightly to the netty implementation, which should be tested on its own.
This change adds a plain socket based, blocking TcpTransport implementation that is used by
default in tests if local transport is suppressed or if network is selected.
It also adds another tcp network implementation as a showcase of how the interface works.
Lucene's IndexWriter asserts on files existing on the filesystem, but
some tests throw IOException explicitly on those operations such that
some tests trip the asserts. We had this before in the InternalEngine
constructor and added logic there to catch only specific assertions based
on some exception stack analysis. This change applies the same logic
to the IndexWriter#commit part of the engine since it can hit the same
issue.
This also fixes a self-suppression issue in Store.java.
Closes #19356
Several tests required http.enabled where it was unnecessary.
We also had RestMainActionIT, which tests what two of our REST tests
already test, so I removed it.
The explicit use of http.enabled: false is also obsolete since our
tests do that by default.
The DiscoveryNodeService exists to register CustomNodeAttributes which
plugins can add. This is not necessary, since plugins can already add
additional attributes, and use the node attributes prefix.
This change removes the DiscoveryNodeService, and converts the only
consumer, the ec2 discovery plugin, to add the ec2 availability zone
in additionalSettings().
This change removes the ability for guice to have child injectors (and
the entire concept of parent injectors) from our fork of guice. The
methodology for removing was simple: I removed createChildInjector, and
continued to remove methods and members that were unused until my head
was spinning. The motivation for this change is to limit what our fork
of guice gives us access to, so we don't regress and start adding back
more complicated uses.
This adds a test that uses the transport implementation and sends random requests
to 3 different nodes; the request handlers may forward the requests to yet another node,
and so on, until returning the response. This test basically checks that nodes do not
deadlock in a distributed fashion.
Repository plugins currently use a lot of custom classes like
RepositoryName and RepositorySettings in order to use guice to construct
repository implementations. But repositories now only really need their
settings to be constructed. Anything else they need (eg a cloud client)
can be constructed within the plugin, instead of via guice.
This change makes repository plugins use the new pull model. It removes
guice from the construction of Repository objects (no more child
injectors) and also from all repository plugins.
This exposes a method to start an action and return a task from
`NodeClient`. This allows reindex to use the injected `Client` rather
than requiring `TransportAction`s to be injected.
Today when a thread encounters a fatal unrecoverable error that
threatens the stability of the JVM, Elasticsearch marches on. This
includes out of memory errors, stack overflow errors and other errors
that leave the JVM in a questionable state. Instead, the Elasticsearch
JVM should die when these errors are encountered. This commit causes
this to be the case.
Relates #19272
After #13834, many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests so they could be cleaned up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
This commit moves back some messy tests that had been placed in the lang-groovy module in https://github.com/elastic/elasticsearch/pull/13834. It removes the dependency on the Groovy plugin and changes the tests back to integration tests (IT suffix).
It also changes the current MockScriptEngine and MockScriptPlugin to make them easier to use.
The api for snapshot/restore was split between two interfaces,
Repository and IndexShardRepository. There was also complex
initialization and injection between the two. However, there is always a
one to one relationship between them.
This change moves the IndexShardRepository api into Repository, and also
updates the API so that subclasses do not require any services to be
injected.
This adds a new proxy for RestHandlers and RestControllers so that requests made
to deprecated REST APIs are automatically logged in the ES logs via the
DeprecationLogger and surfaced via a "Warning" header (RFC 7234) on all responses.
Calling indicesService.deleteIndex() can trip an assertion if a cluster state is concurrently being applied in
IndicesClusterStateService. In that case the index may be deleted after the failMissingShards
check and before we try creating new and updated shards, tripping an assertion that non-existing shards must
have shard state initializing (started in this case).
This change adds the type of the field in the fieldstats response.
It can be one of the following:
* "integer" for byte, short, integer and long
* "float" for float, half-float and double
* "date" for date
* "ip" for ip
* "text" for string, keyword and text.
Closes #17750
This adds a remote option to reindex that looks like
```
curl -POST 'localhost:9200/_reindex?pretty' -d'{
  "source": {
    "remote": {
      "host": "http://otherhost:9200"
    },
    "index": "target",
    "query": {
      "match": {
        "foo": "bar"
      }
    }
  },
  "dest": {
    "index": "target"
  }
}'
```
This reindex has all of the features of local reindex:
* Using queries to filter what is copied
* Retry on rejection
* Throttle/rethrottle
The big advantage of this version is that it goes over the HTTP API
which can be made backwards compatible.
Some things are different:
The query field is sent directly to the other node rather than parsed
on the coordinating node. This should allow it to support constructs
that are invalid on the coordinating node but are valid on the target
node. Mostly, that means old syntax.