Site plugins were used for things like kibana and marvel, but
there is no longer a need for them since kibana (and marvel, as a kibana plugin)
runs on node.js. This change removes site plugins, as well as the flag for
jvm plugins. Now all plugins are jvm plugins.
This undocumented setting was mainly used for testing and as a safety net for
the then-new flushOnClose feature back in 1.x. We can now safely remove this setting.
The test usage now uses a mock plugin setting to achieve the same.
Today we maintain a lot of settings at the shard level that are really index-level settings.
In order to cut over to the new settings API, where we register update listeners, we have to move
all of them to the index level; otherwise we need a way to un-register listeners, which is error-prone
and requires additional handling when shards are closed. It's simpler and also more accurate to handle all of
them at the index level, where we can discard the entire update-listener registry once the index goes out of scope.
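A minimal toy sketch of the index-level approach, with purely illustrative names (not the actual Elasticsearch classes): the listener registry lives with the index, so closing the index discards it wholesale and shards never have to un-register anything.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Illustrative only: an index-scoped registry for settings update listeners.
class ToyIndexSettingsRegistry {
    private final List<Consumer<String>> updateListeners = new CopyOnWriteArrayList<>();

    void addSettingsUpdateListener(Consumer<String> listener) {
        updateListeners.add(listener);
    }

    void applySettingsUpdate(String newValue) {
        updateListeners.forEach(listener -> listener.accept(newValue));
    }

    // Called once when the index goes out of scope; individual shards
    // never need to un-register their listeners.
    void close() {
        updateListeners.clear();
    }
}
```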
ContextAndHeaders has a massive impact on the core infrastructure since it has to
be manually passed on to all relevant places across threads, network calls, etc. For the same reason
it's also very error-prone and easily forgotten on potentially relevant APIs.
The new ThreadContext is associated with a ThreadPool (node or transport client) and ensures that
headers and context registered on the current thread are inherited by newly spawned threads, sent across
the network to be deserialized on the receiver end, and restored on the response-handling thread
once the response is received.
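The propagation idea in a nutshell, as a self-contained toy (this is not the actual ThreadContext implementation; all names here are illustrative): capture the submitting thread's headers when a task is handed to a pool, and restore them on the worker thread.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: headers captured at submit time travel with the task.
public class ToyThreadContext {
    private static final ThreadLocal<Map<String, String>> HEADERS =
        ThreadLocal.withInitial(HashMap::new);

    public static void putHeader(String key, String value) {
        HEADERS.get().put(key, value);
    }

    public static String getHeader(String key) {
        return HEADERS.get().get(key);
    }

    // Wrap a task so the submitting thread's headers are restored on the worker.
    public static Runnable preserveContext(Runnable task) {
        Map<String, String> captured = new HashMap<>(HEADERS.get());
        return () -> {
            Map<String, String> previous = HEADERS.get();
            HEADERS.set(captured); // restore the caller's context
            try {
                task.run();
            } finally {
                HEADERS.set(previous); // leave the worker thread clean
            }
        };
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        putHeader("X-Opaque-Id", "req-42");
        pool.submit(preserveContext(() ->
            System.out.println("header on worker: " + getHeader("X-Opaque-Id"))));
        pool.shutdown();
    }
}
```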
This commit adds convenience methods to o.e.t.t.CapturingTransport
that enable capturing requests and clearing the captured requests
with a single method call. This simplifies a common pattern in tests:
capturing requests and then clearing them.
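The pattern the convenience methods collapse, shown as a toy stand-in (the real CapturingTransport lives in the ES test framework; the method name below is only meant to illustrate the capture-and-clear-in-one-call shape):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: snapshot the captured requests and reset in one call,
// so tests cannot forget the clear between assertions.
class ToyCapturingTransport {
    private final List<String> captured = new ArrayList<>();

    void sendRequest(String request) {
        captured.add(request);
    }

    List<String> capturedRequestsAndClear() {
        List<String> snapshot = new ArrayList<>(captured);
        captured.clear();
        return snapshot;
    }
}
```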
This commit removes and now forbids use of
org.apache.lucene.index.IndexWriter#isLocked as this method was
deprecated in LUCENE-6508. The deprecation is due to the fact that
checking if a lock is held before acquiring that lock is subject to a
time-of-check-to-time-of-use race condition. There were three uses of
IndexWriter#isLocked in the code base:
- a logging statement in o.e.i.e.InternalEngine where we are already in
  an exceptional condition because the lock is held; in this case,
  logging whether or not the directory is locked is superfluous
- in o.e.c.l.u.VersionsTests where we were verifying that a write lock
is released upon closing an IndexWriter; in this case, the check is
not needed as successfully closing an IndexWriter releases its
write lock
- in o.e.t.s.MockFSDirectoryService where we were verifying that a
directory is not write-locked before (implicitly) trying to obtain
such a write lock in org.apache.lucene.index.CheckIndex#<init> (this
is exactly the type of situation that is subject to a race
condition); in this case we can proceed by just (implicitly) trying
to obtain the write lock and failing if we encounter a
LockObtainFailedException
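A minimal sketch of the replacement pattern (the path and analyzer choice here are arbitrary): rather than checking isLocked first, just try to take the write lock and handle the failure.

```java
import java.io.IOException;
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;

public class TryLockInsteadOfCheck {
    public static void main(String[] args) throws IOException {
        try (Directory dir = FSDirectory.open(Paths.get("/tmp/example-index"))) {
            // No isLocked() pre-check: acquiring the lock is the check.
            try (IndexWriter writer =
                     new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                // lock acquired; proceed with the writer
            } catch (LockObtainFailedException e) {
                // another writer holds the lock; fail here instead of
                // racing between a check and the acquisition
            }
        }
    }
}
```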
* Added a percolator field mapper that extracts the query terms and indexes these terms with the percolator query.
* At percolate time these extracted terms are used to select only the percolator queries that are likely to match, which can significantly cut down the time it takes to percolate; before, all percolator queries were evaluated against the document being percolated (see the sketch after this list).
* Changes made to percolator queries are no longer immediately visible; a refresh needs to happen before the changes are visible.
* By default the percolate api only returns up to 10 matches instead of returning all matching percolator queries.
* Made the percolator more modular, so that it is easier to add unit tests.
* Added unit tests for the percolator.
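A toy illustration of the candidate-selection idea above (not the actual field mapper or query implementation; structures are simplified to plain sets): index the terms each percolator query needs, then evaluate only the queries that share at least one term with the document.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative only: pre-filter registered queries by extracted terms.
public class ToyPercolator {
    // queryId -> terms extracted from that query at index time
    private final Map<String, Set<String>> extractedTerms = new HashMap<>();

    void registerQuery(String queryId, Set<String> queryTerms) {
        extractedTerms.put(queryId, queryTerms);
    }

    // Only queries sharing a term with the document can possibly match,
    // so the expensive per-query evaluation runs on far fewer queries.
    Set<String> candidateQueries(Set<String> docTerms) {
        Set<String> candidates = new HashSet<>();
        for (Map.Entry<String, Set<String>> entry : extractedTerms.entrySet()) {
            for (String term : entry.getValue()) {
                if (docTerms.contains(term)) {
                    candidates.add(entry.getKey());
                    break;
                }
            }
        }
        return candidates;
    }
}
```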
Closes #12664
Closes #13646
Adds a task manager class and enables all activities to register with the task manager. Currently, the immutable Transport*Activity class represents the activity itself, shared across all requests. This PR adds an additional structure, Task, that keeps track of currently running requests and can be used to communicate with these requests using TransportTaskAction.
Related to #15117
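A toy sketch of the split described above (names are illustrative, not the actual ES task API): the activity stays immutable and shared, while each in-flight request gets its own Task registered with the manager.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: per-request Task objects tracked by a manager.
class ToyTaskManager {
    record Task(long id, String action) {}

    private final AtomicLong taskIdGenerator = new AtomicLong();
    private final Map<Long, Task> runningTasks = new ConcurrentHashMap<>();

    Task register(String action) {
        Task task = new Task(taskIdGenerator.incrementAndGet(), action);
        runningTasks.put(task.id(), task);
        return task;
    }

    void unregister(Task task) {
        runningTasks.remove(task.id());
    }

    // What a "list running tasks" API would read from.
    Map<Long, Task> getRunningTasks() {
        return Map.copyOf(runningTasks);
    }
}
```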
When waiting indefinitely for a new cluster state in a test,
TestClusterService#add will throw a NullPointerException if the timeout
is null. Instead, TestClusterService#add should guard against a null
timeout and not even attempt to add a notification for the timeout
expiring. Note that the usage of null is the agreed upon contract for
specifying an indefinite wait from ClusterStateObserver.
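A minimal sketch of the guard, with hypothetical names (the real classes are TestClusterService and ClusterStateObserver): a null timeout means wait indefinitely, so no timeout task is scheduled at all.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: schedule a timeout notification only when a timeout exists.
class ToyClusterStateObserver {
    interface Listener {
        void onNewClusterState();
        void onTimeout();
    }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    void add(Listener listener, Long timeoutMillis) {
        listeners.add(listener);
        // Guard: null is the agreed-upon contract for an indefinite wait.
        if (timeoutMillis != null) {
            scheduler.schedule(listener::onTimeout, timeoutMillis, TimeUnit.MILLISECONDS);
        }
    }
}
```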
DedicatedClusterSnapshotRestoreIT#testRestoreIndexWithMissingShards took ~1.5 min to finish
due to timeouts that are applied if not all shards are allocated. Now that the index that has
unallocated shards is not refreshed, the test is more reasonable and runs in 15 sec.
Today we throttle recoveries only for incoming recoveries. Nodes that have a lot
of primaries can get overloaded due to too many recoveries. To keep that at bay
we limit the number of threads that are sending files to the target.
The right solution here is to also throttle the outgoing recoveries, which are today unbounded on
the master, and to not start the recovery until we have enough resources on both source and target nodes.
The concurrency aspects of the recovery source also added a lot of complexity and additional threadpools
that are hard to configure. This commit removes the notion of concurrent streams completely and sends files
in the thread that drives the recovery, simplifying the recovery code considerably.
Outgoing recoveries are now throttled on the master via an allocation decider.
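The throttling idea, reduced to a toy (the actual change bounds outgoing recoveries on the master through an allocation decider; the semaphore here just illustrates capping concurrency at the source):

```java
import java.util.concurrent.Semaphore;

// Illustrative only: cap the number of concurrent outgoing recoveries.
class ToyOutgoingRecoveryThrottle {
    private final Semaphore slots;

    ToyOutgoingRecoveryThrottle(int maxConcurrentOutgoing) {
        this.slots = new Semaphore(maxConcurrentOutgoing);
    }

    boolean tryStartRecovery() {
        return slots.tryAcquire(); // false: the source node is at capacity
    }

    void recoveryFinished() {
        slots.release();
    }
}
```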
Today the logic to async-commit the translog is in every translog instance
itself. While the setting is a per-index setting, we manage it per shard. This
pollutes the translog code; it can more easily be managed in IndexService.
Today we have two variants of translogs for indexing. We only recommend the buffered
one, which also has a 20% advantage in indexing speed. This commit removes the option and defaults
to the buffered case. It also hard-wires the translog buffer to 8kb instead of 64kb. We used to
adjust that buffer based on whether the shard is active or not; this code has also been removed and
instead we just keep an 8kb buffer around.
This commit removes `index.translog.flush_threshold_ops` and `index.translog.disable_flush`
in favor of `index.translog.flush_threshold_size`. The number of operations is meaningless by itself and
can easily be turned into a size value with knowledge of the data. Disabling the flush is only useful in
tests, where we can set the size to a very high value instead. If users really need to do this they can
also apply a very high value like `1PB`.
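For example, assuming the Settings.builder() API, a test that previously disabled flushing can pass an effectively-infinite threshold instead (the values here are arbitrary):

```java
import org.elasticsearch.common.settings.Settings;

public class TranslogFlushSettingExample {
    public static void main(String[] args) {
        // Normal usage: flush once the translog reaches the given size.
        Settings normal = Settings.builder()
            .put("index.translog.flush_threshold_size", "512mb")
            .build();

        // Stand-in for the removed index.translog.disable_flush switch:
        // a threshold so large it is never reached.
        Settings effectivelyDisabled = Settings.builder()
            .put("index.translog.flush_threshold_size", "1pb")
            .build();

        System.out.println(normal.get("index.translog.flush_threshold_size"));
        System.out.println(effectivelyDisabled.get("index.translog.flush_threshold_size"));
    }
}
```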