The number of currently pending tasks is useful to detect an overloaded master. This commit adds it to the cluster health API. The complete list can be retrieved from the dedicated pending tasks API.
It also adds rest tests for the cluster health variants.
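A minimal sketch of reading the new count from the Java client, assuming the accessor is exposed on ClusterHealthResponse as `getNumberOfPendingTasks()`; the threshold below is arbitrary, not part of this commit:

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;

public class MasterLoadCheck {
    // Rough heuristic: flag a possibly overloaded master from the new count.
    public static boolean masterLooksOverloaded(Client client) {
        ClusterHealthResponse health = client.admin().cluster().prepareHealth().get();
        // Only the count is exposed here; the complete list still comes from
        // the dedicated pending cluster tasks API.
        return health.getNumberOfPendingTasks() > 100; // arbitrary threshold
    }
}
```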
Closes #9877
Today we fail the shard if we need to promote a replica to a primary on shadow replicas
on a shared filesystem. This commit instead allows the promotion by re-initializing the
shard on the master, preventing reallocation of all replicas.
We try to lock all shards when an index is deleted, but this likely fails
since shards are still active. To ensure that shards
that used to be allocated on that node get cleaned up as well, we would have
to retry or block on the delete until we get the locks. This is not desirable
since the delete happens on the cluster state processing thread. Instead of blocking,
this commit schedules a pending delete for the index, just as we do when we can't delete shards.
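A rough sketch of the resulting control flow; the names below are hypothetical and stand in for the real IndicesService logic:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch, not the actual IndicesService code.
public class PendingDeleteSketch {
    private final Queue<String> pendingDeletes = new ConcurrentLinkedQueue<>();

    public void deleteIndex(String index) {
        if (tryLockAllShards(index)) {
            deleteIndexStore(index); // we hold every shard lock, safe to delete now
        } else {
            // Shards are still active; never block the cluster state thread.
            // Record the delete and retry later, once the locks can be acquired.
            pendingDeletes.add(index);
        }
    }

    // Stubs standing in for the real lock acquisition and file deletion.
    private boolean tryLockAllShards(String index) { return false; }
    private void deleteIndexStore(String index) { /* delete the index files */ }
}
```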
Otherwise the fs repository metadata that points to a non-existing location is stored in the old index cluster state, which causes warnings during OldIndexBackwardsCompatibilityTests.
There are two implications to this change.
First, the percolator now uses _uid internally, extracting the id portion
when needed. Second, sorting on _id is no longer possible, since you
can no longer index _id. However, _uid can still be used to sort, and
is better anyway, as indexing _id just to make it available to
fielddata for sorting is wasteful.
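For example, sorting by `_uid` from the Java client (the index name here is made up):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.sort.SortOrder;

public class UidSortExample {
    public static SearchResponse sortedByUid(Client client) {
        // Sorting on _id is gone; _uid is indexed and sortable instead.
        return client.prepareSearch("my-index") // illustrative index name
                .addSort("_uid", SortOrder.ASC)
                .get();
    }
}
```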
see #8143
closes #9842
Today if we delete files from the index directory we never acquire the
write lock. Yet, for safety reasons we should get the write lock before
we modify or delete any files. Where we can, we should leave the deletion
to the index writer and delete ourselves only the files that we must.
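A minimal sketch of the pattern with Lucene's directory API, assuming a Lucene version that has `Directory.obtainLock` (older versions use `makeLock`/`obtain`):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.Lock;

public class SafeDelete {
    // Hold the IndexWriter's write lock for the entire deletion, so no
    // writer can be modifying the same files concurrently.
    public static void deleteFileSafely(Directory dir, String file) throws IOException {
        try (Lock writeLock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
            dir.deleteFile(file);
        }
    }
}
```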
Refactor how settings filters are handled. Instead of specifying settings filters as a filtering class, settings filters are now specified as a list of settings that need to be filtered out. Regex syntax is supported. This is a breaking change and will require a small change in plugins that are using settingsFilters. This change is needed in order to simplify the cluster state diff implementation.
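A self-contained sketch of the pattern-based filtering idea; the real SettingsFilter differs in API and supported syntax:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative only: filter settings keys by regex patterns.
public class SettingsFilterSketch {
    public static Map<String, String> filter(Map<String, String> settings, String... patterns) {
        Map<String, String> filtered = new HashMap<>(settings);
        for (String pattern : patterns) {
            Pattern p = Pattern.compile(pattern);
            filtered.keySet().removeIf(key -> p.matcher(key).matches());
        }
        return filtered;
    }
}

// e.g. filter(settings, "cloud\\..*\\.secret.*") would hide any matching keys.
```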
Contributes to #6295
When indexing a document with a type that is not in the mappings fails,
for example because "dynamic": "strict" is set but the document contains a new field,
the type is still created on the node that executed the indexing request.
However, the change was never added to the cluster state.
This commit makes sure mapping updates are always added to the cluster state
even if indexing of a document fails.
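An illustrative reproduction of the failure mode; the index, type, and field names are made up:

```java
import org.elasticsearch.client.Client;

public class StrictMappingRepro {
    public static void run(Client client) {
        // A _default_ mapping with dynamic: strict applies to any new type.
        client.admin().indices().prepareCreate("idx")
                .addMapping("_default_", "{\"_default_\":{\"dynamic\":\"strict\"}}")
                .get();

        // "new_type" is not in the mappings yet; indexing fails because
        // "field1" is unknown under dynamic: strict. Before this fix the
        // type could still be created locally on the indexing node without
        // the mapping update ever being added to the cluster state.
        client.prepareIndex("idx", "new_type").setSource("field1", "value").get();
    }
}
```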
closes #8692
relates to #8650
Now that the global cluster is gone, we shouldn't need to ignore
thread leaks across tests. We unfortunately still need suite-level
scope, since most tests are using suite-scope clusters (although
test-scope clusters should really switch back to test-scope thread
leak checks).
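For reference, suite-level leak checking comes from the randomizedtesting annotation, roughly like this (the base class is assumed to be the integration test base of the time):

```java
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;
import org.elasticsearch.test.ElasticsearchIntegrationTest;

// Suite scope: leaked threads are only checked once the whole suite is done,
// which matches tests that share a suite-scope cluster.
@ThreadLeakScope(ThreadLeakScope.Scope.SUITE)
public class MyIntegrationTest extends ElasticsearchIntegrationTest {
}
```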
closes #9843
These help a lot when refactoring, upgrading lucene, etc, and
can prevent code duplication (as you get a compile error for outdated stuff).
Closes #9832.
Double negatives are confusing, but a triple negative (1 no, 2 non, 3 null)? It takes five minutes to understand this little sentence. Cleaned that up a bit.
Closes #9789
This change removes the deprecated script parameter names ('file', 'id', and 'scriptField').
It also removes the ability to load file scripts using the 'script' parameter. File scripts should be loaded using the 'script_file' parameter only.
If an elected master node goes into a long GC, the other nodes' fault detection will notice, a new master election is started, and eventually a new master node is elected. When the previous master node comes out of the long GC it can still have pending tasks, which can result in new cluster state updates. Nodes that are still in the nodes list of this previously elected master node can get these cluster state updates. This commit makes sure that these dated cluster states are not accepted by those nodes.
This issue can temporarily lead to non-elected master nodes switching back to the previously elected master node. The newly elected master node also gets the same dated cluster state, but rejects it and tells the previously elected master node to step down and rejoin. Because the newly elected master is the only master node, the previously elected master node will follow it. Any nodes that follow the previously elected master node (by accident) will also rejoin and follow the newly elected master node, because their master fault detection will fail. So, all in all, this isn't a severe problem, because it eventually fixes itself.
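A rough sketch of the rejection rule; the names are simplified and this is not the actual discovery code:

```java
import org.elasticsearch.cluster.ClusterState;

// Simplified sketch, not the actual discovery implementation: reject a
// cluster state that comes from a node we no longer consider master.
public class DatedClusterStateGuard {
    public static void validateIncomingState(ClusterState incoming, ClusterState current) {
        String currentMaster = current.nodes().masterNodeId();
        String sender = incoming.nodes().masterNodeId();
        if (currentMaster != null && !currentMaster.equals(sender)) {
            // the sender is a stale master still flushing its pending tasks
            throw new IllegalStateException("rejecting cluster state from stale master [" + sender + "]");
        }
    }
}
```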
Closes #9632
In #6636 we switched to a default FileSwitchDirectory that made
.listAll run twice on the same underlying file system directory.
This fixes listAll to do a single directory listing again.
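The idea in simplified form (not the actual Lucene/ES code): list the underlying directory once instead of once per sub-directory of the FileSwitchDirectory.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.store.Directory;

// Simplified sketch only: one listing of the shared underlying directory,
// deduplicated, rather than listing the same path twice.
public class ListAllSketch {
    public static String[] listAll(Directory underlying) throws IOException {
        Set<String> files = new HashSet<>(Arrays.asList(underlying.listAll())); // single listing
        return files.toArray(new String[files.size()]);
    }
}
```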
Closes #9666
Currently many meta field mappers do not take index settings in their
simple constructor that DocumentMapper uses, and instead pass null or
empty settings to the parent abstract mapper. This change fixes them to
pass through index settings, and adds an assertion in AbstractFieldMapper
that settings are not null.
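The guard, sketched in simplified form (the real AbstractFieldMapper constructor takes many more arguments):

```java
import org.elasticsearch.common.settings.Settings;

// Simplified sketch; only the null guard is the point here.
public abstract class AbstractFieldMapperSketch {
    protected final Settings indexSettings;

    protected AbstractFieldMapperSketch(Settings indexSettings) {
        assert indexSettings != null : "meta field mappers must pass index settings through";
        this.indexSettings = indexSettings;
    }
}
```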
closes #9780
This was previously attempted in #8854. I revived that branch and did
some performance testing as was suggested in the comments there.
I fixed all the errors, mostly just the rest tests, which
needed to have http enabled on the node settings (the global cluster
previously had this always enabled). I also addressed the comments from
that issue.
My performance tests involved running the entire test suite on my
desktop which has 6 cores, 16GB of ram, and nothing else was being
run on the box at the time. I ran each set of settings 3 times and
took the average time.
| mode | master | patch | diff |
| ------- | ------ | ----- | ---- |
| local | 409s | 417s | +2% |
| network | 368s | 380s | +3% |
This increase in average time is clearly a worthwhile price to pay
for test isolation. One caveat: the way I fixed the rest tests
is still to have one cluster for the entire suite, so all the rest
tests can still potentially affect each other, but this is an
issue for another day.
There were some oddities that I noticed while running these tests
that I would like to point out, as they probably deserve some
investigation (but orthogonal to this PR):
* The total test run times are highly variable (more than a minute between the min and max)
* Running in network mode is on average actually *faster* than local mode. How is this possible!?
Files.exists(f) && Files.isDirectory(f) -> Files.isDirectory(f)
if (Files.exists(f)) Files.delete(f) -> Files.deleteIfExists(f)
if (!Files.exists(f)) Files.createDirectories(f) -> Files.createDirectories(f)
In a few places where successive i/o ops are done against the same file, convert
to Files.readAttributes().
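For example, instead of three separate filesystem calls against the same path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class ReadAttributesExample {
    // Before: Files.isDirectory(f), Files.size(f), Files.getLastModifiedTime(f)
    // hit the filesystem three times. One readAttributes call does one stat.
    public static void describe(Path f) throws IOException {
        BasicFileAttributes attrs = Files.readAttributes(f, BasicFileAttributes.class);
        System.out.printf("dir=%b size=%d mtime=%s%n",
                attrs.isDirectory(), attrs.size(), attrs.lastModifiedTime());
    }
}
```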
Closes #9807.
Today locking all shards only locks the shards that are present on
the node or that still have a shard directory. This can lead to odd
behavior if another shard that doesn't exist yet is allocated while
all shards are supposed to be locked.
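A rough sketch of the intended behavior; the names are hypothetical and the real logic takes the shard count from the index metadata:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: lock every possible shard id of the index, taken
// from the index metadata, not just the shards present on this node.
public class LockAllShardsSketch {
    public List<ShardLock> lockAllShards(String index, int numberOfShards) {
        List<ShardLock> locks = new ArrayList<>();
        for (int shardId = 0; shardId < numberOfShards; shardId++) {
            // Locking by id also covers shards with no directory yet, so a
            // shard allocated mid-operation cannot sneak in.
            locks.add(lockShard(index, shardId));
        }
        return locks;
    }

    // Stub standing in for the real lock acquisition.
    private ShardLock lockShard(String index, int shardId) { return new ShardLock(); }
    static class ShardLock {}
}
```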