Add a test that verifies that even when all replicas are corrupted on all available nodes, and listing of shard stores failed, they still get allocated and properly recovered from the primary shard.
Some requests in the SyncedFlushService were still blocking on network
calls, which made calling this service error-prone if done on a network
thread. This commit makes this service fully async, based on ActionListener.
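A minimal sketch of the listener-based pattern this refers to (illustrative names and steps, not the actual SyncedFlushService code):

```java
// Instead of parking the calling thread until a network response arrives,
// each step hands the next step a listener that fires on completion.
interface ActionListener<T> {
    void onResponse(T response);
    void onFailure(Exception e);
}

final class SyncedFlushSketch {
    // Step 1 completes asynchronously, then chains into step 2 via the listener.
    void attemptSyncedFlush(String shardId, ActionListener<String> result) {
        getInFlightOps(shardId, new ActionListener<Integer>() {
            @Override public void onResponse(Integer inFlight) {
                if (inFlight == 0) {
                    sendSyncRequest(shardId, result); // step 2, also async
                } else {
                    result.onFailure(new IllegalStateException("ops in flight"));
                }
            }
            @Override public void onFailure(Exception e) { result.onFailure(e); }
        });
    }

    // Stand-ins for transport calls; a real implementation completes these
    // from the network layer's response handler, never by blocking.
    void getInFlightOps(String shardId, ActionListener<Integer> l) { l.onResponse(0); }
    void sendSyncRequest(String shardId, ActionListener<String> l) { l.onResponse("synced " + shardId); }
}
```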
This change simplifies the users of the mapper service so that they no
longer have access to multiple fields for a single name. While
FieldMappersLookup still stores and gives access to multiple
fields, the current users of SmartNameFieldMappers all already
assumed a single field. The arbitrary selection of that field
when multiple exist is now isolated to the mapper service.
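For illustration, a sketch of what isolating that selection can look like (hypothetical names and types, not the real MapperService API):

```java
import java.util.List;
import java.util.Map;

// The lookup still stores several mappers per field name, but callers only
// ever receive one, and the arbitrary pick lives in exactly one place.
final class MapperServiceSketch {
    static final class FieldMapper {
        final String name;
        FieldMapper(String name) { this.name = name; }
    }

    private final Map<String, List<FieldMapper>> fieldMappersLookup;

    MapperServiceSketch(Map<String, List<FieldMapper>> fieldMappersLookup) {
        this.fieldMappersLookup = fieldMappersLookup;
    }

    /** The single mapper for a name, or null; multiple matches collapse to the first. */
    FieldMapper smartNameFieldMapper(String name) {
        List<FieldMapper> mappers = fieldMappersLookup.get(name);
        if (mappers == null || mappers.isEmpty()) {
            return null;
        }
        return mappers.get(0); // the arbitrary selection, isolated here
    }
}
```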
When testing, each JVM gets its own tmpdir set, so it may not exist
at all. A Lucene test rule ensures it's created, but some tests (I
am looking at you, REST tests) do a bunch of file stuff in static {},
in that case because it's a parameterized test. And if you try to
extend it, it will fail if the security manager is disabled...
Currently we ensure(java.io.tmpdir) very early when tests are running under
the security manager, but otherwise we don't, and it won't happen until the
test rule fires. So just do it early, always.
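A minimal sketch of doing it early, e.g. from a static initializer that runs before any test code touches the filesystem (the surrounding class name is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

final class BootstrapForTesting {
    static {
        try {
            // Create the per-JVM tmpdir eagerly; a no-op if it already exists.
            Files.createDirectories(Paths.get(System.getProperty("java.io.tmpdir")));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}
```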
This behavior has recently been changed to throw an IAE if
the translog we try to read from is already outdated. This is not
the expected behavior, and this commit adds back the `old` way of returning
`null` instead. The InternalEngine implementation will then go and ask the
Lucene index for the document instead.
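A sketch of the restored contract (simplified, hypothetical names): reading from an outdated generation returns `null`, and the caller falls back to Lucene:

```java
final class TranslogSketch {
    private final long minGeneration = 5; // oldest generation still on disk

    /** Returns the operation, or null if the generation was already trimmed. */
    String read(long generation, long position) {
        if (generation < minGeneration) {
            return null; // outdated: caller (e.g. the engine) reads from Lucene instead
        }
        return readFromFile(generation, position);
    }

    private String readFromFile(long generation, long position) {
        return "op@" + generation + ":" + position; // stand-in for the file read
    }
}
```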
In some cases, due to checking `rarely()`, the `indexRandom()` method
can potentially flush, which creates flush requests that miss a certain
header in this test and cause the test to fail.
In addition, unused configuration code for this test has been removed.
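To illustrate the failure mode, a sketch of a `rarely()`-gated flush inside a random-indexing helper (hypothetical code, not the real `indexRandom()`):

```java
import java.util.Random;

final class IndexRandomSketch {
    private final Random random = new Random();

    boolean rarely() {
        return random.nextInt(100) < 10; // fires only occasionally
    }

    void indexRandom(Runnable indexOp) {
        indexOp.run();
        if (rarely()) {
            flush(); // framework-generated request that may lack the test's custom header
        }
    }

    void flush() { /* issues a flush request */ }
}
```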
We used to double-write the translog operation, which is not needed except
for recovery. This commit cuts over to a big-array based temporary serialization
and removes the crazy double writing.
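A sketch of the single-pass idea (simplified; the real code uses Elasticsearch's big-array backed stream, while this uses `ByteArrayOutputStream` to stay self-contained):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

final class TranslogWriteSketch {
    /** Serialize the operation once; reuse the same bytes for write and checksum. */
    byte[] encodeOnce(String operation) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            out.writeUTF(operation); // single serialization pass into the buffer
        }
        byte[] bytes = buffer.toByteArray();
        CRC32 crc = new CRC32();
        crc.update(bytes); // checksum computed over the same bytes, no re-encoding
        return bytes;
    }
}
```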
Now that mapping updates are synchronous, it is not necessary to send mappings
to the master node during the recovery process anymore: they will already be on
the master node since we ensure mappings are on the master node before indexing.
Mappings conflicts should not be ignored. If I read the history correctly, this
option was added when a mapping update to an existing field was considered a
conflict, even if the new mapping was exactly the same. Now that mapping updates
are smart enough to detect conflicting options, we don't need an option to
ignore conflicts.
Whenever a query parser (or any other component) issues another
request as part of a request, the headers and the context have to
be supplied as well.
In order to do this, the `SearchContext` has to have those headers
available, which in turn means the shard-level request needs to
copy those from the original `SearchRequest`.
This commit introduces two new interfaces to supply the needed methods
to work with context and headers.
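A sketch of what such interfaces can look like (the names and signatures here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// One interface exposes an arbitrary per-request context, the other the
// request headers, so inner requests can copy both from the request that
// triggered them.
interface HasContext {
    <V> V getFromContext(String key);
    void putInContext(String key, Object value);
}

interface HasHeaders {
    String getHeader(String key);
    void putHeader(String key, String value);
}

// A shard-level request copies headers and context from the original
// SearchRequest so that components issuing follow-up requests can propagate them.
final class ShardSearchRequestSketch implements HasContext, HasHeaders {
    private final Map<String, Object> context = new HashMap<>();
    private final Map<String, String> headers = new HashMap<>();

    void copyHeadersFrom(HasHeaders original, Iterable<String> headerNames) {
        for (String name : headerNames) {
            putHeader(name, original.getHeader(name));
        }
    }

    @SuppressWarnings("unchecked")
    public <V> V getFromContext(String key) { return (V) context.get(key); }
    public void putInContext(String key, Object value) { context.put(key, value); }
    public String getHeader(String key) { return headers.get(key); }
    public void putHeader(String key, String value) { headers.put(key, value); }
}
```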
Closes #10979