Introduced the AcknowledgedRequest.DEFAULT_ACK_TIMEOUT constant.
Also increased the default publish state timeout to 30 seconds (from 5 seconds) and introduced a constant for it.
Removed misleading default values coming from the REST layer.
Removed (in a backwards-compatible manner) timeout support in put/delete index template, as the timeout parameter was ignored.
Closes #4395
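A minimal sketch of the shape of this change, with the class body simplified down to the constant and the default it feeds (only the constant name above comes from the change; the rest is illustrative):

    import org.elasticsearch.common.unit.TimeValue;

    // Simplified sketch; the real request class has more state and hierarchy.
    public abstract class AcknowledgedRequest<T extends AcknowledgedRequest<T>> {

        public static final TimeValue DEFAULT_ACK_TIMEOUT = TimeValue.timeValueSeconds(30);

        protected TimeValue timeout = DEFAULT_ACK_TIMEOUT;

        @SuppressWarnings("unchecked")
        public final T timeout(TimeValue timeout) {
            this.timeout = timeout;
            return (T) this;
        }
    }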
Also made the calls to PercolatorQueriesRegistry#enableRealTimePercolator and #disableRealTimePercolator synchronized, so that for the same shard the RealTimePercolatorOperationListener can't be registered twice.
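A hedged sketch of that synchronization: both toggles lock on the registry instance and check a flag, so two threads cannot race and register the listener twice for the same shard (the helper methods are hypothetical and the class body is simplified):

    // Simplified sketch; registerListener/unregisterListener are hypothetical helpers.
    public class PercolatorQueriesRegistry {

        private boolean realTimePercolatorEnabled = false;

        public synchronized void enableRealTimePercolator() {
            if (!realTimePercolatorEnabled) {
                registerListener();   // register the RealTimePercolatorOperationListener once
                realTimePercolatorEnabled = true;
            }
        }

        public synchronized void disableRealTimePercolator() {
            if (realTimePercolatorEnabled) {
                unregisterListener(); // remove the listener again
                realTimePercolatorEnabled = false;
            }
        }

        private void registerListener() { /* hypothetical */ }

        private void unregisterListener() { /* hypothetical */ }
    }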
When we bulk mapping changes, we need to use the same index metadata builder across the tasks, otherwise we might remove mappings erroneously.
Also, when we check whether we can use a higher-order mapping, we need to verify that it is for the same mapping type.
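A rough sketch of the shape of the fix, with hypothetical task and builder names (this is not the actual cluster-state API): every bulked task mutates the same builder, so building the metadata for a later task can no longer silently drop a mapping added by an earlier one.

    // Hypothetical sketch; IndexMetaDataBuilder, MappingTask and the helpers are illustrative.
    IndexMetaDataBuilder builder = indexMetaDataBuilderFor(index); // one builder for the whole bulk
    for (MappingTask task : tasksForIndex(index)) {
        builder.putMapping(task.type(), task.source());            // accumulate, do not rebuild
    }
    applyToClusterState(builder.build());                          // build once at the end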
To be sure that memory-mapped Lucene directories are working,
one can configure the kernel limit on how many memory-mapped areas
a process may have. This change ensures, for the Debian and RedHat init scripts
as well as the systemd startup, that this setting is set high enough.
Closes #4397
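For illustration, the kernel knob involved is the vm.max_map_count sysctl; an init script can raise it along these lines (the variable name and the value 262144 are assumptions, not taken from the change itself):

    # Assumed shape of the init-script snippet; the value is illustrative.
    MAX_MAP_COUNT=262144
    if [ -n "$MAX_MAP_COUNT" ]; then
        # raise the limit on memory-mapped areas before starting the JVM
        sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
    fi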
When sending a request, mainly to multiple nodes, if we already have the "body" of the request in bytes, we can share it instead of copying it over to a new buffer. This helps a lot when sending a relatively large body to multiple nodes, since the same body buffer is used across all nodes.
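A hypothetical sketch of the idea (the serialize/send helpers and the Node type are illustrative): encode the body exactly once and hand the same buffer to every per-node send.

    // Hypothetical sketch; serialize(..) and sendRequest(..) stand in for the transport layer.
    byte[] sharedBody = serialize(request);     // serialize once
    for (Node node : nodes) {
        sendRequest(node, action, sharedBody);  // reuse the same buffer, no per-node copy
    }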
When the search preference is set to `_only_node`, but that node is not a
data node (or does not exist), we return a search exception, which indicates
that this is actually a server problem.
However, specifying a non-existent node id is a client problem
and should return a more useful error message than
{"error":"SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed]","status":503}
The explain output for function_score queries with score_mode=max or
score_mode=min was incorrect, returning the value of the last
function instead. This change fixes that.
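Conceptually, the explanation has to follow the same function whose score gets selected. A minimal sketch for score_mode=max (illustrative, not the actual implementation):

    // Keep the explanation of the function that actually produced the maximum
    // score, instead of whichever function happened to run last.
    Explanation best = null;
    for (Explanation functionExplanation : functionExplanations) {
        if (best == null || functionExplanation.getValue() > best.getValue()) {
            best = functionExplanation;
        }
    }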
Currently we sometimes see test failures because not all replicas are
`searchable`, which means they are not started yet or still recovering. The usual
situation is that two nodes should have the same cluster state, but the one that acts as
the search target has not yet processed that cluster state. The requester sees the
shard as started, but it is not yet marked as such on the target node. For now, #ensureSearchable()
just delegates to #ensureYellow() to make sure the cluster is not red. In the future, if we have
the possibility to recover from situations like this in the search logic, we can easily test
this by making the impl a no-op. Note: this problem only occurs with a low number of docs
and really quick indexing, such that the first requests are executed before the shards are
fully `started`.
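In other words, the helper currently reduces to a health check; a sketch of the delegation described above (simplified):

    // "searchable" is approximated by "not red", i.e. at least yellow health.
    protected void ensureSearchable(String... indices) {
        ensureYellow(indices);
    }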
The postings highlighter now uses a searcher that only encapsulates the view of the segment that the document being highlighted is in;
this should be better than using the top-level engine searcher.
Closes #4385
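A hedged sketch of building such a segment-scoped searcher with Lucene 4.x-era APIs (variable names are illustrative):

    // Wrap only the leaf (segment) that contains the document being highlighted,
    // instead of going through the top-level reader.
    List<AtomicReaderContext> leaves = topLevelReader.leaves();
    AtomicReaderContext leaf = leaves.get(ReaderUtil.subIndex(docId, leaves));
    IndexSearcher segmentSearcher = new IndexSearcher(leaf.reader());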
Added functionality to call cluster().transportClient() in tests in order
to get an arbitrary TransportClient object back, regardless of how the
transport client ratio for returning normal clients is configured.
Also made sure that if the normal client is already a transport client
(or a node client), we do not create another one.
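Usage in a test would look something like this (the assertion is illustrative):

    // Always yields a TransportClient, regardless of the configured transport
    // client ratio; if the current client already is one, it is reused.
    TransportClient transportClient = cluster().transportClient();
    assertThat(transportClient, notNullValue());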
When searching with query_string queries inside a bool query, the specified _name was randomly missing from the results due to caching.
Closes #4361.
Closes #4371.
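For context, a named query of this shape via the Java API (the field values are made up, and it is assumed here that the builders expose queryName for setting _name):

    // The name set on the inner query_string should show up on every matching
    // hit, even when the parsed query is served from the cache.
    SearchResponse response = client.prepareSearch("my_index")
            .setQuery(QueryBuilders.boolQuery()
                    .must(QueryBuilders.queryString("quick brown").queryName("my_query")))
            .execute()
            .actionGet();
    // each hit is then expected to report "my_query" among its matched queries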
Instead of processing the whole bulk of mapping updates we have per index/node, we can apply only the last-ordered one (since the orders are incremented at the node/index level). This will improve the processing time for an index that has large numbers of mapping updates.
Closes #4373
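A sketch of the selection step with a hypothetical MappingTask type: group the bulked tasks by index and keep only the highest-order one.

    // Hypothetical sketch; MappingTask and its accessors are illustrative.
    Map<String, MappingTask> latestPerIndex = new HashMap<>();
    for (MappingTask task : tasks) {
        MappingTask current = latestPerIndex.get(task.index());
        if (current == null || task.order() > current.order()) {
            latestPerIndex.put(task.index(), task); // lower orders are subsumed
        }
    }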
If a phrase query was wrapped in a filtered query due to type filtering,
slop was not applied correctly. Also, if the default field required a
type filter, the filter was not applied.
Closes #4356
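For context, the kind of request affected, sketched with the Java client (names are placeholders): a phrase with slop going through query_string against a typed index, where the type filter wrapped around the parsed query must keep the slop intact.

    // Illustrative; "my_index", "my_type" and the query text are placeholders.
    SearchResponse response = client.prepareSearch("my_index")
            .setTypes("my_type")
            .setQuery(QueryBuilders.queryString("\"quick fox\"").phraseSlop(1))
            .execute()
            .actionGet();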