As part of #10136 we removed the transport action for broadcast deletes in case routing is required but not specified. The bulk API worked differently, though, and kept doing the broadcast delete internally in that case. This commit makes sure that delete items are marked as failed in such cases. The check has also been moved up in the code, together with the existing check for the update API, and we now make sure that the exception is the same as the one thrown for the single-document APIs (delete/update).
Note that the failure for the update API contained the wrong op type (the type of the document rather than "update"); that has been fixed and tested too.
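A minimal sketch of the idea, with made-up names rather than the actual bulk action code:
```
// For each bulk item, reject deletes (and updates) up front when the
// mapping requires routing but the request does not provide it.
class BulkRoutingCheck {
    static boolean resolveOrFail(DocRequest item, boolean routingRequired,
                                 java.util.List<String> failures) {
        if (routingRequired && item.routing == null) {
            // Same failure the single-document delete/update APIs throw.
            failures.add("routing is required for [" + item.id + "] but was missing");
            return false; // mark the item as failed instead of broadcasting
        }
        return true;
    }
}

class DocRequest {
    String id;
    String routing;
}
```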
Closes #16645
This commit disables the production limits checks on snapshot builds. This is, at a minimum, short-term relief for developers that do in fact bind to external network interfaces, and is possibly a long-term fix as well. The situation with using the JVM flag MaxFDLimit is far too complicated.
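A minimal sketch of gating enforcement on the build type; the names and the threshold here are illustrative, not the actual Elasticsearch bootstrap code:
```
class LimitCheck {
    static void checkFileDescriptorLimit(long maxFileDescriptors, boolean isSnapshotBuild) {
        final long required = 65536; // example threshold, not the real value
        if (maxFileDescriptors >= required) {
            return;
        }
        String message = "max file descriptors [" + maxFileDescriptors
            + "] is too low, increase to at least [" + required + "]";
        if (isSnapshotBuild) {
            System.err.println("warning: " + message); // log only on snapshot builds
        } else {
            throw new IllegalStateException(message);  // enforce on release builds
        }
    }
}
```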
Closes #16835
At some time in the distant past, Bootstrap could be used by those embedding Elasticsearch. However, this is no longer allowed (the class is now package private), so any errors (for example, log4j missing, or any other exception while initializing) should be exposed directly and cause Elasticsearch to fail to start. This change stops hiding logging initialization exceptions.
Removes all our logger wrappers except the wrapper for log4j 1.2. If you depend on Elasticsearch's jar in your application, you'll need to declare log4j 1.2 and/or some bridge to your favorite logger.
We did this to simplify our builds and code: no more commons-logging-like log implementation sniffing, no more optional dependency hacks in gradle.
We might one day want to use j.u.l instead of log4j. If we do, we can recover its wrapper by studying this commit. We didn't go directly to j.u.l in this commit because that is a bigger change: our logging configuration is based on log4j 1.2 and people are used to it, so that conversion would be a much more fraught breaking change.
It is possible to register multiple settings with complex matchers that could both match a given key. The behavior when this occurs can lead to issues and depends on the number of settings that have been registered: to identify the setting for a given key, we iterate over the values in a map until we find the first setting that matches, and the iteration order of a map should not be relied upon.
This commit checks complex settings when they are added; if their keys overlap, an IllegalArgumentException is now thrown.
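A minimal, self-contained sketch of the overlap check, using hypothetical prefix-based matchers rather than Elasticsearch's actual settings classes:
```
import java.util.ArrayList;
import java.util.List;

// Toy registry of prefix-matched ("complex") settings.
class ComplexSettingRegistry {
    private final List<String> prefixes = new ArrayList<>();

    void register(String prefix) {
        for (String existing : prefixes) {
            // Two prefix matchers overlap if one is a prefix of the other,
            // e.g. "index.routing." and "index.routing.allocation.".
            if (existing.startsWith(prefix) || prefix.startsWith(existing)) {
                throw new IllegalArgumentException(
                    "complex setting key [" + prefix + "] overlaps with [" + existing + "]");
            }
        }
        prefixes.add(prefix);
    }
}
```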
We should open up the node to the world when it's as ready as possible. At the moment we open up the transport service before the local node has been fully initialized. This causes bugs, as some data structures are not fully initialized yet. See for example #16723.
Sadly, we can't just start the TransportService last (as we do with the HTTP server) because the ClusterService needs to know the bound, published network address for the local DiscoveryNode. This address can only be determined by actually binding (people may use, for example, port 0). Instead we start the TransportService as late as possible but block any incoming requests until the node has completed initialization (see the sketch after the list below).
A couple of other cleanups during start time:
1) The gateway service now starts before the initial cluster join so we can simplify the logic to recover state if the local node has become master.
2) The discovery is started before the transport service accepts requests, but we only start the join process later using a dedicated method.
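A minimal sketch of the request gate using a plain CountDownLatch; Elasticsearch's actual mechanism on the TransportService is more involved:
```
import java.util.concurrent.CountDownLatch;

// Toy transport that binds early (so the publish address is known) but
// holds incoming requests until the node finishes initializing.
class GatedTransport {
    private final CountDownLatch initialized = new CountDownLatch(1);

    void handleIncomingRequest(Runnable request) throws InterruptedException {
        initialized.await();     // block until the node is fully initialized
        request.run();
    }

    void acceptIncomingRequests() {
        initialized.countDown(); // called once node initialization completes
    }
}
```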
Closes #16723
Closes #16746
This commit removes the system property "es.useLinkedTransferQueue" that
defaulted to false and was used to control the queue implementation used
in a few places.
Closes #16786
This commit adds a check on startup for the G1 garbage collector while running on early versions of HotSpot version 25 (Java 8). This is to prevent potential data corruption issues that can occur on those versions.
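A minimal sketch of such a startup check; the update-level threshold (40) and the version parsing are illustrative assumptions, not the exact Elasticsearch logic:
```
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class G1Check {
    // Matches HotSpot 25 versions such as "25.25-b02", capturing the update level.
    private static final Pattern HOTSPOT_25 = Pattern.compile("25\\.(\\d+)-b\\d+.*");

    static void check() {
        boolean g1 = ManagementFactory.getGarbageCollectorMXBeans().stream()
            .map(GarbageCollectorMXBean::getName)
            .anyMatch(name -> name.startsWith("G1"));
        Matcher m = HOTSPOT_25.matcher(System.getProperty("java.vm.version", ""));
        if (g1 && m.matches() && Integer.parseInt(m.group(1)) < 40) {
            throw new RuntimeException("G1 GC on this early HotSpot 25 version is unsafe");
        }
    }
}
```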
Closes #16737
Java NIO has the notion of gathering writes: writes that gather data from multiple buffers into a single channel. Gathering writes have been enabled by default in our Netty layer, with the possibility to disable them using "es.netty.gathering". This flag was added in case having gathering writes on by default did not work out. We never published this ability, and sufficient time has passed to render the judgement that using gathering writes is okay, so this commit removes the flag.
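For reference, a small self-contained example of a gathering write with plain Java NIO (nothing Netty-specific), draining several buffers into one channel in a single call:
```
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class GatheringWriteExample {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("gather", ".bin");
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ByteBuffer header = ByteBuffer.wrap("header:".getBytes(StandardCharsets.UTF_8));
            ByteBuffer body = ByteBuffer.wrap("body".getBytes(StandardCharsets.UTF_8));
            // One gathering write drains both buffers, in order.
            long written = channel.write(new ByteBuffer[] { header, body });
            System.out.println("wrote " + written + " bytes to " + file);
        }
    }
}
```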
Closes #16774
Expose http address in cat/nodes and cat/nodeattrs APIs
We expose a lot of information like IP address and port, but never expose the HTTP address (ip:port) in the cat APIs. It's nice to have it there too, since otherwise JSON parsing is required to get this information. With this change it can be fetched with, for example, `GET _cat/nodes?h=name,http_address`.
Ids are now limited to 512 bytes: Elasticsearch rejects ids that are longer, to ensure a document always remains retrievable for clients that impose a maximum URI length.
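A minimal sketch of the length check, assuming the limit is measured in UTF-8 encoded bytes rather than characters:
```
import java.nio.charset.StandardCharsets;

class IdValidator {
    static final int MAX_ID_BYTES = 512;

    static void validateId(String id) {
        // Measure encoded bytes, not chars: multi-byte UTF-8 ids hit the limit sooner.
        int length = id.getBytes(StandardCharsets.UTF_8).length;
        if (length > MAX_ID_BYTES) {
            throw new IllegalArgumentException("id is too long, must be no longer than "
                + MAX_ID_BYTES + " bytes but was [" + length + "]");
        }
    }
}
```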
Closes #16034
Most elements in SearchSourceBuilder (e.g. aggs, queries) write their top-level ParseField name in toXContent(), while HighlightBuilder used to do it in its own toXContent() method. Moved this up to SearchSourceBuilder for consistency.
Today we might start a node even though some of the paths don't have the required permissions. This commit goes through all data directories, as well as index, shard and state directories, and ensures we have write access. To make this work across operating systems etc., we try to write a real file and remove it again in each of those directories.
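A minimal sketch of such a probe, assuming creating and deleting a temporary file in each directory is acceptable:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class WriteAccessProbe {
    // Prove write access by creating and removing a real file; checking
    // permission bits alone would miss ACLs and read-only mounts.
    static void assertWritable(Path dir) throws IOException {
        Path probe = Files.createTempFile(dir, ".es-write-check", ".tmp");
        Files.delete(probe);
    }
}
```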
This commit removes the es.max-open-files flag, as the same information can be obtained from the cluster nodes info API, and a warning is logged on startup anyway if the limit is set too low.
Closes #16757
This commit tries to 'guess' whether a user is starting a node in production by checking if any network host is configured. If that is the case, soft limits that are otherwise only logged, like the number of open file descriptors, are enforced.
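A minimal sketch of the heuristic; the threshold and names are illustrative stand-ins, not the exact Elasticsearch logic:
```
class ProductionGuess {
    // A configured (non-default) network host suggests production:
    // enforce the limit there, otherwise only warn.
    static void checkOpenFileLimit(String configuredNetworkHost, long maxOpenFiles) {
        final long required = 65536; // example threshold
        if (maxOpenFiles >= required) {
            return;
        }
        String message = "max open files [" + maxOpenFiles + "] is below [" + required + "]";
        if (configuredNetworkHost != null) {
            throw new IllegalStateException(message);
        }
        System.err.println("warning: " + message);
    }
}
```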
Closes #16727
Today we have the notion of a snapshot inside Version.java, which makes releasing complicated since Version.java must be changed to do a release. This commit removes all notions of snapshot from the code and allows switching between snapshot and release builds by specifying a system property on the build. For instance, running:
```
gradle run -Dbuild.snapshot=false
```
will build and package a release build, while the default always builds snapshots. Calls to the main REST action will still get the snapshot information rendered out with the response.
This was changed when adding the text field, in an attempt to clean up how analyzers are set. Unfortunately that change was not safe for the string field, given that it can also represent keywords.
Also renamed histogram.AbstractBuilder to AbstractHistogramBuilder, range.AbstractBuilder to AbstractRangeBuilder, and org.elasticsearch.search.aggregations.pipeline.having to org.elasticsearch.search.aggregations.pipeline.bucketselector.
The function score query parser now checks the type of token being parsed, which makes parsing stricter and allows throwing useful errors in case the JSON is malformed. It also makes the code more readable as to what gets parsed when.
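A minimal illustration of token-aware strict parsing, using Jackson's streaming API directly; the field handled here is made up for the example:
```
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.IOException;

class StrictTokenParsing {
    // Parse {"boost": <number>}, failing fast with a useful message on anything else.
    static float parseBoost(String json) throws IOException {
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            expect(parser.nextToken(), JsonToken.START_OBJECT);
            expect(parser.nextToken(), JsonToken.FIELD_NAME);
            if (!"boost".equals(parser.getCurrentName())) {
                throw new IllegalArgumentException(
                    "unsupported field [" + parser.getCurrentName() + "]");
            }
            JsonToken value = parser.nextToken();
            if (value == null || !value.isNumeric()) {
                throw new IllegalArgumentException("expected a number but got [" + value + "]");
            }
            return parser.getFloatValue();
        }
    }

    static void expect(JsonToken actual, JsonToken expected) {
        if (actual != expected) {
            throw new IllegalArgumentException("expected " + expected + " but got " + actual);
        }
    }
}
```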
Closes #16583
This commit updates the OrdinalsBuilder and GeoPoint FieldData loader to work with the new PREFIX_ENCODING introduced in Lucene 5.5.0. Backcompat is included to support legacy encoding types.
closes #16634
After #15776 got in, we don't need these copy constructors anymore. When we used to copy requests, it was to make sure that headers and context were copied from the parent requests (e.g. index/delete as part of update). This is not a problem anymore.
The current logic for doing recovery from a source to a target shard is tightly coupled with the underlying network pipes. This change decouples the two, making it easier to add unit tests for shard recovery that don't involve the node and network environment.
On top of that, RecoveryTarget is renamed to RecoveryTargetService, leaving space to rename RecoveryStatus to RecoveryTarget (and thus avoid the confusion we have today with RecoveryState). Correspondingly, RecoverySource is renamed to RecoverySourceService.
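A toy illustration of the decoupling idea: the recovery logic talks to an interface instead of the transport layer directly, so tests can stub the network side. All names here are hypothetical:
```
// What the source-side recovery logic needs from "the other side".
interface RecoveryTargetHandler {
    void receiveFileChunk(String fileName, byte[] chunk);
    void finalizeRecovery();
}

class SourceRecoveryLogic {
    private final RecoveryTargetHandler target;

    SourceRecoveryLogic(RecoveryTargetHandler target) {
        this.target = target; // in production backed by the network, in tests by a stub
    }

    void recoverFile(String fileName, byte[] fileData) {
        target.receiveFileChunk(fileName, fileData);
        target.finalizeRecovery();
    }
}
```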
Closes #16605
All we do is check the cancelled flag and stop the request at a few key
points.
Adds the cancellation cause to the status so any request that is cancelled
but doesn't die can be seen in the task list.
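A minimal sketch of this cooperative-cancellation pattern, with a made-up task type rather than Elasticsearch's actual task classes:
```
import java.util.concurrent.atomic.AtomicReference;

class CancellableWork {
    // Holds the cancellation reason; null means "not cancelled".
    private final AtomicReference<String> cancelReason = new AtomicReference<>();

    void cancel(String reason) {
        cancelReason.compareAndSet(null, reason);
    }

    // Called at a few key points in the request; cheap enough to sprinkle around.
    void checkCancelled() {
        String reason = cancelReason.get();
        if (reason != null) {
            throw new RuntimeException("task cancelled [" + reason + "]");
        }
    }

    // The reason is also exposed via the status, so a cancelled request
    // that hasn't died yet is visible in the task list.
    String status() {
        String reason = cancelReason.get();
        return reason == null ? "running" : "cancelled: " + reason;
    }
}
```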
The `keyword` field is intended to replace `not_analyzed` string fields. It is
indexed and has doc values by default, and doesn't support enabling term
vectors.
Although it doesn't support setting an analyzer for now, there are plans for it to support basic normalization in the future, such as case folding.
2.x has shown so far that running with the security manager is the way to go. This commit makes this non-optional. Users that need to pass their own rules can still do so via the system configuration for the security manager; they can even opt out of all security that way.
This commit moves IndicesRequestCache into o.e.indices and makes all APIs in this class package private. All references to SearchRequest, SearchContext etc. have been factored out, and relevant glue code has been added to IndicesService. The IndicesRequestCache is now a simple class without any hard dependencies on ThreadPool, SearchService or IndexShard, which allows adding unit tests.
This commit also removes the two settings `indices.requests.cache.clean_interval` and `indices.fielddata.cache.clean_interval` in favor of `indices.cache.clean_interval`, which cleans both caches.
Some backwards incompatible setting changes:
http.netty.http.blocking_server -> http.tcp.blocking_server
http.netty.host (removed, we just have http.host)
http.netty.bind_host (removed, we just have http.bind_host)
http.netty.publish_host (removed, we just have http.publish_host)
http.netty.tcp_no_delay -> http.tcp.no_delay
http.netty.tcp_keep_alive -> http.tcp.keep_alive
http.netty.reuse_address -> http.tcp.reuse_address
http.netty.tcp_send_buffer_size -> http.tcp.send_buffer_size
http.netty.tcp_receive_buffer_size -> http.tcp.receive_buffer_size
Closes #16531
This is a minor cleanup that detaches `IndicesRequestCache` and `IndicesQueryCache` from guice and moves them into `IndicesService`. It also decouples `IndexShard` and `IndexService` from these caches, which were unnecessary dependencies.
QueryBuilders today do all their heavy lifting in toQuery(), which can be too late for several operations. For instance, if we want to fetch geo shapes on the coordinating node, we need to do all this before we create the actual Lucene query, which happens on the shard itself. Also, optimizations for request caching need to be done on the query builder rather than on the query, which would then in turn need to be serialized again.
This commit adds the basic infrastructure for query rewriting and moves the heavy lifting into the rewrite method for the following queries:
* `WrapperQueryBuilder`
* `GeoShapeQueryBuilder`
* `TermsQueryBuilder`
* `TemplateQueryBuilder`
Other queries like `MoreLikeThisQueryBuilder` still need to be fixed / converted. The nice side effect of this is that queries like template queries will now also hit the request cache if their non-template equivalent has been cached before. In the future this will allow adding optimizations like rewriting time-based queries into primitives like `match_all_docs` or `match_no_docs` based on the current shard's bounds. This is especially appealing for indices that are read-only, i.e. never change.
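A minimal sketch of rewrite-until-fixpoint, with a made-up QueryBuilder interface rather than Elasticsearch's actual one:
```
// Toy interface: rewrite() returns a cheaper or more primitive builder,
// or `this` when no further rewriting is possible.
interface QueryBuilder {
    QueryBuilder rewrite();
}

class Rewriter {
    // Rewrite repeatedly until a fixpoint: a template query can first expand
    // to its concrete query, which may itself rewrite further.
    static QueryBuilder rewriteToFixpoint(QueryBuilder builder) {
        QueryBuilder current = builder;
        for (QueryBuilder rewritten = current.rewrite();
                rewritten != current;
                rewritten = current.rewrite()) {
            current = rewritten;
        }
        return current;
    }
}
```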
In testCanFetchIndexStatus the task check can occur before the indexing process has started, making the test fail. This commit adds an additional lock to make sure we check tasks only after at least one of them has been registered.
This commit removes bootstrap support for the Java Service Wrapper. The implementation has been moved to its own repository, where it is deprecated; it does not work with Elasticsearch 2.x and is untested and therefore unmaintained.
Closes #16580
This commit handles the scenario where a replication action fails on a replica shard: the primary shard attempts to fail the replica shard, but is itself notified of demotion by the master. In this scenario, the demoted primary shard must be failed, and the request then rerouted again to the new primary shard.
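A toy sketch of the decision, with entirely hypothetical names; the real logic lives in Elasticsearch's replication machinery:
```
// Outcome handler for a rejected attempt to fail a replica.
class PrimaryFailureHandler {
    void onFailReplicaRejected(Shard primary, boolean demotedByMaster) {
        if (demotedByMaster) {
            // We are no longer the primary: fail ourselves so the request
            // can be rerouted to the newly promoted primary.
            failShard(primary, "primary demoted while failing a replica");
            throw new RetryOnNewPrimaryException("reroute to new primary");
        }
    }

    void failShard(Shard shard, String reason) { /* notify the master */ }
}

class Shard {}

class RetryOnNewPrimaryException extends RuntimeException {
    RetryOnNewPrimaryException(String message) { super(message); }
}
```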
Closes #16415, closes #14252