This commit fixes an issue caused by the restore process deleting all legacy checksum files at the end of the restore. Instead, it now keeps the latest version of the checksum intact. The issue manifested itself as lost checksums for all legacy files restored into a post-1.3.0 cluster, which in turn caused unnecessary snapshotting of files that hadn't changed.
Fixes #8119
Now each error is reported in the bulk response rather than causing the entire bulk to fail.
Added a JUnit test, but the use of TransportClient means the error manifests differently than in a REST-based request: instead of a NullPointerException, the whole bulk request failed with a RoutingMissingException. Changed TransportBulkAction to catch this exception and treat it the same way as the existing logic for an ElasticsearchParseException: the affected bulk request items are flagged and reported individually rather than failing the whole bulk request.
Closes #8365
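Below is a self-contained sketch of the per-item error handling pattern described above; the class and variable names are illustrative stand-ins, not the actual TransportBulkAction types:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the real exception type.
class RoutingMissingException extends RuntimeException {
    RoutingMissingException(String msg) { super(msg); }
}

public class BulkErrorHandlingSketch {
    public static void main(String[] args) {
        String[] items = {"doc-1", "doc-missing-routing", "doc-3"};
        List<String> responses = new ArrayList<>();
        for (String item : items) {
            try {
                if (item.contains("missing-routing")) {
                    throw new RoutingMissingException("routing is required for " + item);
                }
                responses.add(item + ": indexed");
            } catch (RoutingMissingException e) {
                // Flag just this item; the remaining items still execute.
                responses.add(item + ": failed (" + e.getMessage() + ")");
            }
        }
        responses.forEach(System.out::println);
    }
}
```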
The test set `indices.recovery.concurrent_streams` when creating an index, but this is a node setting. Moved it to the node settings and added similar settings to speed up concurrent recoveries.
Also fixed a misleading log message in ShardRecoveryHandler when logging a remote corruption.
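For illustration, a minimal sketch of the node-vs-index settings distinction, assuming the 1.x `ImmutableSettings` builder API:

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class SettingsScopeSketch {
    public static void main(String[] args) {
        // indices.recovery.concurrent_streams is node-scoped: it must be
        // passed when the node starts, not on the create-index request.
        Settings nodeSettings = ImmutableSettings.settingsBuilder()
                .put("indices.recovery.concurrent_streams", 4)
                .build();

        // Index-scoped settings, by contrast, belong on index creation.
        Settings indexSettings = ImmutableSettings.settingsBuilder()
                .put("index.number_of_shards", 2)
                .build();

        System.out.println(nodeSettings.getAsMap());
        System.out.println(indexSettings.getAsMap());
    }
}
```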
Today it's possible that the data directory for a single shard is used by more than one
IndexShard->Store instance. While one shard is already closed but still has a concurrent recovery
running, a new shard may be created, and their engine files can conflict and data can potentially
be lost. We also remove a shard's data without checking whether there are still users of the files
or whether files are still open, which can cause pending writes / flushes or the delete operation
to fail. In the latter case the index might be treated as a dangling index and be brought
back to life at a later point in time.
This commit introduces a shard-level lock that prevents modifications to the shard data
while it's still in use. Locks are created per shard and maintained in NodeEnvironment.java.
In contrast to most Java concurrency primitives, these locks are not reentrant.
This commit also adds infrastructure that checks that all shard locks are released after tests.
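A self-contained sketch of a per-shard, non-reentrant lock registry in the spirit of what is described above (illustrative only, not the actual NodeEnvironment code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ShardLockRegistry {
    private final Map<String, Semaphore> locks = new ConcurrentHashMap<>();

    // Acquire the lock for a shard, waiting up to the given timeout.
    public boolean lock(String shardId, long timeoutMillis) throws InterruptedException {
        Semaphore s = locks.computeIfAbsent(shardId, k -> new Semaphore(1));
        return s.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public void unlock(String shardId) {
        Semaphore s = locks.get(shardId);
        if (s != null) {
            s.release();
        }
    }

    // Test helper: verify every shard lock has been released.
    public boolean allLocksReleased() {
        return locks.values().stream().allMatch(s -> s.availablePermits() == 1);
    }
}
```

Using a `Semaphore(1)` rather than a `ReentrantLock` makes the lock deliberately non-reentrant: a second acquire from the same thread blocks until the timeout expires.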
Don't eagerly cache parent type filters in the bitset cache, or nested object fields that are leaves.
Also let parent/child queries rely on a regular Filter rather than on FixedBitSetFilter.
Closes #8440
Fixed behaviour where two representations of the default index analyzer weren't treated as equivalent. Added a REST test to confirm the fix.
Closes #2716
In addition to `_source`, the following variables are available through
the `ctx` map: `_index`, `_type`, `_id`, `_version`, `_routing`,
`_parent`, `_timestamp`, `_ttl`.
Some of these fields are even more useful in the context of an
Update By Query; see #1607, #2230, #2231.
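For illustration, a hypothetical sketch of the `ctx` map an update script operates on; the variable names come from the list above, while the values and surrounding code are made up:

```java
import java.util.HashMap;
import java.util.Map;

public class CtxSketch {
    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("counter", 1);

        // The variables an update script can read from ctx.
        Map<String, Object> ctx = new HashMap<>();
        ctx.put("_source", source);
        ctx.put("_index", "my-index");
        ctx.put("_type", "my-type");
        ctx.put("_id", "1");
        ctx.put("_version", 2L);
        ctx.put("_routing", null);
        ctx.put("_parent", null);
        ctx.put("_timestamp", System.currentTimeMillis());
        ctx.put("_ttl", 60_000L);

        // A script like "ctx._source.counter += 1" boils down to:
        @SuppressWarnings("unchecked")
        Map<String, Object> src = (Map<String, Object>) ctx.get("_source");
        src.put("counter", (Integer) src.get("counter") + 1);
        System.out.println(ctx);
    }
}
```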
While this commit is primarily a fix for issue #8433, it adds more rigor to ShapeBuilder for parsing against the GeoJSON specification. Specifically, this adds LinearRing and LineString validity checks as defined in http://geojson.org/geojson-spec.html to ensure valid polygons are specified. The benefit of this fix is to provide a gate check at parse time to avoid any further processing if invalid GeoJSON is provided. More parse checks like these will be necessary going forward to ensure full compliance with the GeoJSON specification.
Closes #8433
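A minimal sketch of the kind of LinearRing check the GeoJSON spec calls for (at least four positions, with the first equal to the last); this illustrates the rule, not the actual ShapeBuilder code:

```java
import java.util.Arrays;
import java.util.List;

public class LinearRingCheck {
    static void validateLinearRing(List<double[]> coordinates) {
        if (coordinates.size() < 4) {
            throw new IllegalArgumentException(
                "invalid LinearRing: needs at least 4 positions, got " + coordinates.size());
        }
        double[] first = coordinates.get(0);
        double[] last = coordinates.get(coordinates.size() - 1);
        if (!Arrays.equals(first, last)) {
            throw new IllegalArgumentException(
                "invalid LinearRing: first and last positions must be identical");
        }
    }

    public static void main(String[] args) {
        // A valid closed ring around the unit square.
        validateLinearRing(Arrays.asList(
            new double[]{0, 0}, new double[]{1, 0},
            new double[]{1, 1}, new double[]{0, 1},
            new double[]{0, 0}));
        System.out.println("ring is valid");
    }
}
```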
Based on some test failures, this commit fixes two minor things:
* Bind only to so-called ephemeral ports to avoid trying to bind to ports on which Elasticsearch is already running (see the sketch below)
* Remove the @Network annotation, as it was used in the wrong scope
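One common way to get an ephemeral port in a test is sketched below; whether the commit uses exactly this mechanism is an assumption:

```java
import java.net.ServerSocket;

public class EphemeralPortSketch {
    public static void main(String[] args) throws Exception {
        // Binding to port 0 asks the OS for a free ephemeral port, so the
        // test can never collide with a port a running Elasticsearch holds.
        try (ServerSocket socket = new ServerSocket(0)) {
            System.out.println("bound to free port " + socket.getLocalPort());
        }
    }
}
```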
Prior to this change, all features (_alias, _mapping, _settings, _warmer) were retrieved regardless of which features were actually requested. This change fixes the request object to resolve this bug.
Updated the log4j link so it points to version 1.2 rather than log4j 2.0. Clarified which formats are supported, briefly explained what loggers and appenders are, and added a link to the log4j docs.
Closes #5305
Closes #8455
The compression bug fixed in #7210 can still strike us since we are
running BWC tests against these versions. This commit forcefully disables
compression if the compatibility version is < 1.3.2 to prevent debugging
already-known issues.
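A self-contained sketch of the version gate described above (the version comparison is simplified; not the actual code):

```java
public class CompressionGate {
    static boolean useCompression(int major, int minor, int patch, boolean requested) {
        boolean before132 = (major < 1)
                || (major == 1 && minor < 3)
                || (major == 1 && minor == 3 && patch < 2);
        // Versions before 1.3.2 carry the known compression bug (#7210),
        // so compression is disabled regardless of what was requested.
        return requested && !before132;
    }

    public static void main(String[] args) {
        System.out.println(useCompression(1, 3, 1, true));  // false: bug present
        System.out.println(useCompression(1, 3, 2, true));  // true: safe version
    }
}
```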
The query_string query has an option for analyzing wildcard/prefix terms (#787) with a best-effort approach.
This adds the `analyze_wildcard` option to simple_query_string as well.
The default is `false`, so the existing behavior of simple_query_string is unchanged.
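For illustration, a sketch of a request body using the new option; the field names come from the text above, and the surrounding Java exists only to keep the snippet self-contained:

```java
public class AnalyzeWildcardSketch {
    public static void main(String[] args) {
        // analyze_wildcard defaults to false, preserving old behavior.
        String queryBody =
            "{\n" +
            "  \"simple_query_string\": {\n" +
            "    \"query\": \"speed*\",\n" +
            "    \"analyze_wildcard\": true\n" +
            "  }\n" +
            "}";
        System.out.println(queryBody);
    }
}
```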
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if present). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at #4760.
This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed on every cluster state update.
Closes #8413
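A sketch of the refresh-mapping decision described above: the data node re-serializes the mapping it parsed and compares the bytes with what the master sent (types and names simplified for illustration):

```java
import java.util.Arrays;

public class RefreshMappingSketch {
    // Returns true when the master should be told to refresh its stored
    // bytes: the locally re-serialized mapping differs from what it sent,
    // e.g. because the mapping format changed in a compatible way.
    static boolean needsRefresh(byte[] bytesFromMaster, byte[] reserializedLocally) {
        return !Arrays.equals(bytesFromMaster, reserializedLocally);
    }
}
```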
This prevents too-difficult regular expressions from consuming
excessive RAM/CPU; the default max_determinized_states is 10,000 (same
as Lucene's), but query_string and the regexp query/filter can override it
per request.
This also upgrades to a new Lucene 5.0.0 snapshot.
Closes #8386
Closes #8357
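For illustration, a sketch of overriding the limit per request on a regexp query; the option name comes from the text above, while the exact JSON shape is an assumption:

```java
public class MaxStatesSketch {
    public static void main(String[] args) {
        // Raise the automaton size limit for one difficult pattern;
        // the default of 10,000 states matches Lucene's default.
        String regexpQuery =
            "{\n" +
            "  \"regexp\": {\n" +
            "    \"title\": {\n" +
            "      \"value\": \"a(b|c)*d\",\n" +
            "      \"max_determinized_states\": 20000\n" +
            "    }\n" +
            "  }\n" +
            "}";
        System.out.println(regexpQuery);
    }
}
```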