Date math index name resolution enables you to search a range of time-series indices, rather than searching all of your time-series indices and filtering the results or maintaining aliases. Limiting the number of indices that are searched reduces the load on the cluster and improves execution performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.
This change adds an `ExpressionResolver` implementation that is responsible for resolving date math expressions in index names. This resolver is evaluated before wildcard expressions are evaluated.
The supported format is `<static_name{date_math_expr{date_format|timezone_id}}>`; the date math expressions must be enclosed within angle brackets. For example, `<logstash-{now/d}>` resolves to the current day's `logstash-` index. The `date_format` is optional and defaults to `YYYY.MM.dd`. The `timezone_id` is also optional and defaults to `utc`.
The `{` character can be escaped by placing `\\` before it.
Closes #12059
When we rewrite to a MatchNoDocsQuery we were throwing out the boost, which
could lead to funky changes when the query against _all was in a
bool query.
Previously we would write the entire ByteBuffer to the stream to serialise the HDRHistogram even if it was not all needed. Now we only write the bytes that are actually written to in the ByteBuffer.
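A rough sketch of the idea, assuming the HdrHistogram encode API and a plain `DataOutputStream` (an illustration, not the actual serialisation code):

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import org.HdrHistogram.Histogram;

// serialise only the portion of the buffer the histogram actually wrote
static void writeHistogram(Histogram histogram, DataOutputStream out) throws IOException {
    ByteBuffer buffer = ByteBuffer.allocate(histogram.getNeededByteBufferCapacity());
    int written = histogram.encodeIntoByteBuffer(buffer); // bytes actually used
    out.writeInt(written);
    out.write(buffer.array(), 0, written); // not the entire backing array
}
```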
I suggest updating the BulkProcessor so that it prevents users from building a BulkProcessor with a null client. If the client is null, then the class should throw an exception with a message that describes the cause of the exception. Earlier today, when I passed a null client to the builder() method, I later received an exception and attempted to get the exception's message in my afterBulk() listener (e.g. e.getMessage()), but the message was null. It took me a while to pinpoint the cause of the problem.
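A minimal sketch of the suggested guard, assuming the `builder()` entry point takes the client and a listener (types as in the Java API):

```java
import java.util.Objects;

// fail fast with a descriptive message instead of surfacing a null
// message from afterBulk() much later
public static Builder builder(Client client, Listener listener) {
    Objects.requireNonNull(client, "The client passed to BulkProcessor.builder() must not be null");
    Objects.requireNonNull(listener, "The listener passed to BulkProcessor.builder() must not be null");
    return new Builder(client, listener);
}
```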
This can happen in two ways:
1. The _all field is disabled.
2. There are documents in the index, the _all field is enabled, but there are
no fields in any of the documents.
In both of these cases we now rewrite the query to a MatchNoDocsQuery which
should be safe because there isn't anything to match.
Closes #12439
When a node discovers shard content on disk which isn't used, we reach out to all other nodes that are supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies, *and* no cluster state change has happened in the meantime, do we go and delete the shard folder.
Currently, after removing a shard, the IndicesStore checks whether the indices service has no more active shards for this index, and if so, it tries to delete the entire index folder (unless on the master node, where we keep the index metadata around). This is wrong, as both the check and the protections in IndicesServices.deleteIndexStore only make sure that there isn't any shard *in use* from that index. However, it may be that we erroneously delete other unused shard copies on disk, without the proper safety guards described above.
Normally, this is not a problem as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., shard is red) there are race conditions which can cause all shard copies to be deleted.
Instead, we should base the decision to clean up an index folder on checking that the index directory is empty and contains no shards.
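A simplified sketch of such a check, assuming shard folders are the only subdirectories of an index folder:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// only clean up the index folder once no shard folders remain on disk,
// rather than relying on in-memory "no active shard" bookkeeping
static boolean canDeleteIndexDirectory(Path indexPath) throws IOException {
    try (DirectoryStream<Path> children = Files.newDirectoryStream(indexPath)) {
        for (Path child : children) {
            if (Files.isDirectory(child)) { // a leftover shard folder
                return false;
            }
        }
    }
    return true;
}
```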
This change creates a proper `distribution` module in which we have the packaging for
all four of our current packages:
* zip
* tar.gz
* rpm
* deb
Licenses have moved into the distribution project as well, as have the config/ and the bin/ directory
from the core/ project.
The RPM package is now built if rpmbuild exists.
The bats tests have been moved as well.
Also the zip distribution now executes the REST integration tests.
As Robert pointed out on #12465, it has the undesirable property of relying on
the operating system. So it would be better to use a simple rule such as
checking whether the file name starts with a dot.
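A minimal sketch of that rule:

```java
import java.nio.file.Path;

// hidden means "file name starts with a dot",
// independent of what the operating system reports
static boolean isHidden(Path file) {
    Path fileName = file.getFileName();
    return fileName != null && fileName.toString().startsWith(".");
}
```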
When an index is opened it will not be assigned to a node, but it will also no longer have the closed state.
Before, we only checked whether an index was either closed or assigned to the data node,
and therefore the change from closed to open was not written.
Some of the tests for meta data are redundant. Also, since they
somewhat test service disruptions (start master with empty
data folder) we might move them to DiscoveryWithServiceDisruptionsTests.
Also, this commit adds a test for
https://github.com/elastic/elasticsearch/issues/11665
If we cache the type filter and, e.g., set its boost (which is now settable on all queries), the boost will change for all subsequent queries. We should rather create a new query every time.
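A minimal sketch of the idea, using a plain Lucene `TermQuery` over the `_type` field (the method name is illustrative):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// hand out a fresh instance on every call, so a caller that mutates
// e.g. the boost cannot affect queries returned to other callers
static Query typeFilter(String type) {
    return new TermQuery(new Term("_type", type));
}
```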
Major changes:
* Changed MetaData to hold alias and index lookup information in a single TreeMap instead of two separate maps.
* Moved the building of the alias / index lookup to the metadata builder.
Settings are currently parsed by looping over the tokens until an END_OBJECT token is reached. However, this does not mean that the end of
the settings stream was reached. This can occur, for example, when parsing a YAML settings file with inconsistent indentation. Currently
in this case, some settings will be silently ignored. This commit forces a check that we have in fact reached the end of the settings
stream.
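A minimal sketch of the added guard, assuming the `XContentParser`/`ElasticsearchParseException` types from the surrounding codebase; once the top-level settings object has been consumed, no further tokens may remain:

```java
import java.io.IOException;

// after consuming the top-level object the stream must be exhausted;
// otherwise e.g. badly indented YAML would be silently ignored
static void validateEndOfStream(XContentParser parser) throws IOException {
    if (parser.nextToken() != null) {
        throw new ElasticsearchParseException(
                "malformed settings, expected end of stream but found additional content");
    }
}
```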
Closes #12382
HDRHistogram has been added as an option in the percentiles and percentile_ranks aggregations. It has one option, `number_significant_digits`, which controls the accuracy and memory size for the algorithm.
Closes #8324
The ThrottlingAllocationDecider is responsible for limiting the number of incoming/local recoveries on a node. It therefore shouldn't count shards marked as relocating, which represent the source of the recovery.
Closes #12409
If there are multiple levels of filtered leaf readers, the unwrap() method immediately returns the innermost leaf reader, thus skipping over any other filtered leaf reader that may be an instance of ElasticsearchLeafReader. By using the #getDelegate() method we can check each filter reader layer for being an instance of ElasticsearchLeafReader, so that we never skip over any wrapped filtered leaf reader and lose the shard id.
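A minimal sketch of the resulting lookup, walking the wrapper chain via Lucene's `FilterLeafReader#getDelegate()` (the helper name is illustrative):

```java
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.LeafReader;

// walk every wrapper layer instead of unwrap()-ing straight to the
// innermost reader, so an ElasticsearchLeafReader mid-chain is found
static ElasticsearchLeafReader getElasticsearchLeafReader(LeafReader reader) {
    while (reader != null) {
        if (reader instanceof ElasticsearchLeafReader) {
            return (ElasticsearchLeafReader) reader;
        }
        reader = reader instanceof FilterLeafReader
                ? ((FilterLeafReader) reader).getDelegate()
                : null;
    }
    return null;
}
```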
Also, a set of utility methods was introduced on ShardRouting to do various types of matching with other shard routings, giving control over what exactly should be matched (same shard id, same allocation id, all but version and shard info, etc.). This is useful here, but also prepares the ground for the change needed in #12387 (making ShardRouting.equals be strict and perform exact equality).
Closes #12397
When a replica is initializing from the primary and we find a better node that has a full sync id match, it is better to cancel the existing replica allocation and allocate it to the new node with the sync id match (eventually).
Today, when a user provides settings and specifies a classloader to be used, the classloader gets
dropped when we copy the settings to check for prompt entries. This change copies the classloader
when replacing the prompt placeholders and adds a test to ensure the InternalSettingsPreparer
always retains the classloader.
Closes #12340
JarHell has a low level check, but it's more of a best effort one,
only checking if X-Compile-Target-JDK is set in the manifest. This
is the case for all lucene- and elasticsearch- generated jars, but
lets just be explicit for plugins.
The TransportSingleCustomOperationAction `prefer_local` option has been removed as it isn't worth the effort.
The TransportSingleShardAction will execute the operation on the receiving node if the request doesn't provide a list of candidate shard routings to perform the operation on.
Change #12371 broke fielddata on the `_parent` field for indices created before
2.0. This pull request adds back caching of the `_parent` fielddata for indices
created before 2.0 and cleans some related stuff. For instance
DocumentTypeListener doesn't need to take care of removed mappings anymore since
mappings can't be removed in 2.0.
The rangeQuery() method in MappedFieldType and some overwriting subtypes takes
a nullable QueryParseContext argument which is unused and can be deleted.
This is also useful for the current query parsing refactoring, since
we want to avoid passing the context object around unnecessarily.
IndicesStoreIntegrationTests.indexCleanup() tests if the shard files on disk are actually removed
after relocation. In particular it tests the following:
Whenever a node deletes a shard because it was relocated somewhere else, it first
checks if enough other copies are started somewhere else. The node sends a
ShardActiveRequest to the nodes that should have a copy.
The nodes that receive this request check if the shard is in state STARTED in which case
they respond with true. If they have the shard in POST_RECOVERY they register a cluster state
observer that checks at each update if the shard has moved to STARTED and respond with true when
this happens.
To test that the cluster state observer mechanism actually works, the latter can be triggered by
blocking the cluster state processing when a recovery starts and only unblocking it shortly
after the node receives the ShardActiveRequest.
This is more reliable than using random cluster state processing delays
because the random delays make it hard to reason about different timeouts
that can be reached.
closes #11989
Closes #12367
Squashed commit of the following:
commit 9453c411798121aa5439c52e95301f60a022ba5f
Merge: 3511a9c 828d8c7
Author: Robert Muir <rmuir@apache.org>
Date: Wed Jul 22 08:22:41 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit 3511a9c616503c447de9f0df9b4e9db3e22abd58
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:50:15 2015 -0700
Remove duplicated constant
commit 4a9b5b4621b0ef2e74c1e017d9c8cf624dd27713
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:01:57 2015 -0700
Add check that plugin must specify at least site or jvm
commit 19aef2f0596153a549ef4b7f4483694de41e101b
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:52:58 2015 -0700
Change plugin "plugin" property to "classname"
commit 07ae396f30ed592b7499a086adca72d3f327fe4c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:36:05 2015 -0400
remove test with no methods
commit 550e73bf3d0f94562f4dde95239409dc5a24ce25
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:31:58 2015 -0400
fix loading to use classname
commit 04463aed12046da0da5cac2a24c3ace51a79f799
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:24:19 2015 -0400
rename to classname
commit 9f3afadd1caf89448c2eb913757036da48758b2d
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:18:46 2015 -0700
moved PluginInfo and refactored parsing from properties file
commit df63ccc1b8b7cc64d3e59d23f6c8e827825eba87
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:08:26 2015 -0400
fix test
commit c7febd844be358707823186a8c7a2d21e37540c9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:03:44 2015 -0400
remove test
commit 017b3410cf9d2b7fca1b8653e6f1ebe2f2519257
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:58:31 2015 -0400
fix test
commit c9922938df48041ad43bbb3ed6746f71bc846629
Merge: ad59af4 01ea89a
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:37:28 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit ad59af465e1f1ac58897e63e0c25fcce641148a7
Author: Areek Zillur <areek.zillur@elasticsearch.com>
Date: Tue Jul 21 19:30:26 2015 -0400
[TEST] Verify expected number of nodes in cluster before issuing shardStores request
commit f0f5a1e087255215b93656550fbc6bd89b8b3205
Author: Lee Hinman <lee@writequit.org>
Date: Tue Jul 21 11:27:28 2015 -0600
Ignore EngineClosedException during translog fsync
When performing an operation on a primary, the state is captured and the
operation is performed on the primary shard. The original request is
then modified to increment the version of the operation as preparation
for it to be sent to the replicas.
If the request first fails on the primary during the translog sync
(because the Engine is already closed due to shadow primaries closing
the engine on relocation), then the operation is retried on the new primary
after being modified for the replica shards. It will then fail due to the
version being incorrect (the document does not yet exist but the request
expects a version of "1").
Order of operations:
- Request is executed against primary
- Request is modified (version incremented) so it can be sent to replicas
- Engine's translog is fsync'd if necessary (failing, and throwing an exception)
- Modified request is retried against new primary
This change ignores the exception where the engine is already closed
when syncing the translog (similar to how we ignore exceptions when
refreshing the shard if the ?refresh=true flag is used).
commit 4ac68bb1658688550ced0c4f479dee6d8b617777
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 22:37:29 2015 +0200
Replica allocator unit tests
First batch of unit tests to verify the behavior of replica allocator
commit 94609fc5943c8d85adc751b553847ab4cebe58a3
Author: Jason Tedor <jason@tedor.me>
Date: Tue Jul 21 14:04:46 2015 -0400
Correctly list blobs in Azure storage to prevent snapshot corruption and do not unnecessarily duplicate Lucene segments in Azure Storage
This commit addresses an issue that was leading to snapshot corruption for snapshots stored as blobs in Azure Storage.
The underlying issue is that in cases when multiple snapshots of an index were taken and persisted into Azure Storage, snapshots subsequent
to the first would repeatedly overwrite the snapshot files. This issue rendered all snapshots except the final snapshot useless.
The root cause of this is due to String concatenation involving null. In particular, to list all of the blobs in a snapshot directory in
Azure the code would use the method listBlobsByPrefix where the prefix is null. In the listBlobsByPrefix method, the path keyPath + prefix
is constructed. However, per 5.1.11, 5.4 and 15.18.1 of the Java Language Specification, the reference null is first converted to the string
"null" before performing the concatenation. This leads to no blobs being returned and therefore the snapshot mechanism would operate as if
it were writing the first snapshot of the index. The fix is simply to check if prefix is null and handle the concatenation accordingly.
Upon fixing this issue so that subsequent snapshots would no longer overwrite earlier snapshots, it was discovered that the snapshot metadata
returned by the listBlobsByPrefix method was not sufficient for the snapshot layer to detect whether or not the Lucene segments had already
been copied to the Azure storage layer in an earlier snapshot. This led the snapshot layer to unnecessarily duplicate these Lucene segments
in Azure Storage.
The root cause of this is due to known behavior in the CloudBlobContainer.getBlockBlobReference method in the Azure API. Namely, this method
does not fetch blob attributes from Azure. As such, the lengths of all the blobs appeared to the snapshot layer to be of length zero and
therefore they would compare as not equal to any new blobs that the snapshot layer is going to persist. To remediate this, the method
CloudBlockBlob.downloadAttributes must be invoked. This will fetch the attributes from Azure Storage so that a proper comparison of the
blobs can be performed.
Closes elastic/elasticsearch-cloud-azure#51, closes elastic/elasticsearch-cloud-azure#99
commit cf1d481ce5dda0a45805e42f3b2e0e1e5d028b9e
Author: Lee Hinman <lee@writequit.org>
Date: Mon Jul 20 08:41:55 2015 -0600
Unit tests for `nodesAndVersions` on shared filesystems
With the `recover_on_any_node` setting, these unit tests check that the
correct node list and versions are returned.
commit 3c27cc32395c3624f7c794904d9ea4faf2eccbfb
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 14:15:59 2015 -0400
don't fail junit4 integration tests if there are no tests.
instead fail the failsafe plugin, which means the external cluster will still get shut down
commit 95d2756c5a8c21a157fa844273fc83dfa3c00aea
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 17:16:53 2015 +0200
Testing: Fix help displaying tests under windows
The help files are using a unix based file separator, whereas
the test relies on the help being based on the file system separator.
This commit fixes the test to remove all `\r` characters before
comparing strings.
The test has also been moved into its own CliToolTestCase, as it does
not need to be an integration test.
commit 944f06ea36bd836f007f8eaade8f571d6140aad9
Author: Clinton Gormley <clint@traveljury.com>
Date: Tue Jul 21 18:04:52 2015 +0200
Refactored check_license_and_sha.pl to accept a license dir and package path
In preparation for the move to building the core zip, tar.gz, rpm, and deb as separate modules, refactored check_license_and_sha.pl to:
* accept a license dir and path to the package to check on the command line
* to be able to extract zip, tar.gz, deb, and rpm
* all packages except rpm will work on Windows
commit 2585431e8dfa5c82a2cc5b304cd03eee9bed7a4c
Author: Chris Earle <pickypg@users.noreply.github.com>
Date: Tue Jul 21 08:35:28 2015 -0700
Updating breaking changes
- field names cannot be mapped with `.` in them
- fixed asciidoc issue where the list was not recognized as a list
commit de299b9d3f4615b12e2226a1e2eff5a38ecaf15f
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 13:27:52 2015 +0200
Replace primaryPostAllocated flag and use UnassignedInfo
There is no need to maintain additional state as to if a primary was allocated post api creation on the index routing table, we hold all this information already in the UnassignedInfo class.
closes #12374
commit 43080bff40f60bedce5bdbc92df302f73aeb9cae
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:45:05 2015 +0200
PluginManager: Fix bin/plugin calls in scripts/bats test
The release and smoke test python scripts used to install
plugins in the old fashion.
Also the BATS testing suite installed/removed plugins in that
way. Here the marvel tests have been removed, as marvel currently
does not work with the master branch.
In addition documentation has been updated as well, where it was
still missing.
commit b81ccba48993bc13c7678e6d979fd96998499233
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 11:37:50 2015 +0200
Discovery: make sure NodeJoinController.ElectionCallback is always called from the update cluster state thread
This is important for correct handling of the joining thread. This causes assertions to trip in our test runs. See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11653/ as an example
Closes #12372
commit 331853790bf29e34fb248ebc4c1ba585b44f5cab
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 15:54:36 2015 +0200
Remove left over no commit from TransportReplicationAction
It asks to double check thread pool rejection. I did and don't see problems with it.
commit e5724931bbc1603e37faa977af4235507f4811f5
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:31:57 2015 +0200
CliTool: Various PluginManager fixes
The new plugin manager parser was not called correctly in the scripts.
In addition the plugin manager now creates a plugins/ directory in case
it does not exist.
Also the integration tests called the plugin manager in the deprecated way.
commit 7a815a370f83ff12ffb12717ac2fe62571311279
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 13:54:18 2015 +0200
CLITool: Port PluginManager to use CLITool
In order to unify the handling and reuse the CLITool infrastructure
the plugin manager should make use of this as well.
This obsoletes the -i and --install options but requires the user
to use `install` as the first argument of the CLI.
This is basically just a port of the existing functionality, which
is also the reason why this is not a refactoring of the plugin manager,
which will come in a separate commit.
commit 7f171eba7b71ac5682a355684b6da703ffbfccc7
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Tue Jul 21 10:44:21 2015 +0200
Remove custom execute local logic in TransportSingleShardAction and TransportInstanceSingleOperationAction and rely on transport service to execute locally. (forking thread etc.)
Change TransportInstanceSingleOperationAction to have a shardActionHandler, so that we can execute locally without endless spinning.
commit 0f38e3eca6b570f74b552e70b4673f47934442e1
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 17:36:12 2015 -0700
More readMetadata tests and pickiness
commit 880b47281bd69bd37807e8252934321b089c9f8e
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 14:42:09 2015 -0700
Started unit tests for plugin service
commit cd7c8ddd7b8c4f3457824b493bffb19c156c7899
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 07:21:07 2015 -0400
fix tests
commit 673454f0b14f072f66ed70e32110fae4f7aad642
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 06:58:25 2015 -0400
refactor pluginservice
This dependency was used in order for mapping updates that change the fielddata
format to take effect immediately. And the way it worked was by clearing the
cache of fielddata instances that were already loaded. However, we do not need
to cache the already loaded (logical) fielddata instances, they are cheap to
regenerate. Note that the fielddata _caches_ are still kept around so that we
don't keep on rebuilding costly (physical) fielddata values.
The `_index` field is now a completely virtual field thanks
to #12027. It is no longer necessary to index the actual value
of the index name.
closes #12329
The index name was passed along through many levels of mapping parsing,
just so that it could be used for _index. However, the index name
is really metadata that should exist alongside things like type and
id in SourceToParse.
This change moves index name to SourceToParse, and eliminates it from the
DocumentMapperParser.
Today we grant read+write+delete access to any files underneath the home.
But we have to remove this, if we want to have improved security of files
underneath elasticsearch.
Fold ignored unassigned shards into UnassignedShards and have simpler handling of them. Also remove the trappy way of adding ignored unassigned shards directly to the list today, and have dedicated methods for it.
This change also removes the useless moving of unassigned shards to the end, since anyhow we first sort those unassigned shards, and second, we now have persistent "store exceptions" that should not cause "dead letter" shard allocation.
Break it into more manageable code by separating allocating primaries and allocating replicas. Start adding basic unit tests for the primary shard allocator.
Our thread pools have support for a timeout on a task. To support this, a special background task is scheduled to run at the timeout. That background task fires, checks if the main task is still in the executor queue and then cancels it if needed. Currently we schedule this background task before adding the main task to the queue. If the timeout is very small (in tests we often use numbers like 2 ms) the background task can fire before the main one is added to the queue, causing the timeout to be missed.
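A minimal sketch of the corrected ordering with plain JDK executors (names and types are illustrative, not the actual thread pool code):

```java
import java.util.concurrent.FutureTask;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// only arm the timeout watcher AFTER the main task is in the queue,
// so a tiny timeout cannot fire before the task is findable
static void executeWithTimeout(ThreadPoolExecutor executor, ScheduledExecutorService scheduler,
                               Runnable task, long timeoutMillis) {
    FutureTask<Void> main = new FutureTask<>(task, null);
    executor.execute(main);              // 1) enqueue the main task first
    scheduler.schedule(() -> {           // 2) then schedule the timeout check
        if (executor.remove(main)) {     // still queued -> it timed out
            main.cancel(false);
        }
    }, timeoutMillis, TimeUnit.MILLISECONDS);
}
```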
See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11780/testReport/junit/org.elasticsearch.cluster/ClusterServiceTests/testTimeoutUpdateTask/
Closes #12319
On top of that:
1) A relocation target shard's allocation id is changed to include the allocation id of the source shard under relocatingId (similar to shard routing semantics)
2) The logic around state change for finalizing shard relocation is simplified - we now simply start the target shard (we previously had unused logic around the relocating state)
Closes #12299
While the GeoJSON spec does say a polygon is represented as an array of LinearRings (where a LinearRing is defined as a 'closed' array of points), the coerce parameter provides users with flexibility to have ES automatically close polygons. This addresses situations like those integrated with twitter (where GeoJSON polygons are not closed) such that our users do not have to write extra code to close the polygon. This code change adds the optional coerce parameter to the GeoShapeFieldMapper.
closes #11131
Currently this target is "yet another way" to run elasticsearch,
which we can't maintain. It also has the problem that it doesn't
ensure it's running on the latest source code, doesn't configure
any scratch space properly, won't work with securitymanager, the list
goes on.
Even if we made it work, it would break every day, since it's untested.
Instead, `mvn package -Drun -DskipTests` will run packaging, and then
start up bin/elasticsearch (like integration tests, but in the foreground).
It also enables a debugger socket on port 8000, for people that like
IDE debuggers and not System.out.println.
It's a little slower to get started because of all the shading/RPM/DEB
building going on in `package` but that is just what it is right now
until that stuff is moved out.
failsafe uses surefire, which sucks. It also means integ tests act alien right now.
I would rather have the consistency, e.g. things formatted the same way, running integ tests under security manager, etc.
Store information reports on which nodes shard copies exist, the shard
copy version, indicating how recent they are, and any exceptions
encountered while opening the shard index or from earlier engine failure.
closes #10952
When adding a script to the Groovy classloader, the script name is used
as the class identifier in the classloader. This means that in order not
to break JVM Classloader convention, that script must always be
available by that name. As a result, modifying a script with the same
content over and over causes it to be loaded with a different name (due
to the incrementing integer).
This is particularly bad when something like chef or puppet replaces the
on-disk script file with the same content over and over every time a
machine is converged.
This change makes the script name the SHA1 hash of the script itself,
meaning that replacing a script with the same text will use the same
script name.
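A minimal sketch of deriving such a name with plain JDK classes (illustrative, not the actual implementation):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// the same script body always hashes to the same name, so re-writing
// an identical file no longer loads a new class into the classloader
static String scriptName(String scriptSource) throws NoSuchAlgorithmException {
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    byte[] hash = sha1.digest(scriptSource.getBytes(StandardCharsets.UTF_8));
    StringBuilder name = new StringBuilder(2 * hash.length);
    for (byte b : hash) {
        name.append(String.format("%02x", b));
    }
    return name.toString();
}
```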
Resolves #12212
Just like specifying `?preference=_primary`, this adds the ability to
specify `?preference=_replica` or `?preference=_replica_first` on
requests that support it.
Resolves #12222
This action is a liveness test added in #8763. It should be excluded, just like the fault detection logic, or things become overly chatty.
Closes #12291
Most of our tests call assertSearchResponse which checks whether all shards
were successful. However this is usually not necessary given that all shards
that received documents should be available given the way our indexing works.
The only shards that might not be available are those that did not index
any documents. So removing this assertion would allow us to remove most
ensureGreen/ensureYellow calls while still being able to assert on the content
of the search response since all data have been taken into account.
For now I only removed the ensureYellow/Green calls from SearchQueryTests in
order to not destabilize the build, but eventually we should remove most of
them.
Today, the unicast zen test configuration will try to find an open port starting at the internal test cluster's
base port and continuing for 1000 ports. The internal test cluster class assigns a port range of 100 ports
to each JVM. This means that the unicast zen test configuration will try ports in the range for another JVM
and can lead to port conflicts. This change uses the same value for both so that the unicast configuration
does not go into another JVM's port range.
Add a unique allocation id for a shard, helping to uniquely identify a specific allocation taking place to a node.
A special case is relocation, where a transient relocationId is kept around to make sure the target initializing shard (when using RoutingNodes) is using it for its id, and when relocation is done, the transient relocationId becomes the actual id of it.
closes #12242
If an index name is reused but a leftover shard still exists on any node
we fail repeatedly to allocate the shard since we now check the index UUID
before reusing data. This commit allows to recover even if there is such a
leftover shard by deleting the leftover shard.
Closes #10677
During master election each node pings in order to discover other nodes and validate the liveness of existing nodes. Based on this information the node either discovers an existing master or, if enough nodes are found (based on `discovery.zen.minimum_master_nodes`), a new master will be elected.
Currently, the node that is elected as master will update the cluster state to indicate the result of the election. Other nodes will submit a join request to the newly elected master node. Instead of immediately processing the election result, the elected master
node should wait for the incoming joins from other nodes, thus validating that the election result is properly applied. As soon as enough nodes have sent their join requests (based on the `minimum_master_nodes` settings) the cluster state is modified.
Note that if `minimum_master_nodes` is not set, this change has no effect.
Closes #12161
Require urls for the URL repository to be listed in the repositories.url.allowed_urls setting. This change ensures that only authorized URLs can be accessed by elasticsearch.
The previous strategy (target/xxx + .m2/repository) is obviously broken for multi-module
builds.
Includes hack for crazy jython, which "finds its own jar" then looks for a Lib/ beside it
Lucene deprecated this in 4.0 and we only try best effort to support it.
Folks should only use edit distance rather than some length based
similarity. Yet the formula is simple enough such that users can
still do it in the client if they really need to.
Closes #10638
The old script syntax has been removed from the Java API but the metrics aggregations were missed. This change removes the old script API from the ValuesSourceMetricsAggregationBuilder and removes the relevant test methods for the metrics aggregations.
Today we have an intermediate hierarchy for shard and index exceptions
which makes it hard to introduce generic exceptions like the ResourceNotFoundException
introduced in this commit. This commit breaks up the hierarchy by adding index and shard
as a special internal header that gets rendered for every exception that fills that header.
This commit removes dedicated exceptions like `IndexMissingException` or
`IndexShardMissingException` in favour of `ResourceNotFoundException`
Change the default delayed allocation timeout from 0 (no delayed allocation) to 1m. The value came from a test of having a node with 50 shards being indexed into (so beefy translog requiring flush on shutdown), then shutting it down and starting it back up and waiting for it to join the cluster. This took, on a slow machine, about 30s.
The value is conservatively low and does not try to address a virtual machine / OS restart for now, in order to not have the effect of a node going away and users being concerned that shards are not being allocated to the rest of the cluster as a result of that. The setting can always be changed in order to increase the delayed allocation if needed.
closes #12166
The MetaData.clusterUUID is guaranteed to be unique across clusters (which may or may not have the same human readable cluster name) and is handy.
Closes #11832
As explained in #11831, we currently have uuid fields on the cluster state, meta data and index metadata. The latter two are persistent across changes and are effectively used as a persistent uuid for this cluster and a persistent uuid for an index. The first (ClusterState.uuid) is ephemeral and changes with every change to the cluster state. This is confusing.
We settled on having the following new names:
* ClusterState.uuid -> stateUUID (transient)
* MetaData.uuid -> clusterUUID (persistent)
* IndexMetaData.uuid -> indexUUID (persistent)
Closes #11914, closes #11831
Today we throw ElasticsearchException if we can't lock the index. This can cause
problems since some places where we have logic to deal with IOException on shard
deletion won't schedule a retry if we can't lock the index dir for removal. This
is the case on shadow replicas for instance if a shared FS is used. The result
of this is that the delete of an index is never acked.
A change in #12116 introduces closing / cleaning of search ctx even if
the index service was closed due to a relocation of its last shard. This
is not desired since in that case it's fine to serve the pending requests from
the relocated shard. This commit adds an extra check to ensure that the index is
either removed (delete) or closed via API.
The term query parser was too lenient during parsing and allowed specifying
more than one field, even though it is expected to filter on a single field only.
This commit throws an exception if more than one field has been specified.
Closes #12184
Modified ScriptEngineService to pass in a CompiledScript object
with newly added name and type member variables.
This can in turn be used to give better scripting error messages
with the type of script used and the name of the script.
Required slight modifications to the caching mechanism.
Note that this does not enforce good behavior in that plugins will
have to write exceptions that also output the name of the script
in order to be effective. There was no way to wrap the script
methods in a try/catch block properly further up the chain because
many have script-like objects passed back that can be run at a
later time.
closes #6653, closes #11449
Changes in a nutshell:
* All expression logic is now encapsulated by ExpressionResolver interface.
* MetaData#convertFromWildcards() gets replaced by WildcardExpressionResolver.
* All of the indices expansion methods are being moved from MetaData class to the new IndexNameExpressionResolver class.
* All single index expansion optimisations are removed.
The logic for resolving a concrete index name from an expression has been moved from MetaData to IndexNameExpressionResolver. The logic has been cleaned up and simplified where possible without breaking bwc.
Also the notion of aliasOrIndex has been changed to index expression.
The IndexNameExpressionResolver translates index name expressions into concrete indices. The list of index name expressions is first delegated to the known ExpressionResolver implementations. An ExpressionResolver is responsible for translating, if possible, an expression into another expression (possibly, but not necessarily, concrete indices or aliases); otherwise the expressions are left untouched. Concretely this means converting wildcard expressions into concrete indices or aliases, but in the future other implementations could convert expressions based on different rules.
To prevent overloading many methods, DocumentRequest now extends IndicesRequest. All implementations of DocumentRequest already implemented IndicesRequest indirectly.
This allows the creation of the RPM artifact as part of the
maven package phase. The result of this is that we get checksum and
name correction for free as it's all built and installed into the m2
repository. This also publishes the RPM together with the .deb to the mvn
mirror.
Note: this will only build the RPM as part of the package phase if
`-Dpackage.rpm=true` is specified, since the binaries to build the RPM are not
available on all platforms.
Today we only clear search contexts for deleted indices. Yet, we should
do the same for closed indices to ensure they can be reopened quickly.
Closes #12116
A method for the new Script API was missing in the ValuesSourceMetricsAggregationBuilder. This change adds the missing method and deprecates the old Script API methods.
When a node sends a shard started message to the master, the master goes through the routing table looking for the shard to start. At the moment we validate the indexUUID, the node the shard is assigned to and the fact that the shard is initializing. This check goes wrong if a relocating replica shard finishes recovery just at the moment the source node leaves the cluster. In this case the master will cancel the recovery and will likely assign a new initializing replica to the same target node. In this case the message from the relocation recovery can activate the new replica wrongfully.
Also, the logic for deciding whether an incoming shard started message will be applied was split between ShardStateAction and the AllocationService.
This commit does the following:
1) Let ShardStateAction only filter basic stuff like index existence and indexUUID.
2) Move the trickier shard started matching logic to the AllocationService and make it stricter
3) Unify ShardStateAction filtering logic for both shard started and shard failed.
4) Add unit tests for all of the above.
For an example test failure see: http://build-us-00.elastic.co/job/es_core_16_centos/388/
Closes #11999
Today everything is tied to having the next version as the latest.
In order to work towards 2.0.0.beta1 we need to fix all the usage of
2.0.0-SNAPSHOT to reflect the version we will release soon.
Usually we do this on the release branch, but to simplify things I want to
keep this on master for now and move to 2.1.0-SNAPSHOT on master once
we have created a 2.0 branch.
Closes #12148
This information was stored with the snapshot but wasn't available on the interface. Knowing the version of elasticsearch that created the snapshot can be useful to determine the minimal version of the cluster that is required in order to restore this snapshot.
Closes #11980
This change fixes the plugin manager to trim `elasticsearch-` and `es-` prefixes from plugin names
for our official plugins. This restores the old behavior prior to #11805.
Closes #12143
Today it will remove all permissions and only set execute bit:
---x--x--x
Instead we should preserve existing permissions, and just add
read and execute to whatever is there.
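A minimal sketch of the intended behavior, using plain `java.nio.file` (illustrative, not the actual packaging code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.HashSet;
import java.util.Set;

// keep whatever permissions the file already has and add the
// read + execute bits, instead of replacing everything with --x--x--x
static void addReadAndExecute(Path file) throws IOException {
    Set<PosixFilePermission> perms = new HashSet<>(Files.getPosixFilePermissions(file));
    perms.add(PosixFilePermission.OWNER_READ);
    perms.add(PosixFilePermission.OWNER_EXECUTE);
    perms.add(PosixFilePermission.GROUP_READ);
    perms.add(PosixFilePermission.GROUP_EXECUTE);
    perms.add(PosixFilePermission.OTHERS_READ);
    perms.add(PosixFilePermission.OTHERS_EXECUTE);
    Files.setPosixFilePermissions(file, perms);
}
```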
Closes #12142
This change allows custom settings to be passed to the client for the external test cluster,
which is necessary when additional settings need to be passed to the client in order to
properly communicate with the external test cluster.
Before inner_hits existed, named queries had support to also verify whether inner queries of a nested query matched with returned documents. This logic was broken and became obsolete from the moment inner hits got released. #10694 fixed named queries for nested docs in the top_hits agg, but it didn't fix the named query support for nested inner hits. This commit fixes that and, on top of this, also adds support for parent/child inner hits.
We allow setting the node's name a few different ways: the `name` system
property, the setting `name`, and the setting `node.name`. There is an order
of preference to these settings that gets applied, which can copy values from the
system property or `node.name` setting to the `name` setting. When setting
only `node.name` to one of the prompt placeholders, the user would be
prompted twice as the value of `node.name` is copied to `name` prior to
prompting for input. Additionally, the value entered by the user for `node.name`
would not be used and only the value entered for `name` would be used.
This fix changes the behavior to only prompt once when `node.name` is set and
`name` is not set. This is accomplished by waiting until all values have been
prompted and replaced, then the logic for determining the node's name is
executed.
Closes #11564
This commit adds support to retrieve fields when using the bulk update API. This functionality was previously available for the update API
but not for the bulk update API.
Closes #11527
Fixed documentation since the default rewrite method for fuzzy queries is to
select top terms, fixed usage of the fuzzy rewrite method, and removed unused
`rewrite` parameter.
Close #6932
This rewrite method is interesting because it computes scores as if all terms
had the same frequencies, which avoids disappointments with ranking when a fuzzy
query ranks typos first given that they are less frequent than the correct term.
Today shards are responsible for producing one sort value per document, which
is later used on the coordinating node to resolve the global top documents.
However, this is problematic on string fields with
`missing: _first, order: desc` or `missing: _last, order: asc` given that there
is no such thing as a string that compares greater than any other string. Today
we use a string containing a single code point which is the maximum allowed code
point but this is a hack: instead we should inform the coordinating node that
the document had no value and let it figure out how it should be sorted
depending on whether missing values should be sorted first or last.
Close #9155
Simplify and consolidate ShardRouting construction. Make sure that there is really only one place it gets created, when a shard is first created in unassigned state; from there on, it is either copy constructed or built internally as a target for relocation.
This change helps make sure that within our codebase the data carried by a ShardRouting is not lost as the shard goes through transitions, and can help simplify the addition of more data on it (like uuid).
For testing, a centralized TestShardRouting allows creating testable versions of ShardRouting that need not be as strict as the non-test codebase. This can be cleaned up more later on, but it is a good start.
closes #12125
This makes FuzzyQueryBuilder and Parser take an Object as a value using the
same logic as termQuery, so that numbers, dates or Strings would be properly
handled.
Relates #11865, closes #12020
This converts the tracking of jars and classes in JarHell to use
Path objects, instead of URL. This makes for nicer printing
of the underlying path when an error does occur.
This commit defaults fuzzy_transpositions on fuzzy queries to true. This means that by default, transpositions will now count as a single
edit.
Closes #9278
Previously, failing an engine marked the shard as corrupted regardless of
the failure type. This change marks the engine as corrupted only when the failure
is caused by an actual index corruption. When an engine is failed for other
reasons, the engine is only closed without removing the shard state.
closes #11788
Plugin Manager can now use another simplified form when a user wants to install an official plugin hosted at the elasticsearch download service.
The form we use is:
```sh
bin/plugin install pluginname
```
As plugins now share the same version as elasticsearch, we can automatically guess the exact current version for the plugin manager script.
Also, download service will now use `/org.elasticsearch.plugins/pluginName/pluginName-version.zip` URL path to download a plugin.
If the older form is provided (`user/plugin/version` or `user/plugin`), we will still use:
* elasticsearch download service at `/user/plugin/plugin-version.zip`
* maven central with groupId=user, artifactId=plugin and version=version
* github with user=user, repoName=plugin and tag=version
* github with user=user, repoName=plugin and branch=master if no version is set
Note that community plugin providers can use other download services by using `--url` option.
If you try to use the new form with a non-core elasticsearch plugin, the plugin manager will reject
it and will give you all known core plugins.
```
Usage:
-u, --url [plugin location] : Set exact URL to download the plugin from
-i, --install [plugin name] : Downloads and installs listed plugins [*]
-t, --timeout [duration] : Timeout setting: 30s, 1m, 1h... (infinite by default)
-r, --remove [plugin name] : Removes listed plugins
-l, --list : List installed plugins
-v, --verbose : Prints verbose messages
-s, --silent : Run in silent mode
-h, --help : Prints this help message
[*] Plugin name could be:
elasticsearch-plugin-name for Elasticsearch 2.0 Core plugin (download from download.elastic.co)
elasticsearch/plugin/version for elasticsearch commercial plugins (download from download.elastic.co)
groupId/artifactId/version for community plugins (download from maven central or oss sonatype)
username/repository for site plugins (download from github master)
Elasticsearch Core plugins:
- elasticsearch-analysis-icu
- elasticsearch-analysis-kuromoji
- elasticsearch-analysis-phonetic
- elasticsearch-analysis-smartcn
- elasticsearch-analysis-stempel
- elasticsearch-cloud-aws
- elasticsearch-cloud-azure
- elasticsearch-cloud-gce
- elasticsearch-delete-by-query
- elasticsearch-lang-javascript
- elasticsearch-lang-python
```
AbstractFieldMapper is the only direct base class of FieldMapper.
This change moves all AbstractFieldMapper functionality into
FieldMapper, since there is no need for 2 levels of abstraction.
Each scroll on a scan causes a query to be executed. This commit adds support for these indirect queries to count against the search stats.
Additionally, this commit adds three new search stats: scroll_count, scroll_time_in_millis, and scroll_current. scroll_count tracks the
number of completed scrolls. scroll_time_in_millis tracks the total time that scrolls were held open. scroll_current tracks the number of
scrolls currently open.
Closes #9109
In the testing infra, one can simulate node GCs, network issues and other problems by adding a disruption to the test cluster. Those disruptions are automatically removed after the test is done. At the moment each disruption indicates how long it will take the cluster to heal once the disruption is removed, and the test cluster waits for this amount of time. However, more often than not this is an upper bound, causing a much longer wait than needed. Instead we should push the responsibility of healing to the disruption itself, where we can be smarter about what we wait for.
Closes #12071
When using `awaitBusy`, sometimes you might not want to keep doubling the time between two runs indefinitely.
For example, let's say it will probably take 30 seconds to run a test.
When always doubling, you will most likely wait longer than needed:
|iteration|sleep (ms)|sleep (s)|total duration (ms)|total duration (s)|
|---------|----------|---------|-------------------|------------------|
|1|1|0.001|1|0.001|
|2|2|0.002|3|0.003|
|3|4|0.004|7|0.007|
|4|8|0.008|15|0.015|
|5|16|0.016|31|0.031|
|6|32|0.032|63|0.063|
|7|64|0.064|127|0.127|
|8|128|0.128|255|0.255|
|9|256|0.256|511|0.511|
|10|512|0.512|1023|1.023|
|11|1024|1.024|2047|2.047|
|12|2048|2.048|4095|4.095|
|13|4096|4.096|8191|8.191|
|14|8192|8.192|16383|16.383|
|15|16384|16.384|32767|32.767|
|16|32768|32.768|65535|65.535|
|17|65536|65.536|131071|131.071|
|18|131072|131.072|262143|262.143|
|19|262144|262.144|524287|524.287|
|20|524288|524.288|1048575|1048.575|
|21|1048576|1048.576|2097151|2097.151|
For example here, if the task is successful after 35 seconds, we will most likely have to wait for 32s more before the Predicate is run again.
With this patch, the maximum sleep time is now set to 1 second.
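A minimal sketch of the capped backoff, using a `BooleanSupplier` in place of the test infra's Predicate (illustrative; the real `awaitBusy` also handles the predicate/timeout plumbing):

```java
import java.util.function.BooleanSupplier;

// exponential backoff as before, but the sleep between two checks
// is capped at one second instead of doubling forever
static boolean awaitBusy(BooleanSupplier condition, long maxWaitMillis) throws InterruptedException {
    long sleepMillis = 1;
    long waitedMillis = 0;
    while (waitedMillis < maxWaitMillis) {
        if (condition.getAsBoolean()) {
            return true;
        }
        Thread.sleep(sleepMillis);
        waitedMillis += sleepMillis;
        sleepMillis = Math.min(sleepMillis * 2, 1000L); // cap at 1s
    }
    return condition.getAsBoolean(); // one last check after the deadline
}
```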
This pipeline aggregation runs a script on each bucket in the parent aggregation to determine whether the bucket is kept in the final aggregation tree. If the script returns true the bucket is retained; if it returns false the bucket is dropped.
If you are using the default date format or the named identifiers of dates,
the current implementation allowed reading a year with only one
digit. In order to make this more strict, this fixes the year to be at
least 4 digits. The same applies for month, day, hour, minute, and seconds.
Also the new default is `strictDateOptionalTime` for indices created
with Elasticsearch 2.0 or newer.
In addition, a couple of previously unexposed date formats have been exposed, as they
have been mentioned in the documentation.
Closes #6158
Field names containing dots can cause problems. For example, @jpountz
made this recreation which causes no error, but can result in a
serialization exception if the type already exists:
https://gist.github.com/jpountz/8c66817e00a322b81f85
But this is not just a potential conflict. It also has larger problems,
since only the leaf mapper is created. The intermediate "foo" object
field would not exist if only "foo.bar" was in the mappings.
This change forbids the use of dots in field names. It also
fixes an issue with passing through the update_all_types setting,
which was always set to true whenever a type already existed (!).
I do not think we should worry about backwards compatibility here. This
should be a hard break (and added to the migration plugin).
This commit adds logic to prefer shards with higher priority
or from newer indices to be allocated first if they are unallocated post API.
This commit allows users to set `index.priority` to a non-negative integer to
prioritize index recovery for certain indices. This setting is dynamically updateable
and defaults to `0`. If two indices have the same priority this change takes the creation
date into account to prioritize shards from newer indices, which is important in the time-based
indices use case.
Closes #11787
When a bulk request fails on a Delete or Update request, the BulkItemResponse
reports an incorrect "index" operation in the response. This PR fixes this
for the case of closed indices as reported in #9821, but also for
other failures, and adds tests for the two cases covered.
Closes #9821
The specialization can cause stack overflows if an exception is an
ElasticsearchWrapperException as well as an ElasticsearchException.
This commit just relies on the unwrap logic now to find the cause, and only
renders the exception if it is the cause; otherwise it forwards
to the generic exception rendering.
Closes #11994
While MappedFieldType contains settings for doc values and fielddata,
AbstractFieldMapper had special logic in its constructor that
required setting these on the field type from there. This change
removes those settings from the AbstractFieldMapper constructor.
As a result, defaultDocValues(), defaultFieldType() and
defaultFieldDataType() are no longer needed.
This commit merges the pre-existing special exception that
allowed to associate headers with exceptions and the elasticsearch
base class `ElasticsearchException`. This allows for more generic use
of exceptions where plugins can associate meta-data with any elasticsearch
base exception to control behavior etc.
This also adds a generic SecurityException to allow plugins to pass on
information based on the RestStatus.
If the version of a node is lower than the minimum supported version or higher than the maximum supported version, the node shouldn't be allowed to join, and nodes shouldn't join such an elected master node.
Closes #11924
Removed the ParseField#match variant that accepts the field name only, without parse flags. Such a method is harmful, as it defaults to empty parse flags, meaning that no deprecation exceptions will be thrown in strict mode, which defeats the purpose of using ParseField. Unfortunately such a method was used in a lot of places where the parse flags weren't easily accessible (outside of query parsing), and in a lot of other places just by mistake.
Parse flags have been introduced now as part of SearchContext and mappers where needed. There are a few places (e.g. java api requests) where it is not possible to retrieve them as they depend on the index settings; in that case we explicitly pass in EMPTY_FLAGS for now, but this has to be seen as an exception.
Closes #11859
This commit changes MessageChannelHandler to not skip the underlying
ChannelBuffer while a StreamInput is open on top of it. In case eg. compression
is enabled, this prevents failures due to the fact that the decompressed
stream input expects a certain structure that it can't verify if the position
of the underlying buffer is changed.
Added dynamic arguments to `ElasticsearchException`, `ElasticsearchParseException` and `ElasticsearchTimeoutException`.
This helps keep the exception messages clean and readable and promotes consistency around wrapping dynamic args with `[` and `]`.
This is just the start; we need to propagate this to all exceptions deriving from `ElasticsearchException`. Also, work started on standardizing on lower case logging & exception messages. We need to be consistent here...
- Uses the same `LoggerMessageFormat` as used by our logging infrastructure.
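A sketch of what call sites look like with dynamic args (variable names are illustrative):

```java
// placeholders are filled from the varargs and rendered wrapped in [ and ],
// keeping messages consistent without manual string concatenation
static void failParsing(String fieldName, String expected, String actual) {
    throw new ElasticsearchParseException("failed to parse [{}], expected [{}] but found [{}]",
            fieldName, expected, actual);
}
```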
The work around for resolving `now` doesn't need to be used for aliases, because alias filters are parsed at search time. However it can't be removed, because the percolator relies on it.
Parent/child can be specified again in alias filters, this now works again because alias filters are parsed at search time. Parent/child will also use the late query parse work around, to make sure to do the final preparations when the search context is around. This allows the aliases api to validate the parent/child queries without failing because there is no search context.
Closes #10485
Eventually, the field type should not need any names, because there
will be only one name which leads to finding it (the full name, which is
also the index name). However, the short or "simple" name (using java
terminology for class names) is needed just in a couple places, for
serialization.
This change moves the simple name out of MappedFieldType.Names, into
Mapper, and makes Mapper and FieldMapper abstract classes.
Today we lose the RestStatus code for non-serializable exceptions.
This can be tricky if they are supposed to signal certain situations
like authentication errors etc. This commit adds support for carrying over
the RestStatus code in the NotSerializableExceptionWrapper.
The filters aggregation now has an option to add an 'other' bucket which will, when turned on, contain all documents which do not match any of the defined filters. There is also an option to change the name of the 'other' bucket from the default of '_other_'.
Closes #11289
This change means that when the skip gap policy is used, the bucket script aggregation will skip executing the script on a bucket if any of the required bucket_paths are missing for the bucket. No aggregation will be added to the bucket, and the aggregation will move to the next bucket.
This commit changes the postrm script so that it prints error messages instead of failing & exiting when the deletion of a directory fails while removing an RPM/DEB package.
Closes #11373
This allows a lot of null checks to be removed where we were always falling back to the ValueFormat.RAW anyway. Now the format is set to ValueFormat.RAW when no alternative is suitable.
Closes #10594
Field stats index constraints allow omitting all field stats for indices that don't match the constraint. An index
constraint can exclude an index's field stats based on the `min_value` and `max_value` statistics. This option is only
useful if the `level` option is set to `indices`.
For example index constraints can be useful to find out the min and max value of a particular property of your data in
a time based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
   "fields" : ["answer_count"], <1>
   "index_constraints" : { <2>
      "creation_date" : { <3>
         "min_value" : { <4>
            "gte" : "2014-01-01T00:00:00.000Z"
         },
         "max_value" : {
            "lt" : "2015-01-01T00:00:00.000Z"
         }
      }
   }
}'
Closes #11187
"Root" is a very confusing term for meta field mappers. This change
renames "RootMapper" to "MetadataFieldMapper" and simplifies
how metadata mappers are setup.
It also requires that metadata mappers are now a FieldMapper
(MetadataFieldMapper extends from AbstractFieldMapper). The only
use of a root mapper that wasn't a field mapper was the theoretical
"external" root mapper (just a test mapper). But it doesn't make
sense to not have an actual field, and this falls inline with
the hopefully eventual collapsing of AbstractFieldMapper/FieldMapper/Mapper.
- shard listing actions underpinning shard allocation do not have access to that new node yet (causing errors during shard allocation, see #11923)
- the very first cluster state published to a node already has shard assignments to it. This surfaced other issues we are working to fix separately
This commit changes the reroute to be done post processing the initial join cluster state to side step these issues while we work on a longer term solution.
Closes #11960
We had several problems with Java Serialization in the past. At some point
in the Java 1.7.x series, JDKs were not compatible anymore when java
serialization (ObjectStream) was used to exchange objects. In elasticsearch
we used this to serialize exceptions across the wire, which caused several problems
with incompatible JDKs. While causing lots of trouble, this essentially prevented
users from moving forward and upgrading their JVMs. To prevent these kinds of issues
this commit removes the dependency on java serialization entirely and bans the
usage of ObjectOutputStream and ObjectInputStream entirely.
Yet, we can't fully serialize all exceptions anymore, such that this commit
is best effort and adds hand written serialization to all elasticsearch exceptions
as well as to a selected set of JDK and Lucene exceptions (see StreamOutput#writeThrowable /
StreamInput.readThrowable). Stacktraces should be preserved for all exceptions while
several names might be replaced with ElasticsearchException if there is no mapping for
the given exception.
In order to support older RPM based distributions like CentOS5,
we should have one RPM available, which is not signed.
This commit creates an unsigned RPM first, then moves it over to
target/releases during the build, then builds a signed RPM.
The unsigned one is uploaded via S3, whereas the signed one is
used for the repositories.
In addition, you can now build an RPM without having to specify
any gpg credentials due to offloading this into a maven profile
that is only activated when specifying the `rpm.sign` property.
Closes #11587
Instead of maintaining a thread local cache in the PercolatorQueriesRegistry.
Before, the PercolatorQueriesRegistry had its own cache because all the queries had to forcefully opt out of caching. Nowadays in master small segments are never cached by the query cache, so the reason for the dedicated cache is no longer valid.
Today we keep track of how often filters are used at the index level in order
to decide whether they should be cached or not. This is an issue if you have
several shards of the same index on the same node as it will multiply statistics
by the number of shards that you have for this index on the node, which defeats
the purpose of waiting for a filter to be reused before caching them.
If the translog UUID is corrupted we should not convert it
to UTF-8 since it might be invalid. Instead we should compare
the UTF-8 byte representation directly.
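A minimal sketch of the byte-level comparison (illustrative; the actual check reads the header bytes from the translog file):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// compare the expected UUID's UTF-8 bytes against the raw bytes read from
// the translog header, without ever decoding the possibly-corrupt bytes
static boolean uuidMatches(String expectedTranslogUuid, byte[] headerBytes) {
    byte[] expected = expectedTranslogUuid.getBytes(StandardCharsets.UTF_8);
    return Arrays.equals(expected, headerBytes);
}
```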