Adds support for calculating and sending diffs instead of the full cluster state for the most frequently changing elements - cluster state, metadata and routing table.
Closes #6295
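A minimal sketch of the idea, with hypothetical interface names (the actual classes introduced by this change may look different):

```java
// Hypothetical sketch: instead of serializing the full cluster state, the
// master computes a delta against the last state a node has seen, and the
// node applies that delta to its local copy.
interface Diff<T> {
    T apply(T previousPart);        // rebuild the new part from the old one
}

interface Diffable<T> {
    Diff<T> diff(T previousPart);   // compute the delta since previousPart
}
```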
Whenever a transport client executes a request, it uses a built-in
RetryListener which tries to execute the request on another node.
However, if a connection error occurs, the onFailure() callback of
the listener is triggered, and the netty I/O thread might still be used
to handle whatever failure has occurred.
This commit offloads the onFailure handling to the generic thread pool.
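A hedged sketch of the offloading, assuming an illustrative listener class and a plain java.util.concurrent executor standing in for the generic thread pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch (not the actual RetryListener): failure handling is
// dispatched to a generic thread pool instead of running on the netty I/O
// thread that delivered the connection error.
class OffloadingRetryListener {
    private final ExecutorService genericPool = Executors.newCachedThreadPool();

    void onFailure(Throwable cause) {
        // free the I/O thread immediately; retry on another node elsewhere
        genericPool.execute(() -> retryOnNextNode(cause));
    }

    private void retryOnNextNode(Throwable cause) {
        // pick the next node and re-send the request (omitted)
    }
}
```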
In order to automatically sign and upload our Debian and RPM
packages, this commit incorporates signing into the build process
and adds the necessary steps to the release process. To do this,
the pom.xml has been adapted and the RPM and jdeb maven plugins have been
updated, so the packages are signed at build time. However, the repositories
need to be signed as well.
Syncing the repos requires downloading the current repo, adding
the new packages and syncing it back.
The following environment variables are now required as part of the build
* GPG_KEY_ID - the key ID of the key used for signing
* GPG_PASSPHRASE - your GPG passphrase
* S3_BUCKET_SYNC_TO - S3 bucket to sync new repo into
The following environment variables are optional
* S3_BUCKET_SYNC_FROM - S3 bucket to get existing packages from
* GPG_KEYRING - home of gnupg, defaults to ~/.gnupg
The following command line tools are needed
* createrepo (creates RPM repositories)
* expect (used by the maven rpm plugin)
* apt-ftparchive (creates DEB repositories)
* gpg (signs packages and repo files)
* s3cmd (syncing between the different S3 buckets)
The current approach would also work for users who want to run their
own repositories; all they need to change is a couple of environment
variables.
Minor implementation detail: right now the branch name is used as the version
for the repositories (like 1.4/1.5/1.6) - if we ever change our branch naming
scheme, the script needs to be fixed.
* Removed the docs for `index.compound_format` and `index.compound_on_flush` - these are expert settings which should probably be removed (see https://github.com/elastic/elasticsearch/issues/10778)
* Removed the docs for `index.index_concurrency` - another expert setting
* Labelled the segments verbose output as experimental
* Marked the `compression`, `precision_threshold` and `rehash` options as experimental in the cardinality and percentile aggs
* Improved the experimental text on `significant_terms`, `execution_hint` in the terms agg, and `terminate_after` param on count and search
* Removed the experimental flag on the `geobounds` agg
* Marked the settings in the `merge` and `store` modules as experimental, rather than the modules themselves
Closes #10782
The translog might be reused across engines, which is currently a problem
in the design, such that we have to allow calls to `close` more than once.
This moves the closed check during snapshot onto the actual file so that the loop exits.
Relates to #10807
If the translog is closed while a snapshot operation is in progress,
we must fail the snapshot operation, otherwise we end up in an endless
loop.
Closes #10807
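A minimal sketch of the idea under these assumptions: a plain NIO FileChannel stands in for the translog file, and the class and method names are illustrative, not the actual translog code.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;

// The closed check is made against the actual file channel inside the read
// loop, so a translog closed mid-snapshot fails fast instead of spinning.
final class SnapshotReader {
    private final FileChannel channel;

    SnapshotReader(FileChannel channel) {
        this.channel = channel;
    }

    void readFully(ByteBuffer buffer, long position) throws IOException {
        while (buffer.hasRemaining()) {
            if (channel.isOpen() == false) {
                throw new ClosedChannelException();  // exit the loop, fail the snapshot
            }
            int read = channel.read(buffer, position);
            if (read < 0) {
                throw new IOException("unexpected end of translog file");
            }
            position += read;
        }
    }
}
```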
Internal: Change BigArrays to not extend AbstractComponent
In order to avoid the `getLogger(getClass())` calls in the
AbstractComponent constructor.
It seems like BigArrays used to be a singleton, but it actually
no longer is one: every time a SearchContext is created, a
new BigArrays instance is created via the
`withCircuitBreaking` call.
closes #10798
Due to timing issues, mappings that are required to index a document might not
be available on the replica at indexing time. In that case the replica starts
listening to cluster state changes and re-parses the document until no dynamic
mapping updates are generated.
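A self-contained, hypothetical sketch of that retry loop; a BlockingQueue stands in for the cluster state change notifications, and the class and method names are illustrative rather than the actual engine code.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The replica re-parses the document after every cluster state change until
// parsing no longer produces dynamic mapping updates.
class ReplicaIndexer {
    // each element signals "a new cluster state has been applied"
    private final BlockingQueue<Object> clusterStateChanges = new LinkedBlockingQueue<>();

    void onClusterStateChanged() {
        clusterStateChanges.offer(new Object());
    }

    void index(String document) throws InterruptedException {
        while (parseProducesDynamicMappingUpdates(document)) {
            clusterStateChanges.take();  // wait for the missing mappings to arrive
        }
        // mappings are complete on this replica: safe to index the document
    }

    private boolean parseProducesDynamicMappingUpdates(String document) {
        return false;  // placeholder: the real check inspects the parse result
    }
}
```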
Hi there. I've been experimenting with the search templates recently and I'm a bit confused. Shouldn't the Mustache tags be written like `{{tagname}}` instead of `{tagname}`? You're using `{{...}}` [here](http://www.elastic.co/guide/en/elasticsearch/reference/current/search-template.html), BTW.
Using the first example in that page seems to indicate that something's wrong, or am I missing something?
```
$ curl 'localhost:9200/test/_search' -d '{"query":{"template":{"query":{"match":{"text":"{keywords}"}},"params":{"keywords":"value1_foo"}}}}'
{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]}}
$ curl 'localhost:9200/test/_search' -d '{"query":{"template":{"query":{"match":{"text":"{{keywords}}"}},"params":{"keywords":"value1_foo"}}}}'
{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"test","_type":"testtype","_id":"1","_score":1.0,"_source":{"text":"value1_foo"}}]}}
```
MergeContext currently exists to store conflicts, and to provide
a mechanism to add dynamic fields. MergeResults store the same
conflicts. This change merges the two classes together, and also
removes the MergeFlags construct.
This is in preparation for simplifying the callback structures
to dynamically add fields, which will require storing the mapping
updates in the results, instead of having a sneaky callback to
the DocumentMapper instance. It also just makes more sense that
the "results" of a merge are conflicts that occurred, along with
updates that may have occurred. For MergeFlags, any future needs
for parameterizing the merge (which seems unlikely) can just be
added directly to the MergeResults, as simulate is with this change.
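A hypothetical sketch of what such a unified result object could look like (not the actual class, just the shape described above):

```java
import java.util.ArrayList;
import java.util.List;

// A single result object carrying both the conflicts and the mapping updates
// produced by a merge, plus the simulate flag that used to live in MergeFlags.
final class MergeResult {
    private final boolean simulate;
    private final List<String> conflicts = new ArrayList<>();
    private final List<Object> mappingUpdates = new ArrayList<>();

    MergeResult(boolean simulate) {
        this.simulate = simulate;
    }

    boolean simulate() {
        return simulate;
    }

    void addConflict(String conflict) {
        conflicts.add(conflict);
    }

    void addMappingUpdate(Object update) {
        mappingUpdates.add(update);
    }

    boolean hasConflicts() {
        return conflicts.isEmpty() == false;
    }
}
```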
This refactoring and cleanup addresses the fact that each request handler ends up
implementing too many methods that can instead be provided when the request handler itself
is registered, including a prototype-like class that can be used to instantiate
new request instances for streaming.
closes #10730
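A hedged sketch of the registration style, with illustrative signatures (not the actual TransportService API):

```java
import java.util.function.Supplier;

// Everything a handler needs, including a request factory that plays the role
// of the streaming "prototype", is supplied when the handler is registered.
interface RequestHandler<R> {
    void handle(R request);
}

final class HandlerRegistry {
    <R> void register(String action,
                      Supplier<R> requestFactory,   // replaces a newInstance() override
                      String executor,              // replaces an executor() override
                      RequestHandler<R> handler) {
        // store the registration keyed by action name (omitted)
    }
}
```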
Today we have a lot of bloat in the IndexStore and related classes. The IndexStore interface
is unneeded, as we always subclass AbstractIndexStore, and it hides circular dependencies
that are problematic when added. Guice proxies them if you have an interface, which is bad in
general. This commit removes most of the bloat classes and unifies all the classes we have
into a single one, since they are all just structural and don't encode any functionality.
Refactor TransportShardReplicationOperationAction state management into clearly separated Primary and Replication phases. The primary phase is responsible for routing the request to the node holding the primary, validating it and performing the operation on the primary. The Replication phase is responsible for sending the request to the replicas and managing their responses.
This also adds unit test infrastructure for this class, and some basic tests. We can extend it later as we continue developing.
Closes #10749
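An illustrative sketch of the two-phase split, with hypothetical names:

```java
// The primary phase routes, validates and executes the request on the node
// holding the primary; the replication phase then fans the request out to
// the replicas and collects their responses.
final class ShardReplicationOperation<Request> {

    void execute(Request request) {
        Object primaryResult = primaryPhase(request);
        replicationPhase(request, primaryResult);
    }

    private Object primaryPhase(Request request) {
        // resolve the shard, route to the primary, validate, and perform
        // the operation on the primary (omitted)
        return new Object();
    }

    private void replicationPhase(Request request, Object primaryResult) {
        // send the request to each replica and track their responses (omitted)
    }
}
```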
This commit adds support for structured errors / failures / exceptions
on the elasticsearch REST layer. Exceptions are rendered with at least
a `type` and a `reason` corresponding to the exception name and the message.
Some exceptions, like the ones associated with an index or a shard, will have
additional information about the index the exception was triggered on or the
shard respectively.
Each rendered response will also contain a list of root causes, which is a list
of distinct shard level errors returned for the request. Root causes are the lowest
level elasticsearch exception found per shard response and are intended to be displayed
to the user to indicate the source of the exception.
Shard level responses are grouped by their type and reason by default to reduce the amount
of duplicates returned. Yet the same exception returned from different indices will not be
grouped.
Closes #3303
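A hedged sketch that assembles the described shape with plain Java maps; the `type`, `reason` and root-cause fields come from the description above, while the class, helper method and sample values are purely illustrative.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

final class RestErrorExample {

    static Map<String, Object> shardFailure(String type, String reason, String index, int shard) {
        Map<String, Object> failure = new LinkedHashMap<>();
        failure.put("type", type);      // exception name
        failure.put("reason", reason);  // exception message
        failure.put("index", index);    // extra info for index/shard level exceptions
        failure.put("shard", shard);
        return failure;
    }

    public static void main(String[] args) {
        Map<String, Object> rootCause =
            shardFailure("query_parsing_exception", "failed to parse query", "test", 0);

        // the distinct lowest-level shard failures become the root causes
        Map<String, Object> error = new LinkedHashMap<>();
        error.put("root_cause", List.of(rootCause));
        error.put("type", rootCause.get("type"));
        error.put("reason", rootCause.get("reason"));

        System.out.println(error);
    }
}
```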