The _field_names field was fixed in 1.5.1 (#10268) to correctly be
disabled for indexes created before 1.3.0. However, only the exists filter
was updated to check this enabled flag on 1.x/1.5. The missing
filter on those branches still checks the field type to see if it
is indexed, which causes the filter to always try to use
the _field_names field for those old indexes.
This change adds a test for the missing filter to the old index tests.
closes#10842
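For illustration only, a backwards-compatibility check along those lines could look roughly like this with the 1.x Java API; the index name, field name, and the surrounding harness are made up here, and `client` is assumed to point at a cluster that still holds the old index:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;

public class MissingFilterOldIndexCheck {

    // Runs a `missing` filter against an index created before 1.3.0. On such an
    // index the filter must not rely on the _field_names field (it was never
    // indexed there) and has to fall back to checking the field itself.
    static long countDocsMissingField(Client client) {
        SearchResponse response = client.prepareSearch("old-1.2.x-index")
                .setQuery(QueryBuilders.filteredQuery(
                        QueryBuilders.matchAllQuery(),
                        FilterBuilders.missingFilter("field_without_value")))
                .get();
        return response.getHits().getTotalHits();
    }
}
```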
Fixes a minor issue with specifying the correct version when starting the package release script.
Also fixes another issue to make sure that the S3 bucket parameters behave the same.
1. Initialize the SecurityManager after things like mlockall. Their tests currently
don't run with the SecurityManager enabled, and it's simpler to just
run mlockall etc. first.
2. Remove redundant test permissions (junit4.childvm.cwd/temp). This
is already added as java.io.tmpdir.
3. Improve tests to load the generated policy with various
settings and assert things about the permissions on configured
directories (see the sketch after this list).
4. Refactor the logic to make it easier to fine-tune the permissions later.
For example, we currently allow write access to conf/. In the future
I think we can improve testing so we are able to make improvements here.
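As a rough sketch of the kind of assertion item 3 describes (not the actual test code), the generated policy can be loaded with the standard `java.security` APIs and queried for permissions on specific directories; the policy path and directory paths below are hypothetical:

```java
import java.io.FilePermission;
import java.net.URI;
import java.security.CodeSource;
import java.security.Policy;
import java.security.ProtectionDomain;
import java.security.URIParameter;
import java.security.cert.Certificate;

public class GeneratedPolicyCheck {
    public static void main(String[] args) throws Exception {
        // Load the generated policy file the same way the JVM would.
        Policy policy = Policy.getInstance("JavaPolicy",
                new URIParameter(URI.create("file:///tmp/elasticsearch.generated.policy")));

        // A catch-all protection domain, so we only test what the policy grants globally.
        ProtectionDomain domain = new ProtectionDomain(
                new CodeSource(null, (Certificate[]) null), null);

        // Example assertions: read/write on the configured data directory,
        // but no write access outside the configured directories.
        assert policy.implies(domain, new FilePermission("/var/lib/elasticsearch/-", "read,write"));
        assert !policy.implies(domain, new FilePermission("/etc/passwd", "write"));
    }
}
```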
Since the circuit breaking service doesn't actually change for
BigArrays, we can eagerly create a new instance only once and use that
for all further invocations of `withCircuitBreaking`.
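The idea reduces to caching the circuit-breaking variant in a field instead of allocating it on every call. A stripped-down sketch (the class shape is simplified, not the real BigArrays source):

```java
public class BigArraysSketch {

    private final boolean checkBreaker;
    private final BigArraysSketch circuitBreakingInstance;

    public BigArraysSketch() {
        this.checkBreaker = false;
        // The breaker service never changes, so build the breaking variant once up front.
        this.circuitBreakingInstance = new BigArraysSketch(true);
    }

    private BigArraysSketch(boolean checkBreaker) {
        this.checkBreaker = checkBreaker;
        this.circuitBreakingInstance = this;
    }

    // Every caller now gets the same pre-built instance instead of a fresh allocation.
    public BigArraysSketch withCircuitBreaking() {
        return circuitBreakingInstance;
    }
}
```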
This plugin should be ignored because it makes the internal Eclipse build fail when there are NOCOMMIT comments in files, which are expected during feature development. This change only affects Eclipse users, and only when they use the m2e Eclipse integration.
This changes the default ranking behaviour of single-term queries on numeric fields to use the usual Lucene TermQuery scoring logic rather than a constant-scoring wrapper.
Closes#10628
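At the Lucene level the before/after boils down to the following (how the numeric value is actually encoded into an indexed term is glossed over here, and the field and value are made up):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class NumericTermScoringSketch {
    public static void main(String[] args) {
        Term term = new Term("age", "42");

        // Before: the term query was wrapped so every match got the same constant score.
        Query before = new ConstantScoreQuery(new TermQuery(term));

        // After: plain TermQuery scoring, so matches are ranked by the usual similarity.
        Query after = new TermQuery(term);

        System.out.println("before: " + before + ", after: " + after);
    }
}
```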
We have some duplication in TransportIndexAction/TransportShardBulkAction because
we have completely different branches for INDEX and CREATE
operations. This commit tries to share the logic better between these two cases.
Adds support for calculating and sending diffs, instead of the full cluster state, for the most frequently changing elements - the cluster state, meta data and routing table.
Closes#6295
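A toy illustration of the mechanism (not the actual API): only the elements that changed since the receiver's version travel over the wire, and the receiver rebuilds the full state from its previous copy.

```java
import java.util.Objects;

public class ClusterStateDiffSketch {

    static class State {
        final long version;
        final String metaData;      // stand-in for the real meta data
        final String routingTable;  // stand-in for the real routing table

        State(long version, String metaData, String routingTable) {
            this.version = version;
            this.metaData = metaData;
            this.routingTable = routingTable;
        }
    }

    static class Diff {
        final long version;
        final String metaData;      // null means "unchanged, keep the previous value"
        final String routingTable;  // null means "unchanged, keep the previous value"

        Diff(long version, String metaData, String routingTable) {
            this.version = version;
            this.metaData = metaData;
            this.routingTable = routingTable;
        }

        // The receiver applies the diff to the state it already has.
        State applyTo(State previous) {
            return new State(version,
                    metaData != null ? metaData : previous.metaData,
                    routingTable != null ? routingTable : previous.routingTable);
        }
    }

    // The sender only includes the elements that actually changed.
    static Diff diff(State previous, State current) {
        return new Diff(current.version,
                Objects.equals(previous.metaData, current.metaData) ? null : current.metaData,
                Objects.equals(previous.routingTable, current.routingTable) ? null : current.routingTable);
    }
}
```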
Whenever a transport client executes a request, it uses a built-in
RetryListener which tries to execute the request on another node.
However, if a connection error occurs, the onFailure() callback of
the listener is triggered on the netty I/O thread, so that thread may
end up executing whatever failure handling has been attached.
This commit offloads the onFailure handling to the generic thread pool.
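A simplified sketch of that handoff (the listener interface and executor here are stand-ins; in the real code the executor would be the node's generic thread pool):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public class OffloadFailureSketch {

    interface ActionListener<T> {
        void onResponse(T response);
        void onFailure(Throwable t);
    }

    // Stand-in for threadPool.generic().
    static final Executor GENERIC = Executors.newCachedThreadPool();

    // Instead of invoking the caller's onFailure directly on the netty I/O thread,
    // hand it off so slow or blocking failure handling cannot stall network I/O.
    static <T> void notifyFailure(final ActionListener<T> listener, final Throwable t) {
        GENERIC.execute(new Runnable() {
            @Override
            public void run() {
                listener.onFailure(t);
            }
        });
    }
}
```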
In order to automatically sign and upload our Debian and RPM
packages, this commit incorporates signing into the build process
and adds the necessary steps to the release process. In order to do this,
the pom.xml has been adapted and the RPM and jdeb maven plugins have been
updated, so the packages are signed on build. However, the repositories
need to be signed as well.
Syncing the repos requires downloading the current repo, adding
the new packages and syncing it back.
The following environment variables are now required as part of the build
* GPG_KEY_ID - the key ID of the key used for signing
* GPG_PASSPHRASE - your GPG passphrase
* S3_BUCKET_SYNC_TO - S3 bucket to sync new repo into
The following environment variables are optional
* S3_BUCKET_SYNC_FROM: S3 bucket to get existing packages from
* GPG_KEYRING - home of gnupg, defaults to ~/.gnupg
The following command line tools are needed
* createrepo (creates RPM repositories)
* expect (used by the maven rpm plugin)
* apt-ftparchive (creates DEB repositories)
* gpg (signs packages and repo files)
* s3cmd (syncing between the different S3 buckets)
The current approach would also work for users who want to run their
own repositories; all they need to change is a couple of environment
variables.
Minor implementation detail: right now the branch name is used as the version
for the repositories (like 1.4/1.5/1.6) - if we ever change our branch naming
scheme, the script needs to be fixed.
* Removed the docs for `index.compound_format` and `index.compound_on_flush` - these are expert settings which should probably be removed (see https://github.com/elastic/elasticsearch/issues/10778)
* Removed the docs for `index.index_concurrency` - another expert setting
* Labelled the segments verbose output as experimental
* Marked the `compression`, `precision_threshold` and `rehash` options as experimental in the cardinality and percentile aggs
* Improved the experimental text on `significant_terms`, `execution_hint` in the terms agg, and `terminate_after` param on count and search
* Removed the experimental flag on the `geobounds` agg
* Marked the settings in the `merge` and `store` modules as experimental, rather than the modules themselves
Closes#10782
The translog might be reused across engines, which is currently a problem
in the design and forces us to allow `close` to be called more than once.
This moves the closed check used by snapshots onto the actual file so the loop can exit.
Relates to #10807
If the translog is closed while a snapshot operation is in progress,
we must fail the snapshot operation, otherwise we end up in an endless
loop.
Closes#10807
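A stripped-down sketch of the failure mode and the fix (names and structure are illustrative, not the real translog code): the snapshot loop has to observe the closed flag of the file it reads and fail, rather than retry forever once the translog has been closed concurrently.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TranslogSnapshotSketch {

    private final AtomicBoolean closed = new AtomicBoolean(false);

    // Closing may legally happen more than once, e.g. when the translog is
    // reused across engines.
    public void close() {
        closed.set(true);
    }

    public Object snapshot() {
        while (true) {
            if (closed.get()) {
                // Without this check a concurrent close() leaves the loop spinning forever.
                throw new IllegalStateException("translog closed while taking a snapshot");
            }
            Object snapshot = trySnapshot();
            if (snapshot != null) {
                return snapshot;
            }
            // otherwise retry, e.g. because the underlying file was swapped out underneath us
        }
    }

    // Placeholder for reading the current translog file; may fail transiently.
    private Object trySnapshot() {
        return new Object();
    }
}
```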
Internal: Change BigArrays to not extend AbstractComponent
In order to avoid the getLogger(getClass()) calls in the
AbstractComponent constructor.
It seems like BigArrays used to be a singleton, but it actually
no longer is one: every time a SearchContext is created, a
new BigArrays instance is created via the
withCircuitBreaking call.
closes#10798
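The cost being avoided is easy to see in a minimal sketch (class names are illustrative, and the JDK logger stands in for Elasticsearch's own logging): a base class that resolves a logger for the concrete subclass in its constructor makes every instantiation pay for that lookup, which adds up for something created per SearchContext.

```java
import java.util.logging.Logger;

public class LoggerPerInstanceSketch {

    static class AbstractComponentLike {
        protected final Logger logger;

        AbstractComponentLike() {
            // Runs on every construction of every subclass instance.
            this.logger = Logger.getLogger(getClass().getName());
        }
    }

    static class FrequentlyCreated extends AbstractComponentLike {
    }

    public static void main(String[] args) {
        // Each construction performs a logger lookup the class never needed.
        for (int i = 0; i < 3; i++) {
            new FrequentlyCreated();
        }
    }
}
```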
Due to timing issues, mappings that are required to index a document might not
be available on the replica at indexing time. In that case the replica starts
listening to cluster state changes and re-parses the document until no dynamic
mapping updates are generated.
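A rough sketch of that retry loop (types and method names are placeholders, not the actual engine code): as long as parsing the document still generates a dynamic mapping update, the required mapping has not arrived yet, so the replica waits for the next cluster state change and parses again.

```java
public class ReplicaMappingRetrySketch {

    // A non-null update means parsing had to invent mappings the replica does not have yet.
    static class ParsedDocument {
        final Object dynamicMappingUpdate;

        ParsedDocument(Object dynamicMappingUpdate) {
            this.dynamicMappingUpdate = dynamicMappingUpdate;
        }
    }

    ParsedDocument parse(String source) {
        return new ParsedDocument(null); // placeholder for real document parsing
    }

    void waitForNextClusterStateChange() throws InterruptedException {
        Thread.sleep(10); // placeholder for a cluster state observer
    }

    void index(ParsedDocument doc) {
        // placeholder for the actual indexing step
    }

    void indexOnReplica(String source) throws InterruptedException {
        ParsedDocument doc = parse(source);
        while (doc.dynamicMappingUpdate != null) {
            waitForNextClusterStateChange();
            doc = parse(source);
        }
        index(doc);
    }
}
```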
Hi there. I've been experimenting with the search templates recently and I'm a bit confused. Shouldn't the Mustache tags be written like `{{tagname}}` instead of `{tagname}`? You're using `{{...}}` [here](http://www.elastic.co/guide/en/elasticsearch/reference/current/search-template.html), BTW.
Using the first example in that page seems to indicate that something's wrong, or am I missing something?
```
$ curl 'localhost:9200/test/_search' -d '{"query":{"template":{"query":{"match":{"text":"{keywords}"}},"params":{"keywords":"value1_foo"}}}}'
{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]}}
$ curl 'localhost:9200/test/_search' -d '{"query":{"template":{"query":{"match":{"text":"{{keywords}}"}},"params":{"keywords":"value1_foo"}}}}'
{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"test","_type":"testtype","_id":"1","_score":1.0,"_source":{"text":"value1_foo"}}]}}
```