field_value_factor now takes a default that is used if the document doesn't
have a value for that field. It looks like:
"field_value_factor": {
"field": "popularity",
"missing": 1
}
Closes #10841
This commit adds support for running with only one node and sets the
maximum number of nodes to 3 by default. If run with test.nightly=true,
at most 6 nodes are used. This gave a 20% speed improvement compared to
the previous minimum of 3 nodes.
Groovy sandboxing has been disabled by default since 1.4.3, because we found out that it could be worked around, so it makes little sense to keep and maintain it.
Closes #10156
Closes #10480
Best effort to print out the search source depending on how it was set on the SearchRequestBuilder; don't call `internalBuilder()` as that causes the content of the request to be wiped.
Closes #5576
This pull request replaces the current self-made implementation of JSON special-char encoding by re-using the Jackson JsonStringEncoder. It turns out the previous implementation also missed a few special chars, so the tests had to be adjusted accordingly (RFC 4627 was used for reference).
Note: There's another JSON String encoder on our classpath (org.apache.commons.lang3.StringEscapeUtils) that essentially does the same thing but adds quoting to more characters than the Jackson Encoder above.
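For reference, a short standalone sketch of using the Jackson encoder (illustrative only, not the actual change):

    import com.fasterxml.jackson.core.io.JsonStringEncoder;

    public final class EncodeExample {
        public static void main(String[] args) {
            // quoteAsString escapes the characters RFC 4627 requires,
            // e.g. quotes, backslashes and control characters.
            char[] escaped = JsonStringEncoder.getInstance().quoteAsString("tab:\t quote:\"");
            System.out.println(new String(escaped)); // prints: tab:\t quote:\"
        }
    }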
Relates to #5473
The _field_names field was fixed in 1.5.1 (#10268) to correctly be
disabled for indexes before 1.3.0. However, only the exists filter
was updated to check this enabled flag on 1.x/1.5. The missing
filter on those branches still checks the field type to see if it
is indexed, which causes the filter to always try to use
the _field_names field for those old indexes.
This change adds a test to the old index tests for missing filter.
closes #10842
Fixed a minor issue with specifying the correct version when starting the package release script.
Also fixed another issue to make sure that the S3 bucket parameters act the same.
1. Initialize the SecurityManager after things like mlockall. Their tests currently
don't run with the SecurityManager enabled, and it's simpler to just
run mlockall etc. first (see the sketch after this list).
2. Remove redundant test permissions (junit4.childvm.cwd/temp). This
is already added as java.io.tmpdir.
3. Improve tests to load the generated policy with various
settings and assert things about the permissions on configured
directories.
4. Refactor logic to make it easier to fine-grain the permissions later.
For example, we currently allow write access to conf/. In the future
I think we can improve testing so we are able to make improvements here.
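A minimal sketch of the ordering in item 1 (the helper name is hypothetical; this is not the actual bootstrap code):

    public final class BootstrapOrder {
        public static void main(String[] args) {
            // native setup runs first, while no SecurityManager is installed yet
            tryMlockall(); // hypothetical stand-in for the JNA mlockall call
            // only afterwards is the SecurityManager put in place
            System.setSecurityManager(new SecurityManager());
        }

        private static void tryMlockall() {
            // placeholder: lock the process address space into memory
        }
    }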
Since the circuit breaking service doesn't actually change for
BigArrays, we can eagerly create a new instance only once and use that
for all further invocations of `withCircuitBreaking`.
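A minimal sketch of the pattern (illustrative names, not the actual BigArrays source):

    public final class ArraysService {
        private final boolean breakerEnabled;
        private final ArraysService circuitBreaking;

        public ArraysService() {
            this.breakerEnabled = false;
            // the breaking variant never changes, so build it once, up front
            this.circuitBreaking = new ArraysService(true);
        }

        private ArraysService(boolean breakerEnabled) {
            this.breakerEnabled = breakerEnabled;
            this.circuitBreaking = this; // already the breaking variant
        }

        public ArraysService withCircuitBreaking() {
            return circuitBreaking; // no new instance per call
        }
    }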
This plugin should be ignored as it will make the internal Eclipse build fail when there are NOCOMMIT comments in files, which are expected during feature development. This change only affects Eclipse users, and only when they are using the m2e Eclipse integration.
This changes the default ranking behaviour of single-term queries on numeric fields to use the usual Lucene TermQuery scoring logic rather than a constant-scoring wrapper.
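Roughly, in Lucene terms (a simplified sketch; real numeric fields use specially encoded terms rather than plain strings):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.ConstantScoreQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public final class NumericScoringSketch {
        static Query oldBehaviour(Term t) {
            return new ConstantScoreQuery(new TermQuery(t)); // every hit gets the same score
        }

        static Query newBehaviour(Term t) {
            return new TermQuery(t); // usual Lucene term scoring applies
        }
    }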
Closes #10628
We have some duplication in TransportIndexAction/TransportShardBulkAction because
we have totally different branches for the INDEX and CREATE
operations. This commit tries to share the logic better between these two cases.
Adds support for calculating and sending diffs, instead of the full cluster state, for the most frequently changing elements: cluster state, meta data and routing table.
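The idea, as a minimal sketch (interface names are illustrative, not the exact API):

    // Instead of serializing the whole cluster state, only the change is sent
    // and applied to the copy the receiving node already has.
    interface Diff<T> {
        T apply(T previousPart); // reconstruct the new part from the old one
    }

    interface Diffable<T> {
        Diff<T> diff(T previousState); // compute what changed since previousState
    }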
Closes #6295
Whenever a transport client executes a request, it uses a built-in
RetryListener which tries to execute the request on another node.
However, if a connection error occurs, the onFailure() callback of
the listener is triggered, the netty I/O thread might still be used
to whatever failure has been added.
This commit offloads the onFailure handling to the generic thread pool.
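A minimal sketch of the offloading (simplified; the real code uses the node's generic thread pool rather than a private executor, and ActionListener rather than this stand-in):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public final class OffloadSketch {
        interface Listener { void onFailure(Exception e); } // stand-in for ActionListener

        private final ExecutorService generic = Executors.newCachedThreadPool();

        void handleConnectionError(Exception error, Listener listener) {
            // the netty I/O thread only schedules the work and returns immediately
            generic.execute(() -> listener.onFailure(error));
        }
    }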
In order to automatically sign and upload our Debian and RPM
packages, this commit incorporates signing into the build process
and adds the necessary steps to the release process. In order to do this,
the pom.xml has been adapted and the RPM and jdeb maven plugins have been
updated, so the packages are signed on build. However, the repositories
need to be signed as well.
Syncing the repos requires downloading the current repo, adding
the new packages and syncing it back.
The following environment variables are now required as part of the build:
* GPG_KEY_ID - the key ID of the key used for signing
* GPG_PASSPHRASE - your GPG passphrase
* S3_BUCKET_SYNC_TO - S3 bucket to sync the new repo into
The following environment variables are optional:
* S3_BUCKET_SYNC_FROM - S3 bucket to get existing packages from
* GPG_KEYRING - home of gnupg, defaults to ~/.gnupg
The following command line tools are needed:
* createrepo (creates RPM repositories)
* expect (used by the maven rpm plugin)
* apt-ftparchive (creates DEB repositories)
* gpg (signs packages and repo files)
* s3cmd (syncing between the different S3 buckets)
The current approach would also work for users who want to run their
own repositories; all they need to change is a couple of environment
variables.
Minor implementation detail: right now the branch name is used as the version
for the repositories (like 1.4/1.5/1.6) - if we ever change our branch naming
scheme, the script needs to be fixed.
* Removed the docs for `index.compound_format` and `index.compound_on_flush` - these are expert settings which should probably be removed (see https://github.com/elastic/elasticsearch/issues/10778)
* Removed the docs for `index.index_concurrency` - another expert setting
* Labelled the segments verbose output as experimental
* Marked the `compression`, `precision_threshold` and `rehash` options as experimental in the cardinality and percentile aggs
* Improved the experimental text on `significant_terms`, `execution_hint` in the terms agg, and `terminate_after` param on count and search
* Removed the experimental flag on the `geobounds` agg
* Marked the settings in the `merge` and `store` modules as experimental, rather than the modules themselves
Closes #10782
The translog might be reused across engines, which is currently a problem
in the design, such that we have to allow calls to `close` more than once.
This moves the closed check for snapshots onto the actual file so that the loop exits.
Relates to #10807