Introduces `XContentParser#namedObject`, which works a little like
`StreamInput#readNamedWriteable`: on startup components register
parsers under a name and a superclass. At runtime we look up the
parser and call it to parse the object.
Right now the parsers take a context object they use to help with
the parsing, but I hope to be able to eliminate the need for this
context, as most of what it is used for at this point is to move
around parser registries, which should eventually be replaced by
this method. I make no effort to do so in this PR because it is
big enough already. This is meant to be a start down a road that
allows us to remove classes like `QueryParseContext`,
`AggregatorParsers`, `IndicesQueriesRegistry`, and
`ParseFieldRegistry`.
The goal here is to reduce the amount of plumbing required to
allow parsing pluggable things. With this you don't have to pass
registries all over the place. Instead you must pass a super
registry to fewer places and use it to wrap the reader. This is
the same tradeoff that we use for NamedWriteable and it allows
much, much simpler binary serialization. We think we want that
same thing for xcontent serialization.
The only parsing actually converted to this method is parsing
`ScoreFunctions` inside of `FunctionScoreQuery`. I chose this
because it is relatively self contained.
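To make the flow concrete, here is a rough sketch; beyond `namedObject` itself, the class names and the registration API shown are illustrative rather than the exact code in this PR:
```java
// Illustrative only: at startup a component registers a parser under a
// superclass and a name, much like NamedWriteable registration.
registry.register(ScoreFunctionBuilder.class, "random_score",
        (parser, context) -> RandomScoreFunctionBuilder.fromXContent(context, parser));

// At parse time, callers look the parser up by superclass and name and invoke it:
ScoreFunctionBuilder<?> function = parser.namedObject(ScoreFunctionBuilder.class,
        "random_score", context);
```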
We are currently checking that no deprecation warnings are emitted in our query tests. That check can be moved to ESTestCase (disabled in ESIntegTestCase), which allows us to easily catch where our tests use deprecated features and assert on the expected warnings.
With this commit, we introduce a cache to the geoip ingest processor.
The cache is enabled by default and caches the 1000 most recent items.
The cache size is controlled by the setting `ingest.geoip.cache_size`.
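For example, to change the cache size you could set in `elasticsearch.yml` (the value here is illustrative):
```yml
ingest.geoip.cache_size: 2000
```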
Closes #22074
In some cases, it might happen that the `_all` field gets a field type that is
not totally configured, and in particular lacks analyzers. This is due to the
fact that `AllFieldMapper.TypeParser.getDefault` uses `Defaults.FIELD_TYPE` as
a default field type, which does not have any analyzers configured since it
does not know about the default analyzers.
While I was fixing a documentation issue (#22007), I looked at the code and discovered that we never actually read the `readonly` parameter the user provides when creating an azure repository.
So if someone sends:
```
PUT _snapshot/my_backup4
{
"type": "azure",
"settings": {
"account": "my_account2",
"location_mode": "primary_only",
"readonly": true
}
}
```
The repository is not actually defined as `readonly`.
It's caused by the fact that we always overwrite the `readonly` setting based on `location_mode`.
If a user sets it to `primary_only`, `readonly` is forced to `false`.
If a user sets it to `primary_then_secondary`, `readonly` is forced to `false`.
If a user sets it to `secondary_only`, `readonly` is forced to `true`.
Note that with this change, a user can force a `secondary_only` repository to `readonly: false`, which will lead to an error later on when we check the repository, since by definition in Azure a secondary repository is not writable.
Another option could have been to detect this mismatch and throw an exception in that case. Not sure it is worth writing more code for that, though.
Closes #22053.
Since the removal of local discovery in https://github.com/elastic/elasticsearch/pull/20960 we rely on minimum master nodes being set in our test cluster. The setting is automatically managed by the cluster (by default) but the current management doesn't work with concurrent single-node async starting. On the other hand, with `MockZenPing` and `discovery.initial_state_timeout` set to `0s`, node starting and joining is very fast, making async starting an unneeded complexity. Tests that still need async starting could, in theory, still do so themselves via background threads.
Note that this change also removes the usage of `INITIAL_STATE_TIMEOUT_SETTINGS` as the starting of nodes is done concurrently (but building them is sequential).
For the record, I also had to remove the geo-hash cell and geo-distance range
queries to make the code compile. These queries already throw an exception in
all cases with 5.x indices, so that does not hurt any more.
I also had to rename all 2.x bwc indices from `index-${version}` to
`unsupported-${version}` to make `OldIndexBackwardCompatibilityIT`
happy.
Set lucene version to 6.4.0-snapshot-ec38570 and update all the sha1s/licenses
Fix invalid combo after upgrade in `query_string` query: `split_on_whitespace=false` is disallowed if `auto_generate_phrase_queries=true`
Adapt the expectations of some tests to the new format of the Lucene explain output
Lucene 6.2 added index and query support for numeric ranges. This commit adds a new RangeFieldMapper for indexing numeric (int, long, float, double) and date ranges and creating appropriate range and term queries. The design is similar to NumericFieldMapper in that it uses a RangeType enum for implementing the logic specific to each type. The following range types are supported by this field mapper: int_range, float_range, long_range, double_range, date_range.
Lucene does not provide a DocValue field specific to RangeField types so the RangeFieldMapper implements a CustomRangeDocValuesField for handling doc value support.
When executing a Range query over a Range field, the RangeQueryBuilder has been enhanced to accept a new relation parameter for defining the type of query as one of: WITHIN, CONTAINS, INTERSECTS. This provides support for finding all ranges that are related to a specific range in a desired way. As with other spatial queries, DISJOINT can be achieved as a MUST_NOT of an INTERSECTS query.
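For illustration, a mapping and a query using the new field type and `relation` parameter might look like this (index, type and field names are hypothetical):
```
PUT range_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "attendee_count": { "type": "int_range" }
      }
    }
  }
}

POST range_index/_search
{
  "query": {
    "range": {
      "attendee_count": {
        "gte": 10,
        "lte": 20,
        "relation": "within"
      }
    }
  }
}
```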
Group, List and Affix settings generate a bogus diff that turns the actual
diff into a string containing a JSON structure, for instance:
```
"action" : {
"search" : {
"remote" : {
"" : "{\"my_remote_cluster\":\"[::1]:60378\"}"
}
}
}
```
which makes reading the setting impossible. This happens, for instance,
if a group or affix setting is rendered via `_cluster/settings?include_defaults=true`.
This change fixes the issue as well as several minor issues with affix settings that
were not accepted as valid settings today.
Today we eagerly resolve unicast hosts. This means that if DNS changes,
we will never find the host at the new address. Moreover, a single host
failing to resolve causes startup to abort. This commit introduces lazy
resolution of unicast hosts. If a DNS entry changes, there is an
opportunity for the host to be discovered. Note that under the Java
security manager, there is a default positive cache of infinity for
resolved hosts; this means that if a user does want to operate in an
environment where DNS can change, they must adjust
networkaddress.cache.ttl in their security policy. And if a host fails
to resolve, we log a warning with the hostname but continue pinging the
other configured hosts.
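For example, to let cached DNS entries expire after sixty seconds (the value is illustrative), one would set in the JVM's security properties file (`$JAVA_HOME/jre/lib/security/java.security`):
```
networkaddress.cache.ttl=60
```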
When doing DNS resolutions for unicast hostnames, we wait until the DNS
lookups timeout. This appears to be forty-five seconds on modern JVMs,
and it is not configurable. If we do these serially, the cluster can be
blocked during ping for a lengthy period of time. This commit introduces
doing the DNS lookups in parallel, and adds a user-configurable timeout
for these lookups.
Relates #21630
#20960 removed `LocalDiscovery` and we now use `ZenDiscovery` in all our tests. To keep cluster formation fast, we use a `MockZenPing` implementation which uses static maps to return instant results, making master election fast. Currently, we don't set `minimum_master_nodes`, causing the occasional split brain when starting multiple nodes concurrently: their pinging is so fast that it misses the fact that one of the nodes has elected itself master. To solve this, `InternalTestCluster` is modified to behave like a true cluster and manage and set `minimum_master_nodes` correctly with every change to the number of nodes.
Tests that want to manage the settings themselves can opt out using a new `autoMinMasterNodes` parameter to the `ClusterScope` annotation.
Having `min_master_nodes` set means a started node may need to wait for other nodes to be started as well. To combat this, we set `discovery.initial_state_timeout` to `0` and wait for the cluster to form once all nodes have been started. Also, because a node may wait and ping while other nodes are started, `MockZenPing` is adapted to wait rather than busy-ping.
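For example, a test that wants to manage the setting itself could opt out like this (the test class is hypothetical):
```java
// Hypothetical test class: opts out of automatic minimum_master_nodes management
// and is now responsible for setting discovery.zen.minimum_master_nodes itself.
@ESIntegTestCase.ClusterScope(autoMinMasterNodes = false)
public class CustomMinMasterNodesIT extends ESIntegTestCase {
    // ...
}
```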
JDK9 removed pathname canonicalization when constructing FilePermission objects, which breaks some of the FilePermissions added by Elasticsearch. This commit adds the system property jdk.io.permissionsUseCanonicalPath which makes JDK9 behave like JDK8 w.r.t. FilePermission objects (see #21534).
This change adds a test discovery (which internally uses the existing
mock zen ping by default). Having the mock that the test framework selects
be a discovery greatly simplifies discovery setup (no more weird callback
to a Node method).
* master:
ShardActiveResponseHandler shouldn't hold to an entire cluster state
Ensures cleanup of temporary index-* generational blobs during snapshotting (#21469)
Remove (again) test uses of onModule (#21414)
[TEST] Add assertBusy when checking for pending operation counter after tests
Revert "Add trace logging when aquiring and releasing operation locks for replication requests"
Allows multiple patterns to be specified for index templates (#21009)
[TEST] fixes rebalance single shard check as it isn't guaranteed that a rebalance makes sense and the method only tests if rebalance is allowed
Document _reindex with random_score
* master: (516 commits)
Avoid angering Log4j in TransportNodesActionTests
Add trace logging when aquiring and releasing operation locks for replication requests
Fix handler name on message not fully read
Remove accidental import.
Improve log message in TransportNodesAction
Clean up of Script.
Update Joda Time to version 2.9.5 (#21468)
Remove unused ClusterService dependency from SearchPhaseController (#21421)
Remove max_local_storage_nodes from elasticsearch.yml (#21467)
Wait for all reindex subtasks before rethrottling
Correcting a typo-Maan to Man-in README.textile (#21466)
Fix InternalSearchHit#hasSource to return the proper boolean value (#21441)
Replace all index date-math examples with the URI encoded form
Fix typos (#21456)
Adapt ES_JVM_OPTIONS packaging test to ubuntu-1204
Add null check in InternalSearchHit#sourceRef to prevent NPE (#21431)
Add VirtualBox version check (#21370)
Export ES_JVM_OPTIONS for SysV init
Skip reindex rethrottle tests with workers
Make forbidden APIs be quieter about classpath warnings (#21443)
...
This change was reverted after it caused random test failures. This was
due to a copy/paste error in the original PR which caused the mock
version of ClusterInfoService to be used whenever the mock *ZenPing* was
used, and the real ClusterInfoService to be used when MockZenPing was
not used.
* Plugins: Convert custom discovery to pull based plugin
This change primarily moves registering custom Discovery implementations
to the pull based DiscoveryPlugin interface. It also keeps the cloud
based discovery plugins re-registering ZenDiscovery under their own name
in order to maintain backwards compatibility. However,
discovery.zen.hosts_provider is changed here to no longer fall back to
discovery.type. Instead, each plugin which previously relied on the
value of discovery.type now sets the hosts_provider to itself if
discovery.type is set to itself, along with a deprecation warning.
At one point in the past when moving out the rest tests from core to
their own subproject, we had multiple test classes which evenly split up
the tests to run. However, we simplified this and went back to a single
test runner to have better reproducibility in tests. This change
removes the remnants of that multiplexing support.
Currently the default S3 buffer size is 100MB, which can be a lot for small
heaps. This pull request updates the default to be 100MB for heaps that are
greater than 2GB and 5% of the heap size otherwise.
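A sketch of the assumed formula (not necessarily the exact code in this pull request):
```java
// Assumed default computation: 5% of the max heap, capped at 100MB.
static long defaultS3BufferSizeInBytes() {
    long maxHeapBytes = Runtime.getRuntime().maxMemory();
    return Math.min(100L * 1024 * 1024, maxHeapBytes / 20);
}
```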
The `IndexService#newQueryShardContext()` method creates a QueryShardContext on
shard `0`, with a `null` reader and that uses `System.currentTimeMillis()` to
resolve `now`. This may hide bugs, since the shard id is sometimes used for
query parsing (it is used to salt random score generation in `function_score`),
passing a `null` reader disables query rewriting and for some use-cases, it is
simply not ok to rely on the current timestamp (e.g. percolation). So this pull
request removes this method and instead requires that all call sites provide
these parameters explicitly.
Since Lucene 6.2, the UkrainianMorfologikAnalyzer is available through the
lucene-analyzers-morfologik jar. This change exposes it to be used as an
elasticsearch plugin.
Lucene 6.3 is expected to be released in the coming weeks, so it'd be good to give
it some integration testing. I had to upgrade randomized-testing too so that
both Lucene and Elasticsearch are on the same version.
Follow up for #21039.
We can revert the previous change and do this a bit smarter than before.
Patch tested successfully manually on ec2 with 2 nodes with a configuration like:
```yml
discovery.type: ec2
network.host: ["_local_", "_site_", "_ec2_"]
cloud.aws.region: us-west-2
```
(cherry picked from commit fbbeded)
Backport of #21048 in master branch
This change moves providing UnicastHostsProvider for zen discovery to be
pull based, adding a getter in DiscoveryPlugin. A new setting is added,
discovery.zen.hosts_provider, to separate the discovery type from the
hosts provider for zen when it is selected. Unfortunately existing
plugins added ZenDiscovery with their own name in order to just provide
a hosts provider, so there are already many users setting the hosts
provider through discovery.type. This change also includes backcompat,
falling back to discovery.type when discovery.zen.hosts_provider is not
set.
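With this change, explicitly selecting zen discovery with the ec2 hosts provider would look something like this (a sketch):
```yml
discovery.type: zen
discovery.zen.hosts_provider: ec2
```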
Here is what is happening without this fix when you try to connect to ec2 APIs:
```
[2016-10-20T12:41:49,925][DEBUG][c.a.a.AWSCredentialsProviderChain] Unable to load credentials from EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
[2016-10-20T12:41:49,926][DEBUG][c.a.a.AWSCredentialsProviderChain] Unable to load credentials from SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey)
[2016-10-20T12:41:49,926][DEBUG][c.a.a.AWSCredentialsProviderChain] Unable to load credentials from com.amazonaws.auth.profile.ProfileCredentialsProvider@1ad14091: access denied ("java.io.FilePermission" "/home/ubuntu/.aws/credentials" "read")
[2016-10-20T12:41:49,927][DEBUG][c.a.i.EC2MetadataClient ] Connecting to EC2 instance metadata service at URL: http://169.254.169.254/latest/meta-data/iam/security-credentials/
[2016-10-20T12:41:49,951][DEBUG][c.a.i.EC2MetadataClient ] Connecting to EC2 instance metadata service at URL: http://169.254.169.254/latest/meta-data/iam/security-credentials/discovery-tests
[2016-10-20T12:41:49,965][DEBUG][c.a.a.AWSCredentialsProviderChain] Unable to load credentials from InstanceProfileCredentialsProvider: Unable to parse Json String.
[2016-10-20T12:41:49,966][INFO ][o.e.d.e.AwsEc2UnicastHostsProvider] [dJfktmE] Exception while retrieving instance list from AWS API: Unable to load AWS credentials from any provider in the chain
[2016-10-20T12:41:49,967][DEBUG][o.e.d.e.AwsEc2UnicastHostsProvider] [dJfktmE] Full exception:
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:131) ~[aws-java-sdk-core-1.10.69.jar:?]
at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:11117) ~[aws-java-sdk-ec2-1.10.69.jar:?]
at com.amazonaws.services.ec2.AmazonEC2Client.describeInstances(AmazonEC2Client.java:5403) ~[aws-java-sdk-ec2-1.10.69.jar:?]
at org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider.fetchDynamicNodes(AwsEc2UnicastHostsProvider.java:116) [discovery-ec2-5.0.0.jar:5.0.0]
at org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider$DiscoNodesCache.refresh(AwsEc2UnicastHostsProvider.java:234) [discovery-ec2-5.0.0.jar:5.0.0]
at org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider$DiscoNodesCache.refresh(AwsEc2UnicastHostsProvider.java:219) [discovery-ec2-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54) [elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.discovery.ec2.AwsEc2UnicastHostsProvider.buildDynamicNodes(AwsEc2UnicastHostsProvider.java:102) [discovery-ec2-5.0.0.jar:5.0.0]
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:358) [elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$1.doRun(UnicastZenPing.java:272) [elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:504) [elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.0.0.jar:5.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
```
For whatever reason, it cannot parse what is coming back from http://169.254.169.254/latest/meta-data/iam/security-credentials/discovery-tests.
But, if you wrap the code within an `AccessController.doPrivileged()` call, then it works perfectly.
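The wrapping looks roughly like this (the surrounding client and request are illustrative):
```java
// Illustrative: run the AWS SDK call with the plugin's own privileges so the
// security manager check does not fail based on the unprivileged caller's stack.
DescribeInstancesResult result = AccessController.doPrivileged(
        (PrivilegedAction<DescribeInstancesResult>) () -> client.describeInstances(request));
```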
Closes #21039.
(cherry picked from commit abfdc70)
* Move all zen discovery classes into o.e.discovery.zen
This collapses sub packages of zen into zen. These all had just a couple
classes each, and there is really no reason to have the subpackages.
* fix checkstyle
`LocalDiscovery` is a discovery implementation that uses static in-memory maps to keep track of current live nodes. This is used extensively in our tests in order to speed up cluster formation (i.e., to shortcut the 3 second ping period used by `ZenDiscovery` by default). This is unfortunate as it means that most of the tests run using different discovery semantics than what is used in production. Instead of replacing the entire discovery logic, we can use a similar approach to only shortcut the pinging components.
This commit fixes an issue with the configuration for the AwsSdkMetrics
logger; the issue is that the logging configuration had used underscores
instead of periods for the settings key (the perils of lenient settings
parsing).
Relates #20313
In 2.x, the S3 repository accepted a `/` (forward slash) to start
the repositories.s3.base_path, and it used a different string splitting
method that removed the forward slash from the base path, so there
were no issues.
In 5.x, we removed this custom string splitting method in favor of
the JDK's string splitting method, which preserved the leading `/`.
The AWS SDK does not like the leading `/` in the key path after the
bucket name, and so it could not find any objects in the S3 repository.
This commit fixes the issue by removing the leading `/` if it exists
and adding a deprecation notice that leading `/` will not be supported
in the future in S3 repository's base_path.
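For example, a repository registered like this (bucket and path are hypothetical) now has the leading `/` stripped from `base_path` and emits a deprecation warning instead of failing to find objects:
```
PUT _snapshot/my_s3_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "base_path": "/backups/prod"
  }
}
```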
UpdateHelper, MetaDataIndexUpgradeService, and some recovery
stuff.
Move ClusterSettings to nullable ctor parameter of TransportService
so it isn't forgotten.
This change proposes the removal of all non-TCP transport implementations. The
mock transport can be used by default to run tests instead of the local transport;
it has roughly the same performance compared to TCP, or at least is not noticeably slower.
This is a master only change; a deprecation notice in 5.x will be committed as a
separate change.
Today when parsing a request, Elasticsearch silently ignores incorrect
(including parameters with typos) or unused parameters. This is bad as
it leads to requests having unintended behavior (e.g., if a user hits
the _analyze API and misspells "tokenizer" then Elasticsearch will
just use the standard analyzer, completely against their intentions).
This commit removes lenient URL parameter parsing. The strategy is
simple: when a request is handled and a parameter is touched, we mark it
as such. Before the request is actually executed, we check to ensure
that all parameters have been consumed. If there are remaining
parameters yet to be consumed, we fail the request with a list of the
unconsumed parameters. An exception has to be made for parameters that
format the response (as opposed to controlling the request); for this
case, handlers are able to provide a list of parameters that should be
excluded from tripping the unconsumed parameters check because those
parameters will be used in formatting the response.
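A minimal sketch of the idea (names and structure are illustrative, not the actual implementation):
```java
import java.util.HashSet;
import java.util.Set;

// Illustrative: fail the request if any parameter was neither consumed by the
// handler nor declared as a response-formatting parameter.
static void ensureAllParamsConsumed(Set<String> requestParams,
                                    Set<String> consumedParams,
                                    Set<String> responseParams) {
    Set<String> unconsumed = new HashSet<>(requestParams);
    unconsumed.removeAll(consumedParams);
    unconsumed.removeAll(responseParams);
    if (unconsumed.isEmpty() == false) {
        throw new IllegalArgumentException("request contains unrecognized parameters: " + unconsumed);
    }
}
```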
Additionally, some inconsistencies between the parameters in the code
and in the docs are corrected.
Relates #20722
* master: (1199 commits)
[DOCS] Remove non-valid link to mapping migration document
Revert "Default `include_in_all` for numeric-like types to false"
test: add a test with ipv6 address
docs: clearify that both ip4 and ip6 addresses are supported
Include complex settings in settings requests
Add production warning for pre-release builds
Clean up confusing error message on unhandled endpoint
[TEST] Increase logging level in testDelayShards()
change health from string to enum (#20661)
Provide error message when plugin id is missing
Document that sliced scroll works for reindex
Make reindex-from-remote ignore unknown fields
Remove NoopGatewayAllocator in favor of a more realistic mock (#20637)
Remove Marvel character reference from guide
Fix documentation for setting Java I/O temp dir
Update client benchmarks to log4j2
Changes the API of GatewayAllocator#applyStartedShards and (#20642)
Removes FailedRerouteAllocation and StartedRerouteAllocation
IndexRoutingTable.initializeEmpty shouldn't override supplied primary RecoverySource (#20638)
Smoke tester: Adjust to latest changes (#20611)
...
We were swallowing the original exception when creating a client with bad credentials.
So even at `TRACE` log level, nothing useful was coming out of it.
With this commit, it now prints:
```
[2016-09-27 15:54:13,118][ERROR][cloud.azure.storage ] [node_s0] can not create azure storage client: Storage Key is not a valid base64 encoded string.
```
Closes #20633.
Backport of #20669 for master branch (6.0)
Today we hold on to all possible tokenizers, tokenfilters etc. when we create
an index service on a node. This was mainly done to allow the `_analyze` API to
directly access all these primitives. We fixed this in #19827 and can now get rid of
the AnalysisService entirely and replace it with a simple map like class. This
ensures we don't create a gazillion long living objects that are entirely useless since
they are never used in most of the indices. Also those objects might consume a considerable
amount of memory since they might load stopwords or synonyms etc.
Closes #19828
Adds an integration test for the file-based discovery plugin
to test the plugin operates correctly and uses the hosts
configured in `unicast_hosts.txt` with a real cluster
Closes #20459
This commit removes `ByteSizeValue`'s methods that are duplicated (ex: `mbFrac()` and `getMbFrac()`) in order to only keep the `getN` form.
It also renames `mb()` -> `getMb()`, `kb()` -> `getKb()` in order to be more consistent with the `ByteSizeUnit` method names.
* Build: Remove old maven deploy support
This change removes the old maven deploy that we have in parallel to
maven-publish, and makes maven-publish fully work with publishing to
maven local. Using `gradle publishToMavenLocal` should be used to
publish to .m2.
Note that there is an unfortunate hack that means for
zip artifacts we must first create/publish a dummy pom file, and then
follow that with the real pom file. It would be nice to have the pom
file contain packaging=zip, but maven central then requires sources and
javadocs. But our zips are really just attached artifacts, so we already
set the packaging type to pom for our zip files. This change just works
around a limitation of the underlying maven publishing library which
silently skips attached artifacts when the packaging type is set to pom.
relates #20164, closes #20375
* Remove unnecessary extra spacing
TransportService is such a central part of the core server that replacing
its implementation is risky and can cause serious issues. This change removes the ability to
plug in TransportService but allows registering a TransportInterceptor that enables
plugins to intercept requests on both the sender and the receiver ends. This is a commonly used
and overwritten functionality but encapsulates the custom code in a contained manner.
update geoip to not include null-valued results from the database
Originally, the plugin would still insert all the requested fields, but
assign null to each one. This fixes that by not writing the fields at
all. Makes for a better experience when the null fields conflict with
the typical geo_point field mapping.
This commit introduces a new plugin for file-based unicast hosts
discovery. This allows specifying the unicast hosts participating
in discovery through a `unicast_hosts.txt` file located in the
`config/discovery-file` directory. The plugin will use the hosts
specified in this file as the set of hosts to ping during discovery.
The format of the `unicast_hosts.txt` file is to have one host/port
entry per line. The hosts file is read and parsed every time
discovery makes ping requests, thus a new version of the file that
is published to the config directory will automatically be picked
up.
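For example, a `unicast_hosts.txt` might look like this (hosts are hypothetical; the port is optional per entry):
```
10.0.0.1
10.0.0.2:9301
seed-node.example.com:9300
```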
Closes #20323
This change replaces the fields parameter with stored_fields when it makes sense.
This is dictated by the renaming we made in #18943 for the search API.
The following list of endpoints has been changed to use `stored_fields` instead of `fields` (see the example after the lists below):
* get
* mget
* explain
The documentation and the rest API spec have been updated to cope with the changes for the following APIs:
* delete_by_query
* get
* mget
* explain
The `fields` parameter has been deprecated for the following APIs (it is replaced by _source filtering):
* update: the fields are extracted from the _source directly.
* bulk: the fields parameter is used but fields are extracted from the source directly so it is allowed to have non-stored fields.
Some APIs still have the `fields` parameter for various reasons:
* cat.fielddata: the fields parameter relates to the fielddata fields that should be printed.
* indices.clear_cache: used to indicate which fielddata fields should be cleared.
* indices.get_field_mapping: used to filter fields in the mapping.
* indices.stats: get stats on fields (stored or not stored).
* termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.
* mtermvectors:
* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.
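For instance, a get request now selects stored fields like this (index, type and field names are hypothetical):
```
GET /my_index/my_type/1?stored_fields=field1,field2
```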
Fixes #20155
Because of security permissions that we do not grant to the AWS SDK (for
use in discovery-ec2 and repository-s3 plugins), certain calls in the
AWS SDK will lead to security exceptions that are logged at the warning
level. These warnings are noise and we should suppress them. This commit
adds plugin log configurations for discovery-ec2 and repository-s3 to
ship with default Log4j 2 configurations that suppress these log
warnings.
Relates #20313
This commit modifies the call sites that allocate a parameterized
message to use a supplier so that allocations are avoided unless the log
level is fine enough to emit the corresponding log message.
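With Log4j 2 the pattern looks roughly like this (the message and exception are illustrative):
```java
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.logging.log4j.util.Supplier;

// Before: the ParameterizedMessage is allocated even when debug is disabled.
logger.debug(new ParameterizedMessage("failed to execute [{}]", request), e);

// After: the lambda defers the allocation until the level check passes.
logger.debug((Supplier<?>) () -> new ParameterizedMessage("failed to execute [{}]", request), e);
```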
This makes GET operations more consistent with `_search` operations which expect
`(stored_)fields` to work on stored fields and source filtering to work on the
`_source` field. This is now possible thanks to the fact that GET operations
do not read from the translog anymore (#20102) and also allows to get rid of
`FieldMapper#isGenerated`.
The `_termvectors` API (and thus more_like_this too) was relying on the fact
that GET operations would extract fields from either stored fields or the source
so the logic to do this that used to exist in `ShardGetService` has been moved
to `TermVectorsService`. It would be nice if term vectors did not rely on this,
but it does not seem to be low-hanging fruit.
Squashes all the subpackages of `org.elasticsearch.rest.action` down to
the following:
* `o.e.rest.action.admin` - Administrative actions
* `o.e.rest.action.cat` - Actions that make tables for `grep`ing
* `o.e.rest.action.document` - Actions that act on documents
* `o.e.rest.action.ingest` - Actions that act on ingest pipelines
* `o.e.rest.action.search` - Actions that search
I'm tempted to merge `search` into `document` but the `document`
package feels fairly complete as is and `Suggest` isn't actually always
about documents either....
I'm also tempted to merge `ingest` into `admin.cluster` because the
latter contains the actions for dealing with stored scripts.
I've moved the `o.e.rest.action.support` into `o.e.rest.action`.
I've also added `package-info.java`s to all packages in `o.e.rest`. I
figure if the package is too small to deserve a `package-info.java` file
then it is too small to deserve to be a package....
Also fixes checkstyle in all moved classes.
Fixes two issues:
1. lang-javascript doesn't support `executable` with a `null` `vars`
parameter. The parameter is quite nullable.
2. reindex didn't support script engines whose `unwrap` method wasn't
a noop. This didn't come up for lang-groovy or lang-painless because
both of those `unwrap`s were noops. lang-javascript copies all maps that
it `unwrap`s.
This adds fairly low level unit tests for these fixes but doesn't add
an integration test that makes sure that reindex and lang-javascript
play well together. That'd make backporting this difficult and would
add a fairly significant amount of time to the build for a fairly rare
interaction. Hopefully the unit tests will be enough.
I also reduced the visibility of a couple classes and renamed/consolidated some
test classes for consistency, eg. removing the `Simple` prefix or using the
`<Type>FieldMapperTests` convention for testing field mappers.
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This enforces a limit to the number of scripts that
can be compiled within a minute. A new dynamic setting is added -
`script.max_compilations_per_minute`, which defaults to 15.
If more dynamic scripts are sent, a user will get the following
exception:
```json
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "i",
"node" : "a5V1eXcZRYiIk8lecjZ4Jw",
"reason" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
}
],
"caused_by" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
},
"status" : 500
}
```
This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.
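A simplified sketch of the fix (the cache, engine and lock fields are illustrative stand-ins for `ScriptService` internals):
```java
// Simplified sketch: re-check the cache under a lock so that concurrent
// requests compile a given script only once.
CompiledScript getOrCompile(String cacheKey, String source) {
    synchronized (compilationLock) {
        CompiledScript compiled = cache.get(cacheKey);
        if (compiled == null) {
            compiled = engine.compile(source);   // may trip the compilation limit
            cache.put(cacheKey, compiled);
        }
        return compiled;
    }
}
```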
Relates to #19396
This commit cleans up indices in a snapshot repository when all
snapshots containing the index have been deleted. Previously, empty
index folders would lay around after all snapshots containing
them were deleted.
conform with the requirements of the writeBlob method by
throwing a FileAlreadyExistsException if attempting to write
to a blob that already exists. This change means implementations
of BlobContainer should never overwrite blobs - to overwrite a
blob, it must first be deleted and then can be written again.
Closes #15579
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.cluster` into
`org.elasticsearch.rest.action.admin.cluster`.
Also fixes line length in these packages.
While I was working on #18703, I discovered a bad behavior when people don't provide AWS key/secret as part of their `elasticsearch.yml` but rely on SysProps or env. variables...
In [`InternalAwsS3Service#getClient(...)`](d4366f8493/plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/InternalAwsS3Service.java (L76-L141)), we have:
```java
Tuple<String, String> clientDescriptor = new Tuple<>(endpoint, account);
AmazonS3Client client = clients.get(clientDescriptor);
```
But if people don't provide credentials, `account` is `null`.
Even if it actually could work, I think that we should use the `AWSCredentialsProvider` we create later on and extract from it the `account` (AWS KEY actually) and then use it as the second value of the tuple.
Closes #19557.
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
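For example (index, type and id are hypothetical):
```
$ curl -i -XPUT 'localhost:9200/test/test/1' -d '{"field": "value"}'
HTTP/1.1 201 Created
Location: /test/test/1
...
```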
Closes #19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
In #15950, #15080 and #16084 we added timeout support for requests with a default client `setTimeoutIntervalInMs`.
So we can remove this useless TODO which was added for only one method.
Closes #18617.
Azure and Google cloud blob containers, as the APIs for both return
a 404 in the case of a missing object, which we already handle through
a NoSuchFileException.
Follow up for #18662 and #18690.
* For consistency, we rename method parameters and use `key` and `secret` instead of `account` and `key`.
* We add some tests to check that settings are correctly applied.
* Tests revealed that some checks are bad like for #18662.
Add test and fix issue for getting the right S3 endpoint
Test when Repository, Repositories or global settings are defined
But ignore testAWSCredentialsWithSystemProviders test
Add tests for AWS Client Configuration
Fix NPE when no region is set
We used to transform region="" to region=null but it's not needed anymore and would actually cause an NPE from now on.
Follow up for #18662
We add some tests to check that settings are correctly applied.
Tests revealed that some checks were missing.
But we ignore `testAWSCredentialsWithSystemProviders` test for now.
Added BlobContainer tests for HDFS storage
and caught a bug at the same time in which
deleteBlob was not raising an IOException
when the blobName did not exist.
Added BlobContainer tests for Azure storage
and caught a bug at the same time in which
deleteBlob was not raising an IOException
when the blobName did not exist.
Follow up for https://github.com/elastic/elasticsearch/pull/17784#discussion_r64575845
Today we are registering repository settings when `S3RepositoryPlugin` starts:
```java
settingsModule.registerSetting(S3Repository.Repository.KEY_SETTING);
settingsModule.registerSetting(S3Repository.Repository.SECRET_SETTING);
settingsModule.registerSetting(S3Repository.Repository.BUCKET_SETTING);
settingsModule.registerSetting(S3Repository.Repository.ENDPOINT_SETTING);
settingsModule.registerSetting(S3Repository.Repository.PROTOCOL_SETTING);
settingsModule.registerSetting(S3Repository.Repository.REGION_SETTING);
settingsModule.registerSetting(S3Repository.Repository.SERVER_SIDE_ENCRYPTION_SETTING);
settingsModule.registerSetting(S3Repository.Repository.BUFFER_SIZE_SETTING);
settingsModule.registerSetting(S3Repository.Repository.MAX_RETRIES_SETTING);
settingsModule.registerSetting(S3Repository.Repository.CHUNK_SIZE_SETTING);
settingsModule.registerSetting(S3Repository.Repository.COMPRESS_SETTING);
settingsModule.registerSetting(S3Repository.Repository.STORAGE_CLASS_SETTING);
settingsModule.registerSetting(S3Repository.Repository.CANNED_ACL_SETTING);
settingsModule.registerSetting(S3Repository.Repository.BASE_PATH_SETTING);
```
We don't need to register those settings as they are repository level settings and not node level settings.
Closes #18945.
creation in the REST tests, as we no longer need it due
to index creation now waiting for active shard copies
before returning (by default, it waits for the primary of
each shard, which is the same as ensuring yellow health).
Relates #19450
Also introduced a `Processor.Parameters` class that is a holder for several services processors rely on;
the IngestPlugin#getProcessors(...) method has been changed to accept `Processor.Parameters` instead
of each service separately.
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.
Relates #19439
Today `node.mode` and `node.local` serve almost the same purpose; they
are a shortcut for `discovery.type` and `transport.type`. If `node.local: true`
or `node.mode: local` is set, elasticsearch will start in _local_ mode, which means
only nodes within the same JVM are discovered and a non-network based transport
is used. The _local_ mode is only really used in tests or if nodes are embedded.
For both embedding and tests, explicit configuration via `discovery.type` and `transport.type`
should be preferred.
This change removes all the usage of these settings and by default doesn't
configure a default transport implementation since netty is now a module. Yet, to make
the user experience flawless, plugins or modules can set a `http.type.default` and
`transport.type.default`. Plugins set this via `PluginService#additionalSettings()`,
which enforces _set-once_ semantics and prevents node startup if set multiple times. This means
that our distributions will just start up with the netty transport since it's packaged as a
module, unless `transport.type` or `http.transport.type` is explicitly set.
This change also found a bunch of bugs since several NamedWriteables were not registered if a
transport client was used. Now that we don't rely on the `node.mode` leniency, which was inherited
instead of using explicit settings, `TransportClient` uses `AssertingLocalTransport`, which detects these problems since it serializes all messages.
Closes #16234
Some tests still start http implicitly or miss configuring the transport clients correctly.
This commit fixes all remaining tests and adds a dependency on `transport-netty` from
`qa/smoke-test-http` and `modules/reindex` since they need an http server running on the nodes.
This also moves all required permissions for netty into its module and out of core.
This change adds a createComponents() method to Plugin implementations
which they can use to return already constructed components/services.
Eventually this should be just services ("components" don't really do
anything), but for now it allows any object so that preconstructed
instances by plugins can still be bound to guice. Over time we should
add basic services as arguments to this method, but for now I have left
it empty so as to not presume what is a necessary service.
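A hypothetical plugin using it could look like this (the service class is illustrative; the method currently takes no arguments):
```java
import java.util.Collection;
import java.util.Collections;

// Hypothetical plugin returning a preconstructed service; the returned objects
// are bound to guice by the node.
public class MyPlugin extends Plugin {
    @Override
    public Collection<Object> createComponents() {
        return Collections.singletonList(new MyService());   // MyService is illustrative
    }
}
```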
The DiscoveryNodeService exists to register CustomNodeAttributes which
plugins can add. This is not necessary, since plugins can already add
additional attributes, and use the node attributes prefix.
This change removes the DiscoveryNodeService, and converts the only
consumer, the ec2 discovery plugin, to add the ec2 availability zone
in additionalSettings().
Repository plugins currently use a lot of custom classes like
RepositoryName and RepositorySettings in order to use guice to construct
repository implementations. But repositories now only really need their
settings to be constructed. Anything else they need (eg a cloud client)
can be constructed within the plugin, instead of via guice.
This change makes repository plugins use the new pull model. It removes
guice from the construction of Repository objects (no more child
injectors) and also from all repository plugins.
The api for snapshot/restore was split up between two interfaces,
Repository and IndexShardRepository. There was also complex
initialization and injection between the two. However, there is always a
one to one relationship between the two.
This change moves the IndexShardRepository api into Repository, as well
as updates the API so as not to require any services to be injected for
subclasses.
* master: (192 commits)
[TEST] Fix rare OBOE in AbstractBytesReferenceTestCase
Reindex from remote
Rename writeThrowable to writeException
Start transport client round-robin randomly
Reword Refresh API reference (#19270)
Update fielddata.asciidoc
Fix stored_fields message
Add missing footer notes in mapper size docs
Remote BucketStreams
Add doc values support to the _size field in the mapper-size plugin
Bump version to 5.0.0-alpha5.
Update refresh.asciidoc
Update shrink-index.asciidoc
Change Debian repository for Vagrant debian-8 box
[TEST] fix test to account for internal empyt reference optimization
Upgrade to netty 3.10.6.Final (#19235)
[TEST] fix histogram test when extended bounds overlaps data
Remove redundant modifier
Simplify TcpTransport interface by reducing send code to a single send method (#19223)
Fix style violation in InstallPluginCommand.java
...
This change activates doc_values on the `_size` field for indices created after 5.0.0-alpha4.
It also adds a note in the breaking changes that explains the situation and how to get around it.
Closes #18334
Today throughout the codebase, catch throwable is used with reckless
abandon. This is dangerous because the throwable could be a fatal
virtual machine error resulting from an internal error in the JVM, or an
out of memory error or a stack overflow error that leaves the virtual
machine in an unstable and unpredictable state. This commit removes
catch throwable from the codebase and removes the temptation to use it
by modifying listener APIs to receive instances of Exception instead of
the top-level Throwable.
Relates #19231
Rename `fields` to `stored_fields` and add `docvalue_fields`
`stored_fields` parameter will no longer try to retrieve fields from the _source but will only return stored fields.
`fields` will throw an exception if the user uses it.
Add `docvalue_fields` as an adjunct to `fielddata_fields` which is deprecated. `docvalue_fields` will try to load the value from the docvalue and fallback to fielddata cache if docvalues are not enabled on that field.
Closes #18943
The only reason for LifecycleComponent taking a generic type was so that
it could return that type on its start and stop methods. However, this
chaining has no practical necessity. Instead, start and stop can be
void, and a whole bunch of confusing generics disappear.
Before, a repository would maintain an index file (named 'index') per
repository, that contained the current snapshots in the repository.
This file was not atomically written, so repositories had to depend on
listing the blobs in the repository to determine what the current
snapshots are, and only rely on the index file if the repository does
not support the listBlobs operation. This could cause an incorrect view
of the current snapshots in the repository if any prior snapshot delete
operations failed to delete snapshot metadata files.
This commit introduces the atomic writing of the index file, and because
atomic writes are not guaranteed if the file already exists, we write to
a generational index file (index-N, where N is the current generation).
We also maintain an index-latest file that contains the current
generation, for those repositories that cannot list blobs.
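The resulting repository contents would look something like this (generation numbers are illustrative):
```
index-0         # previous generation of the snapshot list
index-1         # current generation, written atomically as a new blob
index-latest    # holds the current generation for repositories that cannot list blobs
```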
Closes #19002
Relates #18156
BytesReference should be a really simple interface, yet it has a gazillion
ways to achieve the same thing. Methods like `#hasArray`, `#toBytesArray`, `#copyBytesArray`,
`#toBytesRef` and `#bytes` are all really duplicates. This change simplifies the interface
dramatically and makes implementations of it much simpler. All array access has been removed
and is streamlined through a single `#toBytesRef` method. Utility methods to materialize a
compact byte array have been added too for convenience.
Repository-S3 needs a special permission because of problems in AmazonS3Client: when no region is set on an AmazonS3Client instance, the AWS SDK loads all known partitions from a JSON file and uses Jackson's ObjectMapper for that: this one, in version 2.5.3 with the default binding options, tries to suppress access checks of ctor/field/method and thus requires this special permission. AWS must be fixed to use Jackson correctly and have the correct modifiers on bound classes.
This must be fixed in the aws sdk (see https://github.com/aws/aws-sdk-java/issues/766) but in the meanwhile we have no choice.
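The granted permission presumably looks like this in the plugin's security policy (a sketch based on the description above):
```
grant {
  // Jackson, as used by the AWS SDK, tries to suppress access checks on
  // constructors/fields/methods during data binding.
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
};
```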
closes #18539
Raise IOException on deleteBlob if the blob doesn't exist
This commit raises an IOException on BlobContainer#deleteBlob
if the blob does not exist, in conformance with the BlobContainer
interface contract. Each implementation of BlobContainer now
conforms to this contract (file system, S3, Azure, HDFS). This
commit also contains blob container tests for each of the
repository implementations.
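A sketch of the contract each implementation now follows (the method body and helper are illustrative):
```java
// Illustrative BlobContainer fragment: deleting a missing blob must raise an
// IOException instead of silently succeeding.
@Override
public void deleteBlob(String blobName) throws IOException {
    if (blobExists(blobName) == false) {
        throw new IOException("blob [" + blobName + "] does not exist");
    }
    doDeleteBlob(blobName);   // hypothetical helper performing the actual deletion
}
```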
Closes #18530
As discussed at https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595, we know that the current `discovery-azure` plugin only works with Azure Classic VMs / Services (which are somewhat legacy now).
The proposal here is to rename `discovery-azure` to `discovery-azure-classic` in case some users are using it, and to deprecate it for 5.0.
Closes #19144.
The factory for ingest processors is generic, but that is only for the
return type of the create method. However, the actual consumer of the
factories only cares about Processor, so generics are not needed.
This change removes the generic type from the factory. It also removes
AbstractProcessorFactory, which only existed in order to pull the optional
tag from the config. This functionality is moved to the caller of the
factories in ConfigurationUtil, and the create method now takes the tag.
This allows the covariant return of the implementation to work with
tests not needing casts.
When GCE region is empty we get back from the API something like:
```
{
  "id": "dummy"
}
```
instead of:
```
{
  "id": "dummy",
  "items": [ ]
}
```
This generates an NPE when we aggregate all the lists into a single one.
Closes #16967.
Previously all rest handlers would take Client in their injected ctor.
However, it was only to hold the client around for runtime. Instead,
this can be done just once in the HttpService which handles rest
requests, and passed along through the handleRequest method. It also
should always be a NodeClient, and other types of Clients (eg a
TransportClient) would not work anyways (and some handlers can be
simplified in follow ups like reindex by taking NodeClient).
This change allows Plugin implementations to implement Closeable when they
have resources that should be released. As a first example of how this
can be used, I switched over ingest plugins, which just had the geoip
processor. The ingest framework had chains of closeables to support this,
which are now removed.
The discovery plugin has been broken since 2.x because the code was not compliant with the security manager and because plugins have been refactored.
closes #18637, #15630
Since the Settings infrastructure has been improved, a group setting must be registered by the repository-azure plugin to allow settings like "cloud.azure.storage.my_account.account" to be coherent with Azure plugin documentation.
Instead of plugins calling `registerTokenizer` to extend the analyzer
they now instead have to implement `AnalysisPlugin` and override
`getTokenizer`. This lines up extending plugins with extending
scripts. This allows `AnalysisModule` to construct the `AnalysisRegistry`
immediately as part of its constructor, which makes testing analysis
much simpler.
This also moves the default analysis configuration into `AnalysisModule`,
which is how search is set up.
Like `ScriptModule`, `AnalysisModule` no longer extends `AbstractModule`.
Instead it is only responsible for building `AnalysisRegistry`. We still
bind `AnalysisRegistry` but we only do so in `Node`. This means it
is available at module construction time, so we can slowly remove the need to
bind it in guice.
Related to #18945 and to this 35d3bdab84 (commitcomment-17914150)
In GCS Repository plugin we defined a `service_account` setting which is defined as `Property.Filtered`.
It's not needed as it's only a path to a file.
Closes #18946
* master: (416 commits)
docs: removed obsolete information, percolator queries are not longer loaded into jvm heap memory.
Upgrade JNA to 4.2.2 and remove optionality
[TEST] Increase timeouts for Rest test client (#19042)
Update migrate_5_0.asciidoc
Add ThreadLeakLingering option to Rest client tests
Add a MultiTermAwareComponent marker interface to analysis factories. #19028
Attempt at fixing IndexStatsIT.testFilterCacheStats.
Fix docs build.
Move templates out of the Search API, into lang-mustache module
revert - Inline reroute with process of node join/master election (#18938)
Build valid slices in SearchSourceBuilderTests
Docs: Convert aggs/misc to CONSOLE
Docs: migration notes for _timestamp and _ttl
Group client projects under :client
[TEST] Add client-test module and make client tests use randomized runner directly
Move upgrade test to upgrade from version 2.3.3
Tasks: Add completed to the mapping
Fail to start if plugin tries broken onModule
Remove duplicated read byte array methods
Rename `fields` to `stored_fields` and add `docvalue_fields`
...
This is the same as what Lucene does for its analysis factories, and we have
tests that make sure that the elasticsearch factories are in sync with
Lucene's. This is a first step to move forward on #9978 and #18064.
This changes adds a MapperPlugin interface which allows pull style
retrieval of mappers and metadata mappers added by plugins. For now, I
have kept the MapperRegistry, but this should be removed in the future
as it is just a silly container for 2 maps which could themselves be
passed around.
We pretended to be able to act like a different version node for so long; it's
time to be honest and remove this ability. It's just confusing, and where needed
and tested we should build dedicated extension points.
This change removes some unnecessary dependencies from ClusterService
and cleans up ClusterName creation. ClusterService is now not created
by guice anymore.
Today we have a push model for registering basically anything. All our extension points
are defined on modules which we pass in to plugins. This is harder to maintain and adds
unnecessary dependencies on the modules itself. This change moves towards a pull model
where the plugin offers a getter kind of method to get the extensions. This will also
help in the future if we need to pass dependencies to the extension points which can
easily be defined on the method as arguments if a pull model is used.
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull
based model where script plugins have to implement a dedicated interface
`ScriptPlugin` and define simple getters returning instances rather than
classes.
In 2.0 we added plugin descriptors which require defining a name and
description for the plugin. However, we still have name() and
description() which must be overridden from the Plugin class. This still
exists for classpath plugins. But classpath plugins are mainly for
tests, and even then, referring to classpath plugins with their class is
a better idea. This change removes name() and description(), replacing
the name for classpath plugins with the full class name.
The database files have been doubled in size compared to the previous files being used.
For this reason the database files are now gzip compressed, which required using
`GZIPInputStream` when loading database files.
* master: (51 commits)
Switch QueryBuilders to new MatchPhraseQueryBuilder
Added method to allow creation of new methods on-the-fly.
more cleanups
Remove cluster name from data path
Remove explicit parallel new GC flag
rehash the docvalues in DocValuesSliceQuery using BitMixer.mix instead of the naive Long.hashCode.
switch FunctionRef over to methodhandles
ingest: Move processors from core to ingest-common module.
Fix some typos (#18746)
Fix ut
convert FunctionRef/Def usage to methodhandles.
Add the ability to partition a scroll in multiple slices. API:
use painless types in FunctionRef
Update ingest-node.asciidoc
compute functional interface stuff in Definition
Use method name in bootstrap check might fork test
Make checkstyle happy (add Lookup import, line length)
Don't hide LambdaConversionException and behave like real javac compiled code when a conversion fails. This works anyways, because fallback is allowed to throw any Throwable
Pass through the lookup given by invokedynamic to the LambdaMetaFactory. Without it real lambdas won't work, as their implementations are private to script class
checkstyle have your upper L
...
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind that tests with an empty pipeline.
Removed messy tests; these tests were already covered in the rest tests.
Added an ingest test plugin in the test infra so that each module testing integration with ingest doesn't need to write its own plugin.
Moved reindex ingest tests to the qa module.
Closes #18490
This commit refactors the handling of thread pool settings so that the
individual settings can be registered rather than registering the top
level group. With this refactoring, individual plugins must now register
their own settings for custom thread pools that they need, but a
dedicated API is provided for this in the thread pool module. This
commit also renames the prefix on the thread pool settings from
"threadpool" to "thread_pool". This enables a hard break on the settings
so that:
- some of the settings can be given more sensible names (e.g., the max
number of threads in a scaling thread pool is now named "max" instead
of "size")
- the soft limit on the number of threads in the bulk and indexing
thread pools becomes a hard limit
- the settings names for custom plugins for thread pools can be
prefixed (e.g., "xpack.watcher.thread_pool.size")
- dynamic thread pool settings are removed
Relates #18674
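A sketch of the dedicated registration API, assuming the 5.x-era `Plugin#getExecutorBuilders` / `FixedExecutorBuilder` names:
```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.threadpool.ExecutorBuilder;
import org.elasticsearch.threadpool.FixedExecutorBuilder;

public class MyPlugin extends Plugin {

    // Registers a fixed thread pool whose settings live under the new
    // prefix, e.g. "thread_pool.my_pool.size" and "thread_pool.my_pool.queue_size".
    @Override
    public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) {
        return Collections.singletonList(
                new FixedExecutorBuilder(settings, "my_pool", 4, 100));
    }
}
```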
* master: (184 commits)
Add back pending deletes (#18698)
refactor matrix agg documentation from modules to main agg section
Implement ctx.op = "delete" on _update_by_query and _reindex
Close SearchContext if query rewrite failed
Wrap lines at 140 characters (:qa projects)
Remove log file
painless: Add support for the new Java 9 MethodHandles#arrayLength() factory (see https://bugs.openjdk.java.net/browse/JDK-8156915)
More complete exception message in settings tests
Use java from path if JAVA_HOME is not set
Fix uncaught checked exception in AzureTestUtils
[TEST] wait for yellow after setup doc tests (#18726)
Fix recovery throttling to properly handle relocating non-primary shards (#18701)
Fix merge stats rendering in RestIndicesAction (#18720)
[TEST] mute RandomAllocationDeciderTests.testRandomDecisions
Reworked docs for index-shrink API (#18705)
Improve painless compile-time exceptions
Adds UUIDs to snapshots
Add test rethrottle test case for delete-by-query
Do not start scheduled pings until transport start
Adressing review comments
...
* master: (911 commits)
[TEST] wait for yellow after setup doc tests (#18726)
Fix recovery throttling to properly handle relocating non-primary shards (#18701)
Fix merge stats rendering in RestIndicesAction (#18720)
[TEST] mute RandomAllocationDeciderTests.testRandomDecisions
Reworked docs for index-shrink API (#18705)
Improve painless compile-time exceptions
Adds UUIDs to snapshots
Add test rethrottle test case for delete-by-query
Do not start scheduled pings until transport start
Adressing review comments
Only filter intial recovery (post API) when shrinking an index (#18661)
Add tests to check that toQuery() doesn't return null
Removing handling of null lucene query where we catch this at parse time
Handle empty query bodies at parse time and remove EmptyQueryBuilder
Mute failing assertions in IndexWithShadowReplicasIT until fix
Remove allow running as root
Add upgrade-not-supported warning to alpha release notes
remove unrecognized javadoc tag from matrix aggregation module
set ValuesSourceConfig fields as private
Adding MultiValuesSource support classes and documentation to matrix stats agg module
...
This commit adds a UUID for each snapshot, in addition to the already
existing repository and snapshot name. The addition of UUIDs will enable
more robust handling of the deletion of previous snapshots and lingering
files from partially failed delete operations, on top of being able to
uniquely track each snapshot.
Closes#18228
Relates #18156
This commit clarifies the behavior that must be adhered to by any
implementors of the BlobContainer interface. This is done through
expanded Javadocs.
Closes#18157
Closes#15580
This adds a low-level primitive operation to shrink an existing
index into a new index with a single shard. This primitive expects
all shards of the source index to be allocated on a single node. Once the target index is initializing on the shrink node it takes a snapshot of the source index shards and copies all files into the target index's data folder. An [optimization](https://issues.apache.org/jira/browse/LUCENE-7300) coming in Lucene 6.1 will also allow for an optional constant-time copy if hard links are supported by the filesystem. All mappings are merged into the new index's metadata once the snapshots have been taken on the shrink node.
To shrink an existing index all shards must be moved to a single node (one instance of each shard) and the index must be read-only:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_settings' -d '{
"settings" : {
"index.routing.allocation.require._name" : "shrink_node_name",
"index.blocks.write" : true
}
}'
```
Once all shards are started on the shrink node, the new index can be created via:
```BASH
$ curl -XPUT 'http://localhost:9200/logs/_shrink/logs_single_shard' -d '{
"settings" : {
"index.codec" : "best_compression",
"index.number_of_replicas" : 1
}
}'
```
This API performs all needed checks before the new index is created and selects the shrink node based on the allocation of the source index. This call returns immediately; to monitor shrink progress, use the recovery API, since all copy operations are reflected there, including byte-level copy progress.
The shrink operation does not modify the source index; if a shrink operation is
canceled or fails, the target index can simply be deleted and
all resources are released.
* changes `throttle_retries` to `use_throttle_retries`
* removes registering of all individual repository settings when the plugin starts; not needed
* adds more comments about a deprecated method in the AWS SDK that we nevertheless need to implement in a Delegate class within our tests
This PR changes the InternalTestCluster to support dedicated master nodes. The creation of dedicated master nodes can be controlled using a new `supportsMasterNodes` parameter to the ClusterScope annotation. If set to true (the default), dedicated master nodes will randomly be used. If set to false, no master nodes will be created and data nodes will also be allowed to become masters. If active, test runs will have either 1 or 3 master nodes.
Failing the build on deprecation warnings was removed in
19b3ec88af. This commit removes the
suppressed deprecation warnings so that their use is surfaced in the
build now.
Relates #18582
* master: (158 commits)
Document the hack
Refactor property placeholder use of env. vars
Force java9 log4j hack in testing
Fix log4j buggy java version detection
Make java9 work again
Don't mkdir directly in deb init script
Fix env. var placeholder test so it's reproducible
Remove ScriptMode class in favor of boolean true/false
[rest api spec] fix doc urls
Netty request/response tracer should wait for send
Filter client/server VM options from jvm.options
[rest api spec] fix url for reindex api docs
Remove use of a Fields class in snapshot responses that contains x-content keys, in favor of declaring/using the keys directly.
Limit retries of failed allocations per index (#18467)
Proxy box method to use valueOf.
Use the build-in valueOf method instead of the custom one.
Fixed tests and added a comment to the box method.
Fix boxing.
Do not decode path when sending error
Fix race condition in snapshot initialization
...
This change makes ES compile with java9 again, build 118.
* There are a handful of changes due to failures to determine types during compilation.
* The attachment plugins which use tika needed to have tika upgraded in order to pickup fixes there for java 9.
* azure discovery and s3 repository indirectly depend on jaxb, which is no longer in the default modules. They now add a jaxb dependency externally, and make JarHell allow for this package.
* ESBlobStore tests must move to the test framework if we want to be able to reuse them in the context of plugins.
* To be able to identify more easily what are Integration Tests vs Unit Tests, this commit renames `*AzureTestCase` to `*AzureIntegTestCase`.
* Move some debug level logs to trace level
* Collapse when possible identical catch blocks
* `blobNameFromUri()` no longer needs to get the container name. We just split the URI after the third `/` and simply keep the remaining part.
* Added a Unit test for that
* As we renamed some existing classes, checkstyle is now complaining about the lines width.
* While we are at it, let's replace all calls to `execute().actionGet()` with `get()`
* Move `readSettingsFromFile()` to a Util class. Note that this class might be useful for other plugins (S3/EC2/Azure-discovery for instance) so maybe we should move it to the test framework?
* Replace some part of the code with lambdas
I initially wrongly put this setting under the `cloud.aws.s3.` prefix, which does not make sense. It should be placed at the same place as `max_retries`.
Also applied @tlrx's comments. We should set this even if max_retries is not set (when using default values).
Also added some documentation about this setting.
This commit fixes the inequality symbol used in a test assertion in
RepositoryS3SettingsTests#testInvalidChunkBufferSizeRepositorySettings. The
inequality symbol was previously backwards and was fixed in commit
cad0608cdb, but fixing it here was
missed in that commit.
Closes#18449
Probably when we updated Azure SDK, we introduced a regression.
Actually, we are not able to remove files anymore.
For example, if you register a new azure repository, the snapshot service tries to create a temp file and then remove it.
Removing does not work and you can see in logs:
```
[2016-05-18 11:03:24,914][WARN ][org.elasticsearch.cloud.azure.blobstore] [azure] can not remove [tests-ilmRPJ8URU-sh18yj38O6g/] in container {elasticsearch-snapshots}: The specified blob does not exist.
```
This fix deals with that. It now lists all the files in flattened mode and removes the server and container name from the full URL.
As an example, when you are removing a blob which full name is `https://dpi24329.blob.core.windows.net/elasticsearch-snapshots/bar/test` you need to actually call Azure SDK with `bar/test` as the path, `elasticsearch-snapshots` is the container.
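The commit message above says the fix splits the URI after the third `/`; an illustrative equivalent of that path handling:
```java
import java.net.URI;

class AzureBlobPaths {
    // For https://dpi24329.blob.core.windows.net/elasticsearch-snapshots/bar/test
    // getPath() returns "/elasticsearch-snapshots/bar/test"; dropping the
    // container segment leaves "bar/test", the path the Azure SDK expects.
    static String blobNameFromUri(URI uri) {
        String path = uri.getPath();
        int containerEnd = path.indexOf('/', 1); // skip the leading slash
        return path.substring(containerEnd + 1);
    }
}
```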
To run the test, you need to pass some parameters: `-Dtests.thirdparty=true -Dtests.config=/path/to/elasticsearch.yml`
Where `elasticsearch.yml` contains something like:
```
cloud.azure.storage.default.account: account
cloud.azure.storage.default.key: key
```
Related to #16472
Closes#18436.
This change does the following:
- Queries that are currently unsupported such as prefix queries on numeric
fields or term queries on geo fields now throw an error rather than returning
a query that does not match anything.
- Fuzzy queries on numeric, date and ip fields are now unsupported: they used
to create range queries, we now expect users to use range queries directly.
Fuzzy, regexp and prefix queries are now only supported on text/keyword
fields (including `_all`).
- The `_uid` and `_id` fields do not support prefix or range queries anymore as
it would prevent us from storing them more efficiently in the future, eg. by
using a binary encoding.
Note that it is still possible to ignore these errors by using the `lenient`
option of the `match` or `query_string` queries.
* master: (904 commits)
Removes unused methods in the o/e/common/Strings class.
Add note regarding thread stack size on Windows
painless: restore accidentally removed test
Documented fuzzy_transpositions in match query
Add not-null precondition check in BulkRequest
Build: Make run task you full zip distribution
Build: More pom generation improvements
Add test for wrong array index
Take return type from "after" field.
painless: build descriptor of array and field load/store in code; fix array index to adapt type not DEF
Build: Add developer info to generated pom files
painless: improve exception stacktraces
painless: Rename the dynamic call site factory to DefBootstrap and make the inner class very short (PIC = Polymorphic Inline Cache)
Remove dead code.
Avoid race while retiring executors
Allow only a single extension for a scripting engine
Adding REST tests to ensure key_as_string behavior stays consistent
[test] Set logging to 11 on reindex test
[TEST] increase logger level until we know what is going on
Don't allow `fuzziness` for `multi_match` types cross_fields, phrase and phrase_prefix
...
Previously multiple extensions could be provided; however, this can lead
to confusion with on-disk scripts (i.e., "foo.js" and "foo.javascript")
having different content. Only a single extension is now supported.
The only language currently supporting multiple extensions was the
Javascript engine ("js" and "javascript"). It now only supports the
`.js` extension.
Relates to #10598
This removes all the mentions of the sandbox from the script engine
services and permissions model. This means that the following settings
are no longer supported:
```yaml
script.inline: sandbox
script.stored: sandbox
```
Instead, only a `true` or `false` value can be specified.
Since this would otherwise break the default-allow parameter for
languages like expressions, painless, and mustache, all script engines
have been updated to have individual settings, for instance:
```yaml
script.engine.groovy.inline: true
```
would enable all inline scripts for Groovy (they can still be
overridden on a per-operation basis).
Expressions, Painless, and Mustache all default to `true` for inline,
file, and stored scripts to preserve the old scripting behavior.
Resolves#17114
I am unable to set ec2 discovery tags because this setting was
accidentally omitted from the register settings list in
Ec2DiscoveryPlugin.java. I get this:
```
java.lang.IllegalArgumentException: unknown setting [discovery.ec2.tag.project]
```
This removes dead/duplicate code and makes the `_index` field not configurable.
(Configuration used to just be ignored; now we throw an exception if any
is provided.)
Most of the current implementations of BaseNodesResponse (plural Nodes) ignore FailedNodeExceptions.
- This adds a helper function to do the grouping to TransportNodesAction
- Requires a non-null array of FailedNodeExceptions within the BaseNodesResponse constructor
- Reads/writes the array to output
- Also adds StreamInput and StreamOutput methods for generically reading and writing arrays (sketched below)
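A sketch of what the generic array handling might look like (method names are illustrative, not necessarily the ones added by this commit):
```java
import java.io.IOException;
import java.util.function.IntFunction;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

class ArrayStreams {
    // Length-prefixed writing: the element count first, then each element.
    static <T extends Writeable> void writeArray(StreamOutput out, T[] array) throws IOException {
        out.writeVInt(array.length);
        for (T value : array) {
            value.writeTo(out);
        }
    }

    // Symmetric reading: the caller supplies how to read one element and
    // how to allocate the typed array.
    static <T> T[] readArray(StreamInput in, Writeable.Reader<T> reader,
                             IntFunction<T[]> arraySupplier) throws IOException {
        int length = in.readVInt();
        T[] values = arraySupplier.apply(length);
        for (int i = 0; i < length; i++) {
            values[i] = reader.read(in);
        }
        return values;
    }
}
```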
QueryBuilder has generics, but those are never used: all call sites use
`QueryBuilder<?>`. Only `AbstractQueryBuilder` needs generics so that the base
class can contain a default implementation for setters that returns `this`.
This commit introduces a handshake when initiating a light
connection. During this handshake, node information, cluster name, and
version are received from the target node of the connection. This
information can be used to immediately validate that the target node is
a member of the same cluster, and used to set the version on the
stream. This will allow us to extend APIs that are used during initial
cluster recovery without a major version change.
Relates #15971
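Illustratively (this is a sketch of the validation idea, not the actual transport code):
```java
class HandshakeValidation {
    // During the handshake the initiator learns the remote cluster name and
    // version; a mismatched cluster name fails the connection before any
    // real request is sent over it.
    static void validateClusterName(String remoteClusterName, String localClusterName) {
        if (localClusterName.equals(remoteClusterName) == false) {
            throw new IllegalStateException("handshake failed: remote node is in cluster ["
                    + remoteClusterName + "] but expected [" + localClusterName + "]");
        }
        // on success the remote version is also set on the stream so messages
        // are serialized in a form the other side can read
    }
}
```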
This commit removes the method Strings#splitStringToArray and replaces
the call sites with invocations to String#split. There are only two
explanations for the existence of this method. The first is that
String#split is slightly tricky in that it accepts a regular expression
rather than a character to split on. This means that if s is a string,
s.split(".") does not split on the character '.', but rather splits on
the regular expression '.' which splits on every character (of course,
this is easily fixed by invoking s.split("\\.") instead). The second
possible explanation is that (again) String#split accepts a regular
expression. This means that there could be a performance concern
compared to just splitting on a single character. However, it turns out
that String#split has a fast path for the case of splitting on a single
character and microbenchmarks show that String#split has 1.5x--2x the
throughput of Strings#splitStringToArray. There is a slight behavior
difference between Strings#splitStringToArray and String#split: namely,
the former would return an empty array when the input string was null or
empty, while String#split will just NPE at the call site on null and
return a one-element array containing the empty string when the input
string is empty. There was only one place relying on this behavior
and the call site has been modified accordingly.
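For example, the regex subtlety called out above:
```java
// "." is a regular expression matching every character, so this splits
// between every character; the all-empty fields are trimmed, leaving
// an empty array:
String[] a = "a.b.c".split(".");   // => {}

// Escaping the dot splits on the literal character:
String[] b = "a.b.c".split("\\."); // => {"a", "b", "c"}

// And the behavior difference at the edge cases:
String[] c = "".split(",");        // => {""} (one-element array)
// ((String) null).split(",")      // NullPointerException
```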
For now we support `_gce_` only if discovery is set to `gce` and all information about GCE is provided (project_id and zone).
But in some cases, people would like to only bind to `_gce_` on a single node (without any elasticsearch cluster).
They could access the machine then from other machines running inside the same project.
This commit adds a new GceMetadataService which is started as soon as the plugin starts so GceNameResolver can use it to resolve `_gce_`.
Closes#15724.
When working on #18008 I found while reading the code that we don't filter anymore `repositories.s3.access_key` and `repositories.s3.secret_key`.
Also fixed a typo in REST test
Lucene allows creating an ICUTokenizer with a special config argument
enabling the customization of the rule-based break iterator by providing
custom rules files.
This commit enables this feature. Users can provide a list of RBBI rule
files to the ICU tokenizer.
closes#13146
Now that the current uses of magical camelCase support have been
deprecated, we can remove these in master (sans remaining issues like
BulkRequest). This change removes camel case support from ParseField,
query types, analysis, and settings lookup.
see #8988
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
Closes#17835
Defaults to `true`.
If anyone is having trouble with this option, they can disable it with `cloud.aws.s3.throttle_retries: false` in the `elasticsearch.yml` file.
* Moving from JSON.org to Jackson for request marshallers.
* The Java SDK now supports retry throttling to limit the rate of retries during periods of reduced availability. This throttling behavior can be enabled via ClientConfiguration or via the system property "-Dcom.amazonaws.sdk.enableThrottledRetry".
* Fixed String case conversion issues when running with non English locales.
* AWS SDK for Java introduces a new dynamic endpoint system that can compute endpoints for services in new regions.
* Introducing a new AWS region, ap-northeast-2.
* Added a new metric, HttpSocketReadTime, that records socket read latency. You can enable this metric by adding enableHttpSocketReadMetric to the system property com.amazonaws.sdk.enableDefaultMetrics. For more information, see [Enabling Metrics with the AWS SDK for Java](https://java.awsblog.com/post/Tx3C0RV4NRRBKTG/Enabling-Metrics-with-the-AWS-SDK-for-Java).
* New Client Execution timeout feature to set a limit spent across retries, backoffs, unmarshalling, etc. This new timeout can be specified at the client level or per request.
Also included in this release is the ability to specify the existing HTTP Request timeout per request rather than just per client.
* Added support for RequesterPays for all operations.
* Ignore the 'Connection' header when generating S3 responses.
* Allow users to generate an AmazonS3URI from a string without using URL encoding.
* Fixed issue that prevented creating buckets when using a client configured for the s3-external-1 endpoint.
* Amazon S3 bucket lifecycle configuration supports two new features: the removal of expired object delete markers and an action to abort incomplete multipart uploads.
* Allow TransferManagerConfiguration to accept integer values for multipart upload threshold.
* Copy the list of ETags before sorting https://github.com/aws/aws-sdk-java/pull/589.
* Option to disable chunked encoding https://github.com/aws/aws-sdk-java/pull/586.
* Adding retry on InternalErrors in CompleteMultipartUpload operation. https://github.com/aws/aws-sdk-java/issues/538
* Deprecated two APIs : AmazonS3#changeObjectStorageClass and AmazonS3#setObjectRedirectLocation.
* Added support for the aws-exec-read canned ACL. Owner gets FULL_CONTROL. Amazon EC2 gets READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3.
* Added support for referencing security groups in peered Virtual Private Clouds (VPCs). For more information see the service announcement at https://aws.amazon.com/about-aws/whats-new/2016/03/announcing-support-for-security-group-references-in-a-peered-vpc/ .
* Fixed a bug in AWS SDK for Java - Amazon EC2 module that returns NPE for dry run requests.
* Regenerated client with new implementation of code generator.
* This feature enables support for DNS resolution of public hostnames to private IP addresses when queried over ClassicLink. Additionally, you can now access private hosted zones associated with your VPC from a linked EC2-Classic instance. ClassicLink DNS support makes it easier for EC2-Classic instances to communicate with VPC resources using public DNS hostnames.
* You can now use Network Address Translation (NAT) Gateway, a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an AWS Virtual Private Cloud (VPC). Previously, you needed to launch a NAT instance to enable NAT for instances in a private subnet. Amazon VPC NAT Gateway is available in the US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions. To learn more about Amazon VPC NAT, see [New - Managed NAT (Network Address Translation) Gateway for AWS](https://aws.amazon.com/blogs/aws/new-managed-nat-network-address-translation-gateway-for-aws/)
* A default read timeout is now applied when querying data from EC2 metadata service.
This makes all numeric fields including `date`, `ip` and `token_count` use
points instead of the inverted index as a lookup structure. This is expected
to perform worse for exact queries, but faster for range queries. It also
requires less storage.
Notes about how the change works:
- Numeric mappers have been split into a legacy version that is essentially
the current mapper, and a new version that uses points, eg.
LegacyDateFieldMapper and DateFieldMapper.
- Since new and old fields have the same names, the decision about which one
to use is made based on the index creation version.
- If you try to force using a legacy field on a new index or a field that uses
points on an old index, you will get an exception.
- IP addresses now support IPv6 via Lucene's InetAddressPoint and store them
in SORTED_SET doc values using the same encoding (fixed length of 16 bytes
and sortable).
- The internal MappedFieldType that is stored by the new mappers does not have
any of the points-related properties set. Instead, it keeps setting the index
options when parsing the `index` property of mappings and does
`if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`
when parsing documents.
Known issues that won't fix:
- You can't use numeric fields in significant terms aggregations anymore since
this requires document frequencies, which points do not record.
- Term queries on numeric fields will now return constant scores instead of
giving better scores to the rare values.
Known issues that we could work around (in follow-up PRs, this one is too large
already):
- Range queries on `ip` addresses only work if both the lower and upper bounds
are inclusive (exclusive bounds are not exposed in Lucene). We could either
decide to implement it, or drop range support entirely and tell users to
query subnets using the CIDR notation instead.
- Since IP addresses now use a different representation for doc values,
aggregations will fail when running a terms aggregation on an ip field on a
list of indices that contains both pre-5.0 and 5.0 indices.
- The ip range aggregation does not work on the new ip field. We need to either
implement range aggs for SORTED_SET doc values or drop support for ip ranges
and tell users to use filters instead. #17700
Closes#16751
Closes#17007
Closes#11513
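For illustration, this is what the points side looks like at the Lucene level (Lucene 6 `LongPoint` API):
```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.search.Query;

class PointsExample {
    static void example() {
        // Indexing: the value goes into the points (BKD tree) structure
        // rather than into the inverted index as terms.
        Document doc = new Document();
        doc.add(new LongPoint("timestamp", 1462780126000L));

        // Querying: range and exact queries are answered from the points index.
        Query range = LongPoint.newRangeQuery("timestamp", 1462780000000L, 1462790000000L);
        Query exact = LongPoint.newExactQuery("timestamp", 1462780126000L);
    }
}
```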
When it comes to query parsing, either a field is tokenized, in which case it
goes through analysis with its search_analyzer, or it is not tokenized and the
raw string should be passed to termQuery(). Since numeric fields are not
tokenized yet still declare a search analyzer, values currently go through
analysis twice...
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueForSearch`. `valueForSearch` was used to post-process
values that come from stored fields (eg. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source, but only in the case of GET calls: it would
not be called when performing source filtering on search requests.
`valueForSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
* upgrades numerics to new Point format
* updates geo api changes
* adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
* cuts over to ES GeoHashUtils
CBOR is natively supported in Elasticsearch and allows for byte arrays.
This means, that by using CBOR the user can prevent base64 conversions
for the data being sent back and forth.
This PR adds support for extracting data from a byte array in addition to
a string. This also required adding a ByteArrayValueSource class.
We have both `Settings.settingsBuilder` and `Settings.builder` that do exactly
the same thing, so we should keep only one. I kept `Settings.builder` since it
has my preference and it is also the one that we use in examples of the Java API.
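Usage stays the same, only through the single remaining entry point:
```java
import org.elasticsearch.common.settings.Settings;

Settings settings = Settings.builder()
        .put("cluster.name", "my-cluster")
        .put("node.name", "node-1")
        .build();
```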
This PR just adds a new test where we check that forcing a value in the JSON document actually works as expected:
```json
{
"file": {
"_content": "BASE64"
"_name": "12-240.pdf",
"_language": "en",
"_content_type": "pdf"
}
}
```
Note that we don't support forcing all values. So sending:
```json
{
"file": {
"_content": "BASE64"
"_name": "12-240.pdf",
"_title": "12-240.pdf",
"_keywords": "Div42 Src580 LGE Mechtech",
"_language": "en",
"_content_type": "pdf"
}
}
```
Will have absolutely no effect on fields `title` and `keywords`.
Note that when `_language` is set, it only works if `index.mapping.attachment.detect_language` is set to `true`.
Related to https://discuss.elastic.co/t/mapper-attachments/46615/4
This removes the inconsistent output of IP addresses. The format was parsing-unfriendly and made it hard
to reason about API responses, such as those from _nodes.
With this change in place, it will never print the hostname as part of the default format, which has the
added benefit that it can be used consistently for URIs, which was not the case when the hostname might
appear at the front with "hostname/ip:port".
`text` fields will have fielddata disabled by default. Fielddata can still be
enabled on an existing index by setting `fielddata=true` in the mappings.
Node roles are now serialized as well; they are not part of the node attributes anymore. DiscoveryNodeService takes care of dividing settings into attributes and roles. DiscoveryNode now always requires attributes and roles to be passed in separately.
This change moves placeholder replacement to a pkg private class for
settings. It also adds a null check when calling replacement, as
settings objects can still contain null values, because we only prohibit
nulls on file loading. Finally, this cleans up file and stream loading a
bit to not have unnecessary exception wrapping.
We can be better at checking `buffer_size` and `chunk_size` for S3 repositories.
For example, we know that:
* `buffer_size` should be more than `5mb`
* `chunk_size` should be no more than `5tb`
* `buffer_size` should be lower than `chunk_size`
Otherwise, setting `buffer_size` is useless (see the sketch after this list).
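A sketch of those checks (names illustrative), assuming Elasticsearch's `ByteSizeValue`:
```java
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

class S3RepositorySettingsChecks {
    private static final long MIN_BUFFER = new ByteSizeValue(5, ByteSizeUnit.MB).getBytes();
    private static final long MAX_CHUNK = new ByteSizeValue(5, ByteSizeUnit.TB).getBytes();

    static void validate(ByteSizeValue bufferSize, ByteSizeValue chunkSize) {
        if (bufferSize.getBytes() < MIN_BUFFER) {
            throw new IllegalArgumentException("buffer_size must be at least 5mb");
        }
        if (chunkSize.getBytes() > MAX_CHUNK) {
            throw new IllegalArgumentException("chunk_size may not exceed 5tb");
        }
        if (bufferSize.getBytes() >= chunkSize.getBytes()) {
            // a buffer larger than the chunk can never be filled, so it is useless
            throw new IllegalArgumentException("buffer_size must be smaller than chunk_size");
        }
    }
}
```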
For the record:
`chunk_size` is a Snapshot setting whatever the implementation is.
`buffer_size` is an S3 implementation setting.
Let's say that you are snapshotting a 500mb file. If you set `chunk_size` to `200mb`, then the Snapshot service will call the S3 repository to snapshot 3 files with the following sizes:
* `200mb`
* `200mb`
* `100mb`
If you set `buffer_size` to `100mb` (AWS maximum size recommendation), the first file of `200mb` will be uploaded on S3 using the multipart feature in 2 chunks and the workflow is basically the following:
* create the multipart request and get back an `id` from AWS S3 platform
* upload part1: `100mb`
* upload part2: `100mb`
* "commit" the full upload using the `id`.
Closes#17244.