Throw an exception if a copy_to is within a multi field
`copy_to` within a multi field has been ignored since 2.0, see #10802.
Instead of just ignoring it, we should throw an exception if this is found in the mapping when a mapping is added. For already existing indices we should at least log a warning.
We remove the `copy_to` in any case.
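For illustration, the intended check could look roughly like this (the map-based mapping representation and the warning mechanism are simplifications, not the actual mapper code):
```java
import java.util.Map;

// Simplified sketch: reject copy_to inside a multi field for new mappings,
// only warn (and strip it) for mappings of already existing indices.
final class MultiFieldCopyToCheck {
    static void check(String field, Map<String, Map<String, Object>> subFields, boolean newMapping) {
        for (Map.Entry<String, Map<String, Object>> sub : subFields.entrySet()) {
            if (sub.getValue().containsKey("copy_to")) {
                if (newMapping) {
                    throw new IllegalArgumentException("copy_to in multi fields is not allowed. Found it in field ["
                            + field + "." + sub.getKey() + "]");
                }
                System.err.println("warning: ignoring copy_to in multi field [" + field + "." + sub.getKey() + "]");
                sub.getValue().remove("copy_to");
            }
        }
    }
}
```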
related to #14946
This commit adds the infrastructure to make updatable settings resettable and changes the application of updates to be transactional. This means setting updates are either applied or not: if the application fails, all values are rejected.
This initial commit converts all dynamic cluster settings to make use of the new infrastructure.
All cluster-level dynamic settings are now resettable to their defaults or to the node-level settings.
The infrastructure also allows listing default values and descriptions, which is not fully implemented yet.
Values can be reset using a list of keys or simple regular expressions. So far this has only been implemented on the Java layer. For instance, to reset all recovery settings to their defaults a user can just specify `indices.recovery.*`.
This commit also adds strict settings validation: if a setting is unknown or can not be applied, the entire settings update request will fail.
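To illustrate the transactional behaviour described above (class and method names here are hypothetical, not the actual settings infrastructure):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: every update is validated first; only if all values are valid
// are the appliers run, so a settings update is either applied or rejected as a whole.
final class TransactionalSettingsApplier {
    private final Map<String, Consumer<String>> validators = new HashMap<>();
    private final Map<String, Consumer<String>> appliers = new HashMap<>();

    void register(String key, Consumer<String> validator, Consumer<String> applier) {
        validators.put(key, validator);
        appliers.put(key, applier);
    }

    void apply(Map<String, String> updates) {
        for (Map.Entry<String, String> update : updates.entrySet()) {
            Consumer<String> validator = validators.get(update.getKey());
            if (validator == null) {
                throw new IllegalArgumentException("unknown setting [" + update.getKey() + "]");
            }
            validator.accept(update.getValue()); // may throw and reject the entire request
        }
        for (Map.Entry<String, String> update : updates.entrySet()) {
            appliers.get(update.getKey()).accept(update.getValue()); // only reached if everything validated
        }
    }
}
```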
When using S3 or EC2, it was possible to use a proxy to access the EC2 or S3 API, but there was no way to set a username and password for the proxy.
This commit adds support for that. Also, to make all of this consistent, the proxy settings for both plugins have been renamed:
* from `cloud.aws.proxy_host` to `cloud.aws.proxy.host`
* from `cloud.aws.ec2.proxy_host` to `cloud.aws.ec2.proxy.host`
* from `cloud.aws.s3.proxy_host` to `cloud.aws.s3.proxy.host`
* from `cloud.aws.proxy_port` to `cloud.aws.proxy.port`
* from `cloud.aws.ec2.proxy_port` to `cloud.aws.ec2.proxy.port`
* from `cloud.aws.s3.proxy_port` to `cloud.aws.s3.proxy.port`
New settings are `proxy.username` and `proxy.password`.
```yml
cloud:
    aws:
        protocol: https
        proxy:
            host: proxy1.company.com
            port: 8083
            username: myself
            password: theBestPasswordEver!
```
You can also set different proxies for `ec2` and `s3`:
```yml
cloud:
    aws:
        s3:
            proxy:
                host: proxy1.company.com
                port: 8083
                username: myself1
                password: theBestPasswordEver1!
        ec2:
            proxy:
                host: proxy2.company.com
                port: 8083
                username: myself2
                password: theBestPasswordEver2!
```
Note that `password` is filtered with `SettingsFilter`.
We also fix a potential issue in the S3 repository: we were supposed to accept the key/secret set either under `cloud.aws` or `cloud.aws.s3`, but the actual code never implemented that.
It was:
```java
account = settings.get("cloud.aws.access_key");
key = settings.get("cloud.aws.secret_key");
```
We replaced it with:
```java
String account = settings.get(CLOUD_S3.KEY, settings.get(CLOUD_AWS.KEY));
String key = settings.get(CLOUD_S3.SECRET, settings.get(CLOUD_AWS.SECRET));
```
Also, we extract all S3 settings in `AwsS3Service`, as is already the case for the `AwsEc2Service` class.
Closes #15268.
Since 2.2 we run all scripts with minimal privileges, similar to applets in your browser.
The problem is that they have unrestricted access to other things they can muck with (ES, JDK, whatever), so they can still easily do tons of bad things.
This PR restricts what classes scripts can load via the classloader mechanism, to make life more difficult.
The "standard" list was populated from the old list used for the groovy sandbox, though a few more classes were needed for tests to pass (java.lang.String, java.util.Iterator, nothing scary there).
Additionally, each scripting engine typically needs permissions to some runtime stuff.
That is the downside of this "good old classloader" approach, but I like the transparency and simplicity, and I don't want to waste my time with any feature provided by the engine itself for this; I don't trust them.
This is not perfect and the engines are not perfect, but you gotta start somewhere. For expert users that need to tweak the permissions, we already support that via the standard Java security configuration files; the specification is simple and supports wildcards, etc. (though we do not use them ourselves).
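A minimal sketch of the general idea (the class and the whitelist contents below are illustrative, not the actual list shipped with the scripting engines):
```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative only: a classloader that refuses to load anything outside a fixed whitelist,
// so scripts cannot reach arbitrary ES or JDK classes through the classloader mechanism.
final class WhitelistClassLoader extends ClassLoader {
    private final Set<String> allowed;

    WhitelistClassLoader(ClassLoader parent, Set<String> allowed) {
        super(parent);
        this.allowed = allowed;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (allowed.contains(name) == false) {
            throw new ClassNotFoundException("class is not allowed for scripts: " + name);
        }
        return super.loadClass(name, resolve);
    }

    static WhitelistClassLoader standard(ClassLoader parent) {
        return new WhitelistClassLoader(parent,
                new HashSet<>(Arrays.asList("java.lang.String", "java.util.Iterator")));
    }
}
```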
Azure team released new versions of their Java SDK.
According to https://github.com/Azure/azure-sdk-for-java/wiki/Azure-SDK-for-Java-Features, it comes with 2 versions.
We should at least update to `0.9.0` of V1 but also consider moving to the new APIs (V2).
This commit first updates to latest API V1.
```xml
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-svc-mgmt-compute</artifactId>
    <version>0.9.0</version>
</dependency>
```
Closes #15209
Failures to merge a mapping can either come as a MergeMappingException if they
come from Mapper.merge or as an IllegalArgumentException if they come from
FieldTypeLookup.checkCompatibility. I think we should settle on one: this pull
request replaces all usage of MergeMappingException with
IllegalArgumentException.
The PipelineTests tried to verify that the configured map/list in the set processor wasn't modified while documents were ingested. Creating a pipeline programmatically created more noise than the test needed. The new tests in IngestDocumentTests have the same goal, but are much smaller and clearer by testing directly against IngestDocument.
This commit removes and now forbids all uses of the type-unsafe empty
Collections fields Collections#EMPTY_LIST, Collections#EMPTY_MAP, and
Collections#EMPTY_SET. The type-safe methods Collections#emptyList,
Collections#emptyMap, and Collections#emptySet should be used instead.
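For reference, the difference looks like this (a standalone example, not code from this change):
```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

class EmptyCollectionsExample {
    @SuppressWarnings("unchecked")
    static void example() {
        // Forbidden: raw fields that need unchecked conversions to be assigned to typed variables.
        List<String> rawList = Collections.EMPTY_LIST;

        // Preferred: the type-safe factory methods infer the element types.
        List<String> list = Collections.emptyList();
        Map<String, Integer> map = Collections.emptyMap();
        Set<String> set = Collections.emptySet();
    }
}
```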
This change attempts to simplify the gradle tasks for precommit. One major part of that is using a "less groovy style", as well as being more consistent about how tasks are created and where they are configured. It also allows the things creating the tasks to set up inter-task dependencies, instead of assuming them (i.e. decoupling from tasks elsewhere in the build).
The rename processor now checks whether the field to rename exists and throws an exception if it doesn't. It also checks that the field to rename to doesn't exist yet, and throws an exception otherwise. We also make sure that the rename operation is atomic; otherwise things may break between the remove and the set and we'd leave the document in an inconsistent state.
Note that the requirement for the new field name to not exist simplifies the use case of e.g. `{ "rename" : { "list.1": "list.2"} }`: such a rename wouldn't be accepted if `list` is actually a list, given that either `list.2` already exists or the index is out of bounds for the existing list. If one really wants to replace an existing field, that field needs to be removed first through the remove processor, and then rename can be used.
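A simplified sketch of the atomic rename (a flat map stands in for the document here; in the real case the set step can fail for nested paths, e.g. a list index out of bounds):
```java
import java.util.Map;

final class RenameSketch {
    // Remove the old field only after all checks pass, and restore it if setting the
    // new field fails, so the document is never left in an inconsistent state.
    static void rename(Map<String, Object> document, String from, String to) {
        if (document.containsKey(from) == false) {
            throw new IllegalArgumentException("field [" + from + "] doesn't exist");
        }
        if (document.containsKey(to)) {
            throw new IllegalArgumentException("field [" + to + "] already exists");
        }
        Object value = document.remove(from);
        try {
            document.put(to, value);
        } catch (RuntimeException e) {
            document.put(from, value); // put the removed value back before propagating the failure
            throw e;
        }
    }
}
```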
1) It no longer extends from Closeable.
2) Removed the config directory setter. Implementations that relied on it now get the location of the config dir via their constructors.
Validation is now done as part of the distance setter method and tested in GeoDistanceQueryBuilderTests. Fixed GeoDistanceTests to adapt to the new validation.
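For illustration, validating in the setter rather than at execution time looks roughly like this (a hypothetical builder, not the actual GeoDistanceQueryBuilder code):
```java
// Hypothetical builder: an invalid distance fails fast when it is set.
final class DistanceQuerySketch {
    private double distanceInMeters;

    DistanceQuerySketch distance(double distanceInMeters) {
        if (Double.isNaN(distanceInMeters) || distanceInMeters <= 0) {
            throw new IllegalArgumentException("distance must be greater than zero");
        }
        this.distanceInMeters = distanceInMeters;
        return this;
    }
}
```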
Closes #15135
When reading through #getFieldValue and #hasField, if a list is encountered, the next element in the path is treated as the index of the item that the path points to (e.g. `list.0.key`). If the index is not a number or is out of bounds, an exception gets thrown.
Added an #appendFieldValue method that has the same behaviour as setFieldValue, but when a list is the last element in the path, instead of replacing the whole list it simply adds a new element to the existing list. This method is currently unused; we have to decide whether the set processor or a new processor should use it.
A few other changes made:
- Renamed hasFieldValue to hasField, as this method is not really about values but only about keys. It returns true if a key is there even when its value is null, and false only when the field is not there at all.
- Changed the null semantics in getFieldValue: null gets returned only when it was an actual value in the source; an exception is thrown when trying to access a non-existing field, so that null != field not present.
- Made remove stricter about non-existing fields: it throws an error when trying to remove a non-existing field. This is more consistent with the other methods in IngestDocument, which are strict about fields that are not present.
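To illustrate the list handling when reading a path (a simplified stand-in for the traversal, not the actual IngestDocument implementation):
```java
import java.util.List;
import java.util.Map;

// Simplified stand-in: walks a dotted path, treating the next path element as a
// list index whenever the current value is a List, e.g. "list.0.key".
final class PathReaderSketch {
    static Object getFieldValue(Object source, String path) {
        Object current = source;
        for (String element : path.split("\\.")) {
            if (current instanceof Map) {
                Map<?, ?> map = (Map<?, ?>) current;
                if (map.containsKey(element) == false) {
                    throw new IllegalArgumentException("field [" + element + "] not present");
                }
                current = map.get(element);
            } else if (current instanceof List) {
                List<?> list = (List<?>) current;
                int index;
                try {
                    index = Integer.parseInt(element);
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("[" + element + "] is not an integer, cannot be used as an index");
                }
                if (index < 0 || index >= list.size()) {
                    throw new IllegalArgumentException("[" + index + "] is out of bounds for list of size [" + list.size() + "]");
                }
                current = list.get(index);
            } else {
                throw new IllegalArgumentException("cannot resolve [" + element + "] in a non-container value");
            }
        }
        return current; // may be null if the source explicitly contained a null value
    }
}
```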
Relates to #14324
Do not load fields from `_source` when using the `fields` option.
Non-stored (non-existing) fields are ignored by the fields visitor when using the `fields` option.
Fixes #10783
Support * wildcard to retrieve stored fields when using the `fields` option.
Supported pattern styles are "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
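Those pattern styles amount to a simple prefix/suffix/contains check, roughly like this (a standalone sketch, not the actual field-name matching code):
```java
final class WildcardMatchSketch {
    // Supports exactly the styles above: "xxx*", "*xxx", "*xxx*" and "xxx*yyy".
    static boolean simpleMatch(String pattern, String value) {
        int star = pattern.indexOf('*');
        if (star == -1) {
            return pattern.equals(value);                                      // no wildcard
        }
        if (star == 0 && pattern.length() > 1 && pattern.indexOf('*', 1) == pattern.length() - 1) {
            return value.contains(pattern.substring(1, pattern.length() - 1)); // "*xxx*"
        }
        if (star == 0) {
            return value.endsWith(pattern.substring(1));                       // "*xxx"
        }
        if (star == pattern.length() - 1) {
            return value.startsWith(pattern.substring(0, star));               // "xxx*"
        }
        String prefix = pattern.substring(0, star);                            // "xxx*yyy"
        String suffix = pattern.substring(star + 1);
        return value.length() >= prefix.length() + suffix.length()
                && value.startsWith(prefix) && value.endsWith(suffix);
    }
}
```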
It's enough to test the content type for what we are testing.
Currently tests are flaky if the charset is detected as e.g. windows-1252 vs iso-8859-1 and so on; in fact, they fail on Windows 100% of the time.
We are not trying to test charset detection heuristics (which might differ even due to newlines in tests or other things). If we want to test that, we should do it separately.
When importing dangling indices on a single node that is both data and master eligible, the async dangling index call can still be in-flight when the cluster is checked for green / yellow. Adding a dedicated master node and a data-only node that does the importing fixes this issue, just like we do in OldIndexBackwardsCompatibilityIT.
The client in PipelineStore is now provided via a Guice provider.
Processor and Factory throw Exception instead of IOException.
Replaced PipelineExecutionService.Listener with ActionListener.