This ensures the codebase URL matches the permission grant in the case of symlinks or other shenanigans (see the matching toRealPath in Security.java).
This is best effort: if we really want to support symlinks in any way, we need e.g. QA or Vagrant tests that configure a bunch of symlinks and verify the behavior in Jenkins. This should be easier to do with Gradle, as we can just create a symlinked home if we want.
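For illustration, the resolution step might look like this minimal sketch (the helper class is hypothetical; the real grant logic lives in Security.java):
```java
import java.io.IOException;
import java.nio.file.Path;

// Hypothetical helper: resolve a codebase path to its real location before
// building the URL used in the permission grant, so a path reached through a
// symlink still matches what the JVM sees for the loaded code.
public final class CodebaseResolver {
    public static String resolveCodebase(Path codebase) throws IOException {
        // toRealPath() follows symlinks and normalizes the path.
        return codebase.toRealPath().toUri().toString();
    }

    private CodebaseResolver() {}
}
```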
Today we only handle the `ExecutionCancelledException` correctly if it comes from local execution. Yet, it can also come from a remote node and should be handled identically.
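A hedged sketch of what identical handling could look like (the stand-in exception type and the cause-chain walk are assumptions, since remote failures typically arrive wrapped):
```java
// Stand-in for the real exception type referenced above.
class ExecutionCancelledException extends RuntimeException {}

class CancellationCheck {
    // Walk the cause chain so a remote ExecutionCancelledException, which may
    // arrive wrapped in a transport exception, is recognized like a local one.
    static boolean isCancellation(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof ExecutionCancelledException) {
                return true;
            }
        }
        return false;
    }
}
```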
This commit restores the chunk size of 512kb lost in a previous but unreleased
refactoring. At the same time it removes the configurability of:
* `indices.recovery.file_chunk_size` - now fixed to 512kb
* `indices.recovery.translog_ops` - removed without replacement
* `indices.recovery.translog_size` - now fixed to 512kb
* `indices.recovery.compress` - file chunks are not compressed, since Lucene already compresses the data, but translog operations are.
The compress option is gone entirely and compression is used where it makes sense. When sending index files we don't compress, as we rely on Lucene's compression for stored fields etc.
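In sketch form, the now-fixed values might read like this (constant and class names are illustrative, not the actual RecoverySettings fields):
```java
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

// Illustrative: both values are hardcoded now instead of being read from
// indices.recovery.file_chunk_size and indices.recovery.translog_size.
public final class RecoveryChunkSizes {
    public static final ByteSizeValue FILE_CHUNK_SIZE = new ByteSizeValue(512, ByteSizeUnit.KB);
    public static final ByteSizeValue TRANSLOG_BATCH_SIZE = new ByteSizeValue(512, ByteSizeUnit.KB);

    private RecoveryChunkSizes() {}
}
```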
Relates to #15161
This commit cherry picks some infrastructure changes from the `feature/seq_no` branch to make merging from master easier.
More explicitly, IndexShard currently has prepareIndex and prepareDelete methods that are called both on the primary and on the replica, with a different origin parameter. Instead, this commit creates explicit prepare*OnPrimary and prepare*OnReplica methods. This has the extra added value of not expecting the caller to use an Engine enum.
Also, the commit adds some code reuse between TransportIndexAction and TransportDeleteAction and their TransportShardBulkAction counterparts.
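A self-contained illustration of the pattern (all types are simplified stand-ins, not the actual IndexShard API):
```java
// Simplified stand-ins for Engine.Operation.Origin and a prepared operation.
enum Origin { PRIMARY, REPLICA }

class PreparedIndex {
    final String source;
    final Origin origin;

    PreparedIndex(String source, Origin origin) {
        this.source = source;
        this.origin = origin;
    }
}

class ShardSketch {
    // Before: one method, and every caller had to pass the engine's enum.
    private PreparedIndex prepareIndex(String source, Origin origin) {
        return new PreparedIndex(source, origin);
    }

    // After: explicit per-role entry points keep the enum an implementation detail.
    PreparedIndex prepareIndexOnPrimary(String source) {
        return prepareIndex(source, Origin.PRIMARY);
    }

    PreparedIndex prepareIndexOnReplica(String source) {
        return prepareIndex(source, Origin.REPLICA);
    }
}
```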
Closes #15282
The tribe node creates one local client node for each cluster it
connects to. Refactorings in #13383 broke this so that each local client
node now tries to load the full elasticsearch.yml that the real tribe
node uses.
This change fixes the problem by adding a TribeClientNode which is a
subclass of Node. The Environment the node uses is now passed in (in
place of Settings), and the TribeClientNode simply does not use
InternalSettingsPreparer.prepareEnvironment.
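Sketched in code, the shape of the change is roughly this (the Node constructor signature here is an assumption):
```java
import org.elasticsearch.env.Environment;
import org.elasticsearch.node.Node;

// The subclass receives an already-prepared Environment and therefore never
// runs InternalSettingsPreparer.prepareEnvironment itself.
class TribeClientNode extends Node {
    TribeClientNode(Environment environment) {
        super(environment); // assumed constructor shape
    }
}
```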
The tests around tribe nodes are not great. The existing tests pass, but I also tested manually by creating 2 local clusters, then configuring and starting a tribe node. With this I was able to see in the logs that the tribe node connected to each cluster.
closes #13383
When using S3 or EC2, it was possible to use a proxy to access the EC2 or S3 API, but it was not possible to set a username and password for the proxy. This commit adds support for this. Also, to keep everything consistent, proxy settings for both plugins have been renamed:
* from `cloud.aws.proxy_host` to `cloud.aws.proxy.host`
* from `cloud.aws.ec2.proxy_host` to `cloud.aws.ec2.proxy.host`
* from `cloud.aws.s3.proxy_host` to `cloud.aws.s3.proxy.host`
* from `cloud.aws.proxy_port` to `cloud.aws.proxy.port`
* from `cloud.aws.ec2.proxy_port` to `cloud.aws.ec2.proxy.port`
* from `cloud.aws.s3.proxy_port` to `cloud.aws.s3.proxy.port`
New settings are `proxy.username` and `proxy.password`.
```yml
cloud:
    aws:
        protocol: https
        proxy:
            host: proxy1.company.com
            port: 8083
            username: myself
            password: theBestPasswordEver!
```
You can also set different proxies for `ec2` and `s3`:
```yml
cloud:
    aws:
        s3:
            proxy:
                host: proxy1.company.com
                port: 8083
                username: myself1
                password: theBestPasswordEver1!
        ec2:
            proxy:
                host: proxy2.company.com
                port: 8083
                username: myself2
                password: theBestPasswordEver2!
```
Note that `password` is filtered with `SettingsFilter`.
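For illustration, the registration could look like the existing AWS secret filters (assuming the same `addFilter` style; the exact call sites are not shown here):
```java
// Assumed registration style: hide the proxy passwords from the settings
// APIs, alongside the existing access_key/secret_key filters.
settingsFilter.addFilter("cloud.aws.proxy.password");
settingsFilter.addFilter("cloud.aws.ec2.proxy.password");
settingsFilter.addFilter("cloud.aws.s3.proxy.password");
```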
We also fix a potential issue in the S3 repository: we were supposed to accept the key/secret either set under `cloud.aws` or under `cloud.aws.s3`, but the actual code never implemented that.
It was:
```java
account = settings.get("cloud.aws.access_key");
key = settings.get("cloud.aws.secret_key");
```
We replaced it with:
```java
String account = settings.get(CLOUD_S3.KEY, settings.get(CLOUD_AWS.KEY));
String key = settings.get(CLOUD_S3.SECRET, settings.get(CLOUD_AWS.SECRET));
```
Also, we extract all settings for S3 in `AwsS3Service`, as is already done for the `AwsEc2Service` class.
Closes #15268.
I don't recall this property on any of our field mappers, and it's not in our docs, so I suspect it's very old. The removal of this property will not fail version upgrades since none of the field mappers use it in toXContent.
This commit removes some unneeded null checks from
IndexingMemoryController that were left over from the work in #15251,
and simplifies the try-catch block in
IndexingMemoryController#updateShardBuffers.
For the search refactoring the HighlightBuilder needs a way to create new instances by parsing xContent. For bwc this PR starts by moving over and slightly modifying the parsing from HighlighterParseElement, and keeps parsing for the top level highlighter and the field options separate. It also adds tests for the roundtrip of random builders (rendering a builder to xContent, parsing it back, and making sure the original builder properties are preserved).
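The roundtrip test idea, sketched (helpers like `randomHighlightBuilder` and the exact parsing entry point are illustrative):
```java
// Render a random builder to xContent, parse it back, and verify that no
// properties were lost on the way.
HighlightBuilder original = randomHighlightBuilder(); // illustrative helper
XContentBuilder xContent = XContentFactory.jsonBuilder();
original.toXContent(xContent, ToXContent.EMPTY_PARAMS);
HighlightBuilder parsed = parseHighlightBuilder(xContent); // illustrative entry point
assertEquals(original, parsed);
assertEquals(original.hashCode(), parsed.hashCode());
```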
Since 2.2 we run all scripts with minimal privileges, similar to applets in your browser.
The problem is, they have unrestricted access to other things they can muck with (ES, JDK, whatever).
So they can still easily do tons of bad things.
This PR restricts what classes scripts can load via the classloader mechanism, to make life more difficult.
The "standard" list was populated from the old list used for the groovy sandbox: though
a few more were needed for tests to pass (java.lang.String, java.util.Iterator, nothing scary there).
Additionally, each scripting engine typically needs permissions to some runtime stuff.
That is the downside of this "good old classloader" approach, but I like the transparency and simplicity, and I don't want to waste my time with any feature provided by the engine itself for this; I don't trust them.
This is not perfect and the engines are not perfect, but you gotta start somewhere. For expert users that need to tweak the permissions, we already support that via the standard Java security configuration files; the specification is simple, supports wildcards, etc. (though we do not use wildcards ourselves).
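For example, an expert user could grant a script engine access to one more class with a standard policy file entry like this (assuming the class-loading restriction is expressed as a permission; the permission name below is an assumption):
```
grant {
  // Assumed permission type guarding script class loading; one grant per
  // additional class the scripts should be allowed to load.
  permission org.elasticsearch.script.ClassPermission "java.util.Base64";
};
```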
This commit simplifies shard inactive debug logging to only log when the
physical shard is marked as inactive. This eliminates duplicate logging
that existed in IndexShard#checkIdle and
IndexingMemoryController#checkIdle, and eliminates excessive logging
that was occurring when the shard was already inactive as a result of
the work in #15252.
Currently, when a user tries to install an old plugin (pre 2.x) on a 2.x node, the error message is cryptic (it just prints the path of the missing descriptor file). This improves the message to be more explicit that the descriptor is missing, and suggests that the problem might be that the plugin was built before 2.0.
closes #15197
We currently use the full suite of packaged rest tests for each
distribution. We also used to run rest tests within core integ tests,
but this stopped working when we split out the test-framework, since the
test files are in there.
This change simplifies the code to run packaged rest tests just once,
for the integ-test-zip, and removes the unused rest tests from
test-framework. The distribution rest tests now check that all modules
were loaded.
This commit addresses some issues that arose during the review of #14899
but were lost during squash while integrating into master.
- the number of test threads is dropped to at most eight
- a local variable is renamed for clarity
- task priorities are randomized
This commit fixes a test bug in
ClusterService#testClusterStateBatchedUpdates. In particular, in the
case that an executor did not receive a task assignment from the random
assignments, it would not have an entry in the map of executors to
counts of assigned tasks. The fix is to just check if each executor has
an entry in the counts map.
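In sketch form, the guarded assertion looks like this (names are illustrative, not the actual test code):
```java
// Executors that received zero tasks have no entry in the counts map, so
// check membership instead of assuming one exists for every executor.
for (TaskExecutor executor : executors) {
    if (counts.containsKey(executor)) {
        assertEquals((int) counts.get(executor), executor.processedCount());
    }
}
```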