We can do a better job of validating `buffer_size` and `chunk_size` for S3 repositories.
For example, we know that:
* `buffer_size` should be no less than `5mb`
* `chunk_size` should be no more than `5tb`
* `buffer_size` should be smaller than `chunk_size`; otherwise, setting `buffer_size` is useless.
For the record:
`chunk_size` is a snapshot setting, regardless of the repository implementation.
`buffer_size` is an S3 implementation setting.
Let's say that you are snapshotting a `500mb` file. If you set `chunk_size` to `200mb`, then the snapshot service will ask the S3 repository to snapshot 3 files with the following sizes:
* `200mb`
* `200mb`
* `100mb`
If you set `buffer_size` to `100mb` (the maximum size recommended by AWS), the first `200mb` file will be uploaded to S3 using the multipart feature in 2 chunks, and the workflow is basically the following:
* create the multipart request and get back an `id` from AWS S3 platform
* upload part1: `100mb`
* upload part2: `100mb`
* "commit" the full upload using the `id`.
Closes #17244.
We lost some accounting code in the translog recovery code during refactoring,
which triggers a very rare assertion. If we fail on a recovery target with an
illegal mapping update (which can happen if the cluster state is behind), then
we fail to roll back the number of processed ops in that batch, and once we resume
the batch we trip an assertion that the stats are off.
This commit brings back the code lost in 8bc2332d9a
and improves the comment that explains why we need this rollback logic.
Now that `string` has been split into `text` and `keyword`, we use `text` as the
dynamic type when encountering string fields in a JSON document. However,
this does not play well with existing templates that look like:
```
{
  "mapping": {
    "index": "not_analyzed",
    "type": "{dynamic_type}"
  },
  "match": "*"
}
```
Since we want existing templates to keep working as much as possible in 5.0,
this commit adds a hack to dynamic templates so that Elasticsearch will create
a keyword field if the `index` property is set and is either `no` or
`not_analyzed`, similarly to what was done in #16991.
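As a sketch of the effect, a string field matched by the template above (the field name here is hypothetical) would now be mapped as keyword rather than text:
```
"my_string_field": {
  "type": "keyword"
}
```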
While this will make upgrades easier, we still need to figure out a way to
allow users to create keyword fields when using dynamic types.
The fielddata settings in mappings have been refactored so that:
- text and string have a `fielddata` (boolean) setting that tells whether it
is ok to load in-memory fielddata. It is true by default for now but the
plan is to make it default to false for text fields.
- text and string have a `fielddata_frequency_filter` which contains the same
  thing as `fielddata.filter.frequency` used to (but validated at parsing time
  instead of being an unchecked setting).
- regex fielddata filtering is not supported anymore and will be dropped from
mappings automatically on upgrade.
- text, string and _parent fields have an `eager_global_ordinals` (boolean)
setting that tells whether to load global ordinals eagerly on refresh.
- in-memory fielddata is not supported on keyword fields anymore at all.
- the `fielddata` setting is not supported on fields other than text and string
  and will be dropped on upgrade if specified.
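For illustration, a minimal sketch of a text field mapping using the new settings (the field name and filter thresholds are made up for the example):
```
"properties": {
  "body": {
    "type": "text",
    "fielddata": true,
    "eager_global_ordinals": true,
    "fielddata_frequency_filter": {
      "min": 0.001,
      "max": 0.1,
      "min_segment_size": 500
    }
  }
}
```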
Currently, both Gsub and Grok parse regex strings during
Pipeline creation. Thrown parsing exceptions were leaking out; this
commit wraps those exceptions in `ElasticsearchParseException`s.
This commit mocks the value of rlimit infinity in the max size virtual
memory check test. This is to avoid attempting to load the native C
library during the test on Windows which would lead to a permissions
violation (the native C library needs to be loaded before the security
manager is set up).
Archive cluster level settings if unknown or broken
We already archive index level settings if we find an unknown or invalid/broken
value for a setting on node startup. The same could potentially happen for persistent
cluster level settings if we remove a setting or if we add validation to a setting that
didn't exist in the past. To ensure that only valid settings are recovered into the cluster
state, we archive them (prefix them with `archive.`) and log a warning. Tools that check the
cluster settings can then warn users that they have broken settings in their cluster state that
got archived.
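For example, a persistent setting that fails validation on recovery would reappear under the archive prefix, roughly like this (the setting name is made up):
```
{
  "persistent": {
    "archive.some.removed.setting": "whatever value was stored"
  }
}
```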
This commit adds a bootstrap check on Linux and OS X for the max size of
virtual memory (address space) available to the user running the Elasticsearch
process.
Closes #16935
Currently, if you run an `exists` query on an object, it will resolve all sub-fields
and create a disjunction over all those fields. However, the `_field_names`
mapper indexes paths for objects, so we can query object paths directly.
I also changed the query parser to reject `exists` queries if the `_field_names`
field is disabled since it would be a big performance trap.
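For reference, a minimal sketch of such a query against an object field (the field name is hypothetical); it can now be answered from the object's own path in `_field_names` rather than a disjunction over all its sub-fields:
```
{
  "query": {
    "exists": {
      "field": "address"
    }
  }
}
```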
This commit sets up the default filesystem used during install plugins
tests. A hack is needed to handle the temporary directory because the
system property "java.io.tmpdir" will have been initialized to a value
that is sensible for the default filesystem, but not necessarily to a
value that makes sense for the mock filesystem in use during the
tests. This property is restored after each test.
For the refactoring of SortBuilders related to #10217, each SortBuilder needs a build()
method that produces a SortField according to the SortBuilder's parameters on the shard.
This commit fixes string formatting issues in the error handling and
provides a better error message if malformed input is detected.
This commit also adds tests for both situations.
Relates to #17212
This is the last step in removing node-level services from IndexShard.
Tests can now more easily create an IndexShard instance
without starting a node, and the dependency between IndexShard and Client/ScriptService is removed.
When a master is elected, it reaches out to all master nodes for their cluster state, selecting the one with the highest version. At the moment, we do another round to select the index metadata with the highest version as well. This is not needed: the election of a cluster state is enough, and we should just use whatever indices are in it.
Closes #17233
In #17187, we upgrade index state after upgrading
index folder structure. As we don't have to write
the upgraded state in the old index folder structure,
we can clean up how we write the upgraded index state.