Currently if a document cannot be indexed because it violates the defined
mapping for the index, a MapperException is thrown. In some cases it is
useful to expose the expected field type in the exception itself,
so that the user can react based on the error message. This change adds
the expected data type to the MapperException.
Closes #31502
Remove a few of the logger constructors that aren't widely used or
aren't used at all and deprecate a few more logger constructors in favor
of log4j2's `LogManager`.
A bug in the test suite prevented us from properly checking that all date
formatters print dates the same way joda time does.
This fixes the test and thus also a fair share of formats, which
now use the strict parser for printing.
We previously discussed moving the classes extending `AcknowledgedResponse` to
simply use `AcknowledgedResponse`, making the class non-abstract.
This moves the first class to do this, removing `WritePipelineResponse` in the
process.
If we like the way this looks, I will switch the remaining classes over to using
`AcknowledgedResponse`.
When the circuit breaker has tripped, certain diagnostic requests like
"_cluster/health" succeed, whereas a request to / fails with
503 Service Unavailable. This behavior is observed because of
commit f32b700, where certain API paths are whitelisted from the
circuit-breaking exception, but / is not whitelisted.
Added / to the circuit breaker whitelist so that it can be used for
diagnostic purposes.
* Fixes suggestion generics
This solves a compile problem in Eclipse where Eclipse could not
resolve the generics for the options field in `PhraseSuggestion.Entry`.
But I think this is also a good change in general because
`PhraseSuggestion.Entry` is now declaring the specific `Option`
implementation it requires rather than `Suggest.Entry.Option` which is
more general and could lead to weird bugs. `CompletionSuggestion.Entry`
and `TermSuggestion.Entry` already declare the more specific class they
use, so I think this was an oversight in `PhraseSuggestion.Entry`.
* iter
`ShardOperationFailedException` and corresponding implementors seem to suggest that the cause may be null, a case that is also handled in a few places. Yet, it does not seem to be possible in practice for the cause to be null, hence we can clean that up and enforce the cause to be a non-null value. This is best done by making `ShardOperationFailedException` an abstract class rather than an interface, which holds the basic members that all the subclasses have in common and can also enforce that cause, status and reason are non-null.
As part of #32608 we made sure that the fully qualified index name is taken from the query shard context whenever creating a new `QueryShardException`. That change introduced a regression: instead of setting the entire `Index` object on the exception, which holds the index name and index uuid, we ended up setting only the index name (including cluster alias). With this commit we make sure that the index uuid does not get lost, and we try to lower the chances that a similar bug makes it in again. That's done by making `QueryShardContext` return the fully qualified `Index` (which also holds the uuid) rather than only the fully qualified index name.
Suggestion responses were previously serialized as streamables which
made writing suggesters in plugins with custom suggestion response types
impossible. This commit makes them serialized as named writeables and
provides a facility for registering a reader for suggestion responses
when registering a suggester.
This also makes Suggestion responses abstract, requiring a suggester
implementation to provide its own types. Suggesters which do not need
anything additional to what is defined in Suggest.Suggestion should
provide a minimal subclass.
The existing plugin suggester integration tests are removed and
replaced with an equivalent implementation as an example
plugin.
It will be useful for future efforts to know if the global checkpoint
was updated. To this end, we need to expose whether or not the global
checkpoint was updated when the state of the replication tracker
updates. For this, we add to the tracker a callback that is invoked
whenever the global checkpoint is updated. For primaries this will be
invoked when the computed global checkpoint is updated based on state
changes to the tracker. For replicas this will be invoked when the local
knowledge of the global checkpoint is advanced from the primary.
The MockNioTransport (similar to the MockTcpTransport) is used for integ
tests. The MockTcpTransport has always opened only a single connection for all of
its work. The MockNioTransport has always opened the default number of
connections (13). This means that every test where two transports
connect requires 26 connections. This is more than is necessary. This
commit modifies the MockNioTransport to only require 3 connections.
Primary terms were introduced as part of the sequence-number effort (#10708) and added in ES
5.0. Subsequent work introduced the replication tracker which lets the primary own its replication
group (#25692) to coordinate recovery and replication. The replication tracker explicitly exposes
whether it is operating in primary mode or replica mode, independent of the ShardRouting object
that's associated with a shard. During a primary relocation, for example, the primary mode is
transferred between the primary relocation source and the primary relocation target. After
transferring this so-called primary context, the old primary becomes a replication target and the
new primary the replication source, reflected in the replication tracker on both nodes. With the
most recent PR in this area (#32442), we finally have a clean transition between a shard that's
operating as a primary and issuing sequence numbers and a shard that's serving as a replication
target. The transition from one state to the other is enforced through the operation-permit system,
where we block permit acquisition during such changes and perform the transition under this
operation block, ensuring that there are no operations in progress while the transition is being
performed. This finally allows us to turn the best-effort checks that were put in place to prevent
shards from being used in the wrong way (i.e. primary as replica, or replica as primary) into hard
assertions, making it easier to catch any bugs in this area.
Currently, when TranslogCorruptedException is thrown it most of the time does not contain information about the translog location on the file system. There is a translog recovery tool that accepts the translog path as an argument, and users are constantly puzzled about where to get the path.
This pull request adds "source" information to every TranslogCorruptedException thrown. The source can be a local file, a remote translog source (used for recovery), an assertion (a translog entry constructed to perform some assertion) or a translog constructed inside a test.
Closes #24929
This change adds a check so that when parsing the search source, script fields are
ignored when the requested search result size is 0. This helps with e.g. clients like
Kibana that send a list of script fields that they may need for convenience, but
don't require any hits. Before this change, users sometimes ran into confusing behaviour,
e.g. hitting the script compilation limit although no hits were requested.
Closes #31824
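As a sketch of the effect (index and field names here are made up), a request like the one below asks for script fields but no hits; with this change the scripts are simply not compiled or run:
```
GET /my-index/_search
{
  "size": 0,
  "query": { "match_all": {} },
  "script_fields": {
    "price_times_two": {
      "script": {
        "lang": "painless",
        "source": "doc['price'].value * 2"
      }
    }
  }
}
```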
* We were comparing the wrong timeout value in the `randomValueOtherThan` call here, leading to no mutation happening for a certain seed
* closes #32639
Today content type detection on an input stream works by peeking up to
twenty bytes into the stream. If the stream begins with more than twenty
bytes of whitespace, we might fail to detect the content type. We should
be ignoring this whitespace before attempting to detect the content
type. This commit does that by ignoring all leading whitespace in an
input stream before attempting to guess the content type.
The assertion in the test was not broad enough. If the timing is very unlucky, the
shard is already promoted to primary before the indexOnReplica even gets to execute.
Closes #32645
* Remove UpdateSettingsTestHelper class
By making the `settings()` method public on `UpdateSettingsRequest` (I think it
should have been in the first place) we can get rid of this class entirely. Mock
response objects are now constructed by parsing JSON without making the
constructor public.
Relates to #29823
* Remove RolloverIndexTestHelper
This removes the `RolloverIndexTestHelper` class in favor of making a couple of
getters publicly accessible as well as custom-building a response object using
JSON parsing.
Relates to #29823
This commit removes the hacks associated with mocking Response objects. Rather
than parse a wrapped byte array, the constructors for `IndicesAliasesResponse`
and `ResizeResponse` are made public
Relates to #29823
When some remote clusters return shard failures as part of a cross-cluster search request, the cluster alias currently gets lost. As a result, if the shard failures are all caused by the same error, and against indices belonging to different clusters, but with the same index name, only one failure gets returned as part of the search response, meaning that failures are grouped by index name, ignoring the cluster alias.
With this commit we make sure that `ShardSearchFailure` returns the cluster alias as part of the index name. Also, we set the fully qualified index name when creating a `QueryShardException`. That way shard failures are grouped by cluster:index. Such fixes should cover at least most of the cases where either 1) the shard target is set but we don't have the index in the cause (we were previously reading it only from the cause, which did not have the cluster alias) or 2) the shard target is missing but the cause is a `QueryShardException`, so the cluster alias does not get lost.
We also prevent NPE in case the failure cause is not set and test such scenario.
If the shard is already closed while bumping the primary term, this can result in an
AlreadyClosedException to be thrown. As we use asyncBlockOperations, the exception
will be thrown on a thread from the generic thread pool and end up in the uncaught
exception handler, failing our tests.
Relates to #32442
Some classes use internal date formatters, which now can be moved over
to java time using the DateFormatters class.
The same applies for a few test cases.
This unmutes the `testFanOutAndCollect()` method and adds a check to make
sure we aren't accidentally running something twice, causing a search
phase to still be running after we have counted down the latch.
Relates to #29242
We've recently seen a number of test failures that tripped an assertion in IndexShard (see issues
linked below), leading to the discovery of a race between resetting a replica when it learns about a
higher term and when the same replica is promoted to primary. This commit fixes the race by
distinguishing between a cluster state primary term (called pendingPrimaryTerm) and a shard-level
operation term. The former is set during the cluster state update or when a replica learns about a
new primary. The latter is only incremented under the operation block, which can happen in a
delayed fashion. It also solves the issue where a replica that's still adjusting to the new term
receives a cluster state update that promotes it to primary, which can happen in the situation of
multiple nodes being shut down in short succession. In that case, the cluster state update thread
would call `asyncBlockOperations` in `updateShardState`, which in turn would throw an exception
as blocking permits is not allowed while an ongoing block is in place, subsequently failing the shard.
This commit therefore extends the IndexShardOperationPermits to allow it to queue multiple blocks
(which will all take precedence over operations acquiring permits). Finally, it also moves the primary
activation of the replication tracker under the operation block, so that the actual transition to
primary only happens under the operation block.
Relates to #32431, #32304 and #32118
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
Unmuting the test and adding some more debug output. Was not able to
reproduce the prior failure, but it seems possible that the
failure (mismatched counts) could be caused by partial search results
during the test.
The assertions check for shard failures first, because if one of the
two searches is partial the rest of the test will fail.
Next, instead of just checking respective hit counts, we emit the
difference in hits to help identify what went wrong.
Closes #32492
Since LUCENE-8263, testSeqNoAndCheckpoints might trigger merges because
of the updates and deletes in the test. Our merge scheduler will trigger
a flush if there is no pending merge. Those extra flushes will change
the last committed segmentInfos in the engine and fail the test.
This commit uses LogMergePolicy for the engine in the test to avoid
merges.
Closes #32430
This commit adds a boolean system property, `es.scripting.use_java_time`,
which controls the concrete return type used by doc values within
scripts. The return type of accessing doc values for a date field is
changed to Object, essentially duck typing the type to allow
co-existence during the transition from joda time to java time.
This removes a constructor from `AbstractComponent` and
`AbstractLifecycleComponent` that we weren't using and it switches the
logger creation away from one of the `Settings` flavored methods which
are no longer needed.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
When using cross-cluster search through the high-level REST client, the cluster alias from each search hit was not parsed correctly. It would be part of the index field initially, but overridden just a few lines later once setting the shard target (in case we have enough info to build it from the response). In any case, getClusterAlias returns `null` which is a bug.
With this change we rather parse back clusterAliases from the index name, set its corresponding field and properly handle the two possible cases depending on whether we can or cannot build the shard target object.
The method for working out whether a polygon is clockwise or anticlockwise is
mostly correct but doesn't work in some rare cases such as the included test
case. This commit fixes that.
Rollover should not swap aliases when `is_write_index` is set to `true`.
Instead, both the new and old indices should have the rollover alias,
with the newly created index as the new write index.
Updates Rollover to leverage the ability to preserve aliases and to swap which index is the write index.
Historically, Rollover would swap which index had the designated alias for writing documents against. This required users to keep a separate read-alias that enabled reading against both rolled-over and newly created indices, while the write-alias was being re-assigned at every rollover.
With the ability for aliases to designate a write index, Rollover can be a bit more flexible with its use of aliases.
Updates include:
- Rollover validates that the target alias has a write index (the index that is being rolled over). This means that the restriction that aliases only point to one index is no longer necessary.
- Rollover explicitly (and atomically) swaps which index is the write index by assigning the existing index `is_write_index: false` and giving the newly created index the rollover alias with `is_write_index: true` (an example follows this list). This is only done when `is_write_index: true` is set on the write index. The default behavior of removing the alias from the rolled-over index stays when `is_write_index` is not explicitly set.
Relevant things that are staying the same:
- Rollover is rejected if there exist any templates that match the newly-created index and configure the rollover-alias
- I think this existed to prevent the situation where an alias pointed to two indices for a short while. Although this can technically be relaxed, the specific cases that are safe are really particular and difficult to reason about, so leaving the broad restriction sounds good.
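To illustrate the new flow (index and alias names here are illustrative, not taken from the change itself): the rollover alias is created as a write alias, and after rollover both indices keep the alias while only the newly created index is marked as the write index.
```
PUT /logs-000001
{
  "aliases": {
    "logs": { "is_write_index": true }
  }
}

POST /logs/_rollover
{
  "conditions": { "max_docs": 100000 }
}
```
After the rollover, `logs-000001` keeps the `logs` alias with `is_write_index: false` and the newly created index carries it with `is_write_index: true`.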
* Ensure decryption related exceptions are handled
This commit ensures that all possible Exceptions in
KeyStoreWrapper#decrypt() are handled. More specifically, in the
case that a wrong password is used for secure settings, calling readX
on the DataInputStream that wraps the CipherInputStream can throw an
IOException. It also adds a test for loading a KeyStoreWrapper with
a wrong password.
Resolves #32411
In rare cases it is possible that a node gets an instruction to replace a replica
shard that's in `POST_RECOVERY` with a new initializing primary with the same allocation id.
This can happen by batching cluster states that include the starting of the replica with the
closing of the indices, opening them up again and allocating the primary shard to the node in
question. The node should then clean its initializing replica and replace it with a new
initializing primary.
I'm not sure whether the test I added really adds enough value, as existing tests found this. The main reason I added it is to allow for simpler reproduction and to double-check that I fixed it. I'm open to discussing whether we should keep it.
Closes #32308
`GetResult` and `SearchHit` have been adjusted to parse back the `_ignored` meta field whenever it gets printed out. Expanded the existing tests to make sure this is covered. Fixed also a small problem around highlighted fields in `SearchHitTests`.
Due to the recent change in LUCENE-8263, we need to adjust the deletion
ratio to between 10% and 33% to preserve the current behavior of the
test. However, we may need another refinement if soft-deletes is enabled
as the actual deletes are different because of delete tombstones.
This commit prefers to always execute forceMerge instead of adjusting
the deletion ratio so that this test can focus on testing docStats.
Closes #32449
Due to the recent change in LUCENE-8263, a merge can be triggered if the
deletion ratio is higher than 33%. An in-progress merge can prevent a
synced-flush from succeeding.
This commit avoids deletes by using different docIds.
Closes #32436
The main highlight is the removal of the reclaim_deletes_weight in the TieredMergePolicy.
The ES setting index.merge.policy.reclaim_deletes_weight is deprecated in this commit and its value is ignored. The new merge policy setting setDeletesPctAllowed should be added in a follow-up.
This commit changes the randomization to always create an index with a type.
It also adds a way to create a query shard context that maps to an index with
no type registered in order to explicitly test cases where there is no type.
* The short script form was normalized to a map that used 'inline' instead of 'source', so a short-form processor definition like:
```
{
"script": "ctx.foo= 'bar'"
}
```
would always warn about the following deprecation:
```
#! Deprecation: Deprecated field [inline] used, expected [source]
```
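For reference, the long form with `source` avoids the warning; a sketch of the equivalent definition:
```
{
  "script": {
    "source": "ctx.foo = 'bar'"
  }
}
```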
In testSyncedFlushSkipOutOfSyncReplicas, we reindex the extra documents
to all shards including the out-of-sync replica. However, reindexing to
that replica can trigger merges (due to the new deletes) which cause the
synced-flush to fail. This test started failing after we began aggressively
triggering merges of segments with a large number of deletes in LUCENE-8263.
Removing some dead code or suppressing warnings where appropriate. Most of the
time the variable tested for null is dereferenced earlier or never used before.
Today we allow plugins to add index store implementations yet we are not
doing this in our new way of managing plugins as pull versus push. That
is, today we still allow plugins to push index store providers via an on
index module call where they can turn around and add an index
store. Aside from being inconsistent with how we manage plugins today
where we would look to pull such implementations from plugins at node
creation time, it also means that we do not know at a top-level (for
example, in the indices service) which index stores are available. This
commit addresses this by adding a dedicated plugin type for index store
plugins, removing the index module hook for adding index stores, and by
aggregating these into the top-level of the indices service.
An upcoming [Lucene change](https://issues.apache.org/jira/browse/LUCENE-7976)
will make TieredMergePolicy respect the maximum merged segment size all the
time, meaning it will possibly not respect the `max_num_segments` parameter
anymore if the shard is larger than the maximum segment size.
This change makes sure that `max_num_segments` is respected for now in order
to give us time to think about how to integrate this change, and also to delay
it until 7.0 as this might be a big-enough change for us to wait for a new
major version.
* Introduce fips_mode setting and associated checks
Introduce xpack.security.fips_mode.enabled setting (default false)
When it is set to true, a number of Bootstrap checks are performed:
- Check that Secure Settings are of the latest version (3)
- Check that no JKS keystores are configured
- Check that compliant algorithms (PBKDF2 family) are used for
password hashing
This commit introduces "Application Privileges" to the X-Pack security
model.
Application Privileges are managed within Elasticsearch, and can be
tested with the _has_privileges API, but do not grant access to any
actions or resources within Elasticsearch. Their purpose is to allow
applications outside of Elasticsearch to represent and store their own
privileges model within Elasticsearch roles.
Access to manage application privileges is handled in a new way that
grants permission to specific application names only. This lays the
foundation for more OLS on cluster privileges, which is implemented by
allowing a cluster permission to inspect not just the action being
executed, but also the request to which the action is applied.
To support this, a "conditional cluster privilege" is introduced, which
is like the existing cluster privilege, except that it has a Predicate
over the request as well as over the action name.
Specifically, this adds
- GET/PUT/DELETE actions for defining application level privileges
- application privileges in role definitions
- application privileges in the has_privileges API
- changes to the cluster permission class to support checking of request
objects
- a new "global" element on role definition to provide cluster object
level security (only for manage application privileges)
- changes to `kibana_user`, `kibana_dashboard_only_user` and
`kibana_system` roles to use and manage application privileges
Closes #29820, Closes #31559
* Complete changes for running IT in a fips JVM
- Mute :x-pack:qa:sql:security:ssl:integTest as it
cannot run in FIPS 140 JVM until the SQL CLI supports key/cert.
- Set default JVM keystore/truststore password in top level build
script for all integTest tasks in a FIPS 140 JVM
- Changed top level x-pack build script to use keys and certificates
for trust/key material when spinning up clusters for IT
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is `∑(value * weight) / ∑(weight)`.
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
Closes #15731
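An illustrative request (index and field names are assumptions, not part of this change):
```
POST /exams/_search?size=0
{
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight" }
      }
    }
  }
}
```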
ClassCastException can be thrown by callers of TransportActions.isShardNotAvailableException(e) as e is not always an instance of ElasticsearchException.
fixes #32173
Currently we check that the query that QueryStringQueryBuilder#toQuery returns
is one out of a list of many Lucene query classes. This list has grown a lot over time,
since QueryStringQueryBuilder can build all sorts of queries. This makes the test hard to
maintain. The recent addition of alias fields, which build a BlendedTermQuery, shows how
easily this test breaks. Also, the current assertions don't add a lot in terms of catching
errors. This is why we decided to remove this check.
Closes #32234
The parent filter for nested sort should always match **all** parents regardless
of the child queries. It is used to find the boundaries of a single parent and we use
the child query to match all the filters set in the nested tree so there is no need to
repeat the nested filters.
With this change we ensure that we build bitset filters
only to find the root docs (or the docs at the level where the sort applies) that can be reused
among queries.
Closes #31554, Closes #32130, Closes #31783
Co-authored-by: Dominic Bevacqua <bev@treatwell.com>
* Enhance Parent circuit breaker error message
This adds information about either the current real usage (if tracking "real"
memory usage) or the child breaker usages to the exception message when the
parent circuit breaker trips.
The messages now look like:
```
[parent] Data too large, data for [my_request] would be [211288064/201.5mb], which is larger than the limit of [209715200/200mb], usages [request=157286400/150mb, fielddata=54001664/51.5mb, in_flight_requests=0/0b, accounting=0/0b]
```
Or when tracking real memory usage:
```
[parent] Data too large, data for [request] would be [251/251b], which is larger than the limit of [200/200b], real usage: [181/181b], new bytes reserved: [70/70b]
```
* Only call currentMemoryUsage once by returning structured object
Resolving wildcards in aliases expression is challenging as we may end
up with no aliases to replace the original expression with, but if we
replace with an empty array that means _all which is quite the opposite.
Now that we support and serialize the original requested aliases,
whenever aliases are replaced we will be able to know what was
initially requested. `MetaData#findAliases` can then be updated to not
return anything in case it gets empty aliases, but the original aliases
were not empty. That means that empty aliases are interpreted as _all
only if they were originally requested that way.
Relates to #31516
Throw an exception for doc['field'].value
if this document is missing a value for the field.
After deprecation changes have been backported to 6.x,
make this a default behaviour in 7.0
Closes #29286
Now write operations like Index, Delete, Update rely on the write-index associated with
an alias to operate against. This means writes will be accepted even when an alias points to multiple indices, so long as one is the write index. Routing values will be used from the AliasMetaData for the alias in the write-index. All read operations are left untouched.
* Add basic support for field aliases in index mappings. (#31287)
* Allow for aliases when fetching stored fields. (#31411)
* Add tests around accessing field aliases in scripts. (#31417)
* Add documentation around field aliases. (#31538)
* Add validation for field alias mappings. (#31518)
* Return both concrete fields and aliases in DocumentFieldMappers#getMapper. (#31671)
* Make sure that field-level security is enforced when using field aliases. (#31807)
* Add more comprehensive tests for field aliases in queries + aggregations. (#31565)
* Remove the deprecated method DocumentFieldMappers#getFieldMapper. (#32148)
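A minimal sketch of a field alias mapping (index, type and field names are made up); queries, aggregations and stored-field retrieval can then refer to `route_length_miles` as if it were the concrete `distance` field:
```
PUT /trips
{
  "mappings": {
    "_doc": {
      "properties": {
        "distance": { "type": "long" },
        "route_length_miles": {
          "type": "alias",
          "path": "distance"
        }
      }
    }
  }
}
```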
When building custom tokenfilters without an index in the _analyze endpoint,
we need to ensure that referring filters are correctly built by calling
their #setReferences() method
Fixes #32154
When a replica is fully recovered (i.e., in `POST_RECOVERY` state) we send a request to the master
to start the shard. The master changes the state of the replica and publishes a cluster state to that
effect. In certain cases, that cluster state can be processed on the node hosting the replica
*together* with a cluster state that promotes that, now started, replica to a primary. This can
happen due to cluster state batched processing or if the master died after having committed the
cluster state that starts the shard but before publishing it to the node with the replica. If the master
also held the primary shard, the new master node will remove the primary (as it failed) and will also
immediately promote the replica (thinking it is started).
Sadly our code in IndexShard didn't allow for this, which caused [assertions](13917162ad/server/src/main/java/org/elasticsearch/index/seqno/ReplicationTracker.java (L482)) to be tripped in some of our test runs.
With the introduction of single types in 6.x, the `_type` field is no longer
indexed, which leads to certain queries that were working before throwing errors
now. One such query is the `range` query, which, if performed on a single-type
index, currently throws an IAE since the field is not indexed.
This change adds special treatment for this case in the TypeFieldMapper,
comparing the range query's lower and upper bounds to the one existing type and
returning either a MatchAllDocs or a MatchNoDocs query.
Relates to #31632. Closes #31476
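For example, a range query on `_type` such as the following (index name assumed) no longer throws on a single-type index; depending on whether the single type falls within the bounds, it is rewritten to a match-all or a match-none query:
```
GET /my-index/_search
{
  "query": {
    "range": {
      "_type": {
        "gte": "a",
        "lte": "z"
      }
    }
  }
}
```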
With the introduction of sequence number, we no longer use versionType to
resolve out of order collision in replication and recovery requests.
This PR removes the versionType from the translog. We can only remove
it in 7.0 because it is still required in a mixed cluster between 6.x and 5.x.
This commit moves additional unit test runners from being dependencies
of the test task to dependencies of check. Without this change,
reproduce lines are incorrect due to the additional test runner not
matching any of the reproduce class/method info.
closes #31964
Previously we created a translog snapshot inside the resync method,
and that snapshot would be closed by the resync listener. However, if
the resync method threw an exception before the resync listener
was initialized, the translog snapshot wouldn't be released.
Closes #32030
Ensure our tests can run in a FIPS JVM
JKS keystores cannot be used in a FIPS JVM as attempting to use one
in order to init a KeyManagerFactory or a TrustManagerFactory is not
allowed (JKS keystore algorithms for private key encryption are not
FIPS 140 approved).
This commit replaces JKS keystores in our tests with the
corresponding PEM encoded key and certificates both for key and trust
configurations.
Whenever it's not possible to refactor the test, i.e. when we are
testing that we can load a JKS keystore, etc. we attempt to
mute the test when we are running in a FIPS 140 JVM. Testing for the
JVM is naive and is based on the name of the security provider, as
we control the testing infrastructure and so this should be
reliable enough.
Other cases of tests being muted are the ones that involve custom
TrustStoreManagers or KeyStoreManagers, null TLS Ciphers and the
SAMLAuthenticator class as we cannot sign XML documents in the
way we were doing. SAMLAuthenticator tests in a FIPS JVM can be
reenabled with precomputed and signed SAML messages at a later stage.
IT will be covered in a subsequent PR
The current docs of the put-mapping Java API are broken. In their current
form, they create an index and use the whole mapping definition given as a JSON
string as the type name. Since we didn't check the index created in the
IndicesDocumentationIT so far, this went unnoticed.
This change adds a test to the documentation tests to catch this error, changes the
documentation so it works correctly now, and adds input validation to
PutMappingRequest#buildFromSimplifiedDef(), which was used internally, to reject
calls where no mapping definition is given.
Closes #31906
Dealing with empty fields in the highlight phase can
slow down the query because the query terms extraction is done independently
on each field. This change shortcuts the highlighting performed by the unified highlighter
for fields that are not present in the document. In such cases there is nothing to highlight so
we don't need to visit the query to build the highlight builder.
With this commit we raise the limit of the child circuit breaker used in
the unit test for the circuit breaker service so it is high enough to trip
only the parent circuit breaker. The previous limit was 300 bytes but
theoretically (considering overhead) we could reach 346 bytes. Thus any
value larger than 300 bytes could trip the child circuit breaker leading
to spurious failures.
Relates #31767
* Replace Ingest ScriptContext with Custom Interface
* Make org.elasticsearch.ingest.common.ScriptProcessorTests#testScripting more precise
* Don't mock script factory in ScriptProcessorTests
* Adjust mock script plugin in IT for new API
Make SnapshotInfo and CreateSnapshotResponse parsers lenient for backwards compatibility. Remove extraneous fields from CreateSnapshotRequest toXContent.
* Adds a new auto-interval date histogram
This change adds a new type of histogram aggregation called `auto_date_histogram` where you can specify the target number of buckets you require and it will find an appropriate interval for the returned buckets. The aggregation works by first collecting documents in buckets at a second interval; once it has created more than the target number of buckets it merges these buckets into minute-interval buckets and continues collecting until it reaches the target number of buckets again. It will keep merging buckets when it exceeds the target until either collection is finished or the highest interval (currently years) is reached. A similar process happens at reduce time.
This aggregation intentionally does not support min_doc_count, offset and extended_bounds to keep the already complex logic from becoming more complex. The aggregation accepts sub-aggregations but will always operate in `breadth_first` mode deferring the computation of sub-aggregations until the final buckets from the shard are known. min_doc_count is effectively hard-coded to zero meaning that we will insert empty buckets where necessary.
Closes #9572
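An illustrative request (index and field names assumed) asking for roughly ten buckets:
```
POST /sales/_search?size=0
{
  "aggs": {
    "sales_over_time": {
      "auto_date_histogram": {
        "field": "date",
        "buckets": 10
      }
    }
  }
}
```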
* Adds documentation
* Added sub aggregator test
* Fixes failing docs test
* Brings branch up to date with master changes
* trying to get tests to pass again
* Fixes multiBucketConsumer accounting
* Collects more buckets than needed on shards
This gives us more options at reduce time in terms of how we do the
final merge of the buckeets to produce the final result
* Revert "Collects more buckets than needed on shards"
This reverts commit 993c782d117892af9a3c86a51921cdee630a3ac5.
* Adds ability to merge within a rounding
* Fixes non-timezone doc test failure
* Fix time zone tests
* iterates on tests
* Adds test case and documentation changes
Added some notes in the documentation about the intervals that can be
returned.
Also added a test case that utilises the merging of consecutive buckets
* Fixes performance bug
The bug meant that getting the appropriate rounding took a huge amount of time
if the range of the data was large but also sparsely populated. In
these situations the rounding would be very low so iterating through
the rounding values from the min key to the max key took a long time
(~120 seconds in one test).
The solution is to add a rough estimate first which chooses the
rounding based just on the long values of the min and max keys alone,
but selects the rounding one lower than the one it thinks is
appropriate so the accurate method can choose the final rounding taking
into account the fact that intervals are not always fixed length.
The commit also adds more tests.
* Changes to only do complex reduction on final reduce
* merge latest with master
* correct tests and add a new test case for 10k buckets
* refactor to perform bucket number check in innerBuild
* correctly derive bucket setting, update tests to increase bucket threshold
* fix checkstyle
* address code review comments
* add documentation for default buckets
* fix typo
Because this is a static method on a public API, and one that we encourage
plugin authors to use, the method with the typo is deprecated in 6.x
rather than just renamed.
With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting `indices.breaker.total.use_real_memory`.
Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.
Relates #31767
Forces fetch tasks to queue even in the event that the queue is
already full. The reasoning is that fetch tasks may only be follow-up
to query tasks, so the number of additional fetch tasks that may enter
the threadpool is expected to be reasonable.
Closes #29442
This test produced different implementations of joda time classes,
depending on if the data was serialized or not (DateTime vs
MutableDateTime). This now uses a common base class to extract the
milliseconds from the data.
Closes #31992
* Added lenient flag for synonym-tokenfilter.
Relates to #30968
* added docs for synonym-graph-tokenfilter
-- Also made lenient final
-- changed from !lenient to lenient == false
* Changes after review (1)
-- Renamed to ElasticsearchSynonymParser
-- Added explanation for ElasticsearchSynonymParser::add method
-- Changed ElasticsearchSynonymParser::logger instance to static
* Added lenient option for WordnetSynonymParser
-- also added more documentation
* Added additional documentation
* Improved documentation
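A sketch of the flag in an index setup (names are illustrative); with `lenient` set to `true`, synonym rules that fail to parse are skipped rather than failing the whole filter:
```
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym",
          "lenient": true,
          "synonyms": ["foo, bar", "baz"]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "my_synonyms"]
        }
      }
    }
  }
}
```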
The initial check will never be true, because of the special semantics of NaN,
where no value is equal to NaN, including NaN itself. Thus, x == Double.NaN always
evaluates to false. The method still works correctly because later computations
will also return NaN if the avg argument is NaN, but the intended shortcut
doesn't work.
A newly added class called DateFormatters now contains java.time based
builders for dates, which are also intended to be fully backwards compatible
when the name-based date formatters are picked. Also, a new class named
CompoundDateTimeFormatter has been added for parsing multiple different
formats.
A duelling test class has been added that ensures java time and joda time
produce the same dates when parsing the name-based date formats.
Note, that java.time and joda time are not fully backwards compatible,
which also means that old formats will currently not work with this
setup.
* add support for is_write_index in put-alias body parsing
The Rest Put-Alias Action does separate parsing of the alias body
to construct the IndicesAliasesRequest. This extra parsing
was missed in #30703.
* test that the flag is not just ignored by the parser
* disable backcompat tests
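With this change the flag can be sent in the put-alias body as well; a sketch (index and alias names are made up):
```
PUT /logs-000002/_alias/logs
{
  "is_write_index": true
}
```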
* Handle missing values in painless
Throw an exception for `doc['field'].value`
if this document is missing a value for the `field`.
For 7.0:
This is the default behaviour from 7.0
For 6.x:
To enable this behavior from 6.x, a user can set a jvm.option:
`-Des.script.exception_for_missing_value=true` on a node.
If a user does not enable this behavior, a deprecation warning is logged on start up.
Closes #29286
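With this behaviour enabled, scripts need to guard against missing values explicitly; a sketch (index and field names assumed):
```
GET /my-index/_search
{
  "script_fields": {
    "safe_value": {
      "script": {
        "lang": "painless",
        "source": "doc['my_field'].size() != 0 ? doc['my_field'].value : 0"
      }
    }
  }
}
```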
If a get alias api call requests a specific alias pattern then
indices not having any matching aliases should not be included in the response.
This is a second attempt to fix this (first attempt was #28294).
The reason the first attempt was reverted is that when xpack
security is enabled, index expressions (like * or _all) are resolved
before the request is processed in the get aliases transport action, so
`MetaData#findAliases` can't know whether all aliases were requested,
since the expression was already expanded into concrete alias names. This
change replaces the aliases(...) method with a replaceAliases(...) method on the
AliasesRequest class and leaves the aliases(...) method on subclasses. So there is a
distinction between xpack security replacing aliases and a user setting aliases via
the transport or high-level HTTP client.
Closes #27763
This is a followup to #31537. It makes a number of changes requested by
a review that came after the PR was merged. These are mostly cleanups
and doc improvements.
Fixes 2 issues that together cause errors during index creation
with geo_shapes that use the term strategy. The term strategy changes
the default for points_only parameter, but this wasn't taken into
account during serialization. So, setting the term strategy would add
`"points_only": true` to serialization. At the same time if the term
strategy was used, the `points_only` setting would also not be marked as
a processed element during parsing, which would cause index creation to
fail with the error: `Mapping definition for [location] has unsupported
parameters: [points_only : true]`.
Fixes #31707
Removes support for storing scripts without the usual JSON around the
script. So you can no longer do:
```
POST _scripts/<templatename>
{
"query": {
"match": {
"title": "{{query_string}}"
}
}
}
```
and must instead do:
```
POST _scripts/<templatename>
{
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"title": "{{query_string}}"
}
}
}
}
}
```
This improves error reporting when you attempt to store a script but don't
quite get the syntax right. Before, there was a good chance that we'd
think of it as a "raw" template and just store it. Now we won't do that.
Nice.
Today TransportService is tightly coupled with Transport since it
requires an instance of TransportService in order to receive responses
and send requests. This is mainly due to the Request and Response handlers
being maintained in TransportService but also because of the lack of a proper
callback interface.
This change moves request handler registry and response handler registration into
Transport and adds all necessary methods to `TransportConnectionListener` in order
to remove the `TransportService` dependency from `Transport`
Transport now accepts one or more `TransportConnectionListener` instances that are
executed sequentially in a blocking fashion.
The Rectangle constructor validates bounds before coerce has a chance
to normalize coordinates, so it cannot be used as intermediate storage.
This commit removes the Rectangle as intermediate storage for the
bounding box coordinates.
Fixes #31718
AWS supports the creation and use of credentials that are only valid for a
fixed period of time. These credentials comprise three parts: the usual access
key and secret key, together with a session token. This commit adds support for
these three-part credentials to the EC2 discovery plugin and the S3 repository
plugin.
Note that session tokens are only valid for a limited period of time and yet
there is no mechanism for refreshing or rotating them when they expire without
restarting Elasticsearch. Nonetheless, this feature is already useful for
nodes that need only run for a few days, such as for training, testing or
evaluation. #29135 tracks the work towards allowing these credentials to be
refreshed at runtime.
Resolves #16428
This PR does the server side work for adding the Get Index API to the REST
high-level-client, namely moving resolving default settings to the
transport action. A follow up would be the client side changes.
So far the in-flight request circuit breaker has only accounted for the
on-the-wire representation of a request. However, we convert the raw
request into XContent internally which increases the overhead.
Therefore, we increase the value of the corresponding setting
`network.breaker.inflight_requests.overhead` from one to two. While this
value is still rather conservative (we assume that the representation as
structured objects has no overhead compared to the byte[]), it is closer
to reality than the current value.
Relates #31613
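Assuming the breaker settings remain dynamically updatable, users who want the previous behaviour back could lower the overhead again, e.g.:
```
PUT /_cluster/settings
{
  "persistent": {
    "network.breaker.inflight_requests.overhead": 1.0
  }
}
```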
`MemoryCircuitBreakerTests` conflates two test aspects: It tests
individual circuit breakers as well as the circuit breaker hierarchy.
With this commit we split those two aspects into two test classes:
* Tests for individual circuit breakers stay in the current class
* Other tests are moved to `HierarchyCircuitBreakerServiceTests`
Adds a new parameter to the BlobContainer#write*Blob methods to specify whether the existing file
should be overridden or not. For some metadata files in the repository, we actually want to replace
the current file. This is currently implemented through an explicit blob delete and then a fresh write.
In case of using a cloud provider (S3, GCS, Azure), this results in 2 API requests instead of just 1.
This change will therefore allow us to achieve the same functionality using less API requests.