The plugin cli currently resides inside the elasticsearch jar. This
commit moves it into a plugin-cli jar. This change alone is a no-op;
it does not change anything about what is loaded at runtime. But it will
allow easier testing (with fixtures in the future to test ES or maven
installation), as well as eventually not loading these classes when
starting elasticsearch.
This commit adds an XContentParserUtils.parseTypedKeysObject() method
that can be used to parse named XContent objects identified by a field
name containing a type identifier, a delimiter and the name of the
object to parse.
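For illustration, a minimal sketch of the typed-keys naming convention this method handles (the field name, delimiter, and splitting code here are illustrative; only `parseTypedKeysObject()` itself is from this commit):
````
// Hypothetical illustration: a response field named "sterms#my_terms"
// carries the object type ("sterms"), a delimiter ("#"), and the object
// name ("my_terms"). parseTypedKeysObject() splits the field name this way
// and dispatches to the named XContent parser registered for the type.
public final class TypedKeysExample {
    public static void main(String[] args) {
        String fieldName = "sterms#my_terms";
        String delimiter = "#";
        int pos = fieldName.indexOf(delimiter);
        String type = fieldName.substring(0, pos);                   // "sterms"
        String name = fieldName.substring(pos + delimiter.length()); // "my_terms"
        System.out.println(type + " -> " + name);
    }
}
````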
The MultiBucketsAggregation.Bucket interface extends Writeable, forcing
all implementation classes to implement writeTo(). This commit removes
the Writeable from the interface and moves it down to the InternalBucket
implementation.
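A condensed before/after sketch (the `Writeable` and `Bucket` shapes below are simplified stand-ins for the real interfaces):
````
// Simplified stand-in for org.elasticsearch.common.io.stream.Writeable.
interface Writeable {
    void writeTo(Object out);
}

// Before: `interface Bucket extends Writeable` forced every implementation,
// even client-side ones that never cross the wire, to implement writeTo().
// After: the interface carries no serialization contract...
interface Bucket {
    String getKeyAsString();
}

// ...and only the internal server-side base class keeps it.
abstract class InternalBucket implements Bucket, Writeable {
}
````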
Changes in #24102 exposed the following oddity: PrioritizedEsThreadPoolExecutor.getPending() can return Pending entries where pending.task == null. This can happen, for example, when tasks are added to the pending list while they are in the clean-up phase, i.e. TieBreakingPrioritizedRunnable#runAndClean has already run but afterExecute has not yet removed the task. Instead of safeguarding consumers of the API (as was done before #24102), this changes the executor to not count these tasks as pending at all.
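A conceptual, self-contained sketch of the new counting rule (the queue entry type is a simplification; in the real executor the wrapped runnable itself holds the task):
````
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

final class PendingCountExample {
    record Pending(Runnable task, int insertionOrder) {}

    static List<Pending> getPending(List<AtomicReference<Runnable>> queue) {
        List<Pending> pending = new ArrayList<>();
        int order = 0;
        for (AtomicReference<Runnable> entry : queue) {
            Runnable task = entry.get();
            order++;
            if (task == null) {
                // runAndClean has already run for this entry; it is in its
                // clean-up phase and no longer counts as pending.
                continue;
            }
            pending.add(new Pending(task, order));
        }
        return pending;
    }
}
````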
Currently we don't test for count = 0, which will make a difference when adding
parsing tests for the high level rest client. Also, min/max/sum should be
tested with negative values and over a larger range.
The addition of the normalization feature on keywords slowed down the parsing
of large `terms` queries since all terms now have to go through normalization.
However this can be avoided in the default case where the analyzer is a
`keyword` analyzer, since all that normalization will do is a UTF-8 conversion.
Using `Analyzer.normalize` for that is a bit overkill and can be skipped.
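A rough sketch of the shortcut (the analyzer-name check and method shape are hypothetical); with the `keyword` analyzer, normalization reduces to encoding the term as UTF-8 bytes:
````
import org.apache.lucene.util.BytesRef;

final class KeywordNormalizationShortcut {
    static BytesRef normalizeTerm(String analyzerName, String term) {
        if ("keyword".equals(analyzerName)) {
            // No char filters and no token filters to apply: the normalized
            // form is just the UTF-8 bytes, so skip Analyzer.normalize().
            return new BytesRef(term);
        }
        // Any other analyzer still needs the full normalization chain.
        throw new UnsupportedOperationException("call Analyzer.normalize() here");
    }

    public static void main(String[] args) {
        System.out.println(normalizeTerm("keyword", "Hello-World").utf8ToString());
    }
}
````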
Add option "enable_position_increments" with default value true.
If option is set to false, indexed value is the number of tokens
(not position increments count)
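A sketch of the difference the option controls, using Lucene's EnglishAnalyzer to stand in for any stop-word-removing analyzer (the mapping option itself lives in the field mapper and is not shown):
````
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class TokenCountModes {
    public static void main(String[] args) throws IOException {
        // EnglishAnalyzer drops the stop word "the", so "quick" is emitted
        // with a position increment of 2 and "fox" with 1.
        Analyzer analyzer = new EnglishAnalyzer();
        int tokens = 0;     // what enable_position_increments=false counts: 2
        int positions = 0;  // what the default (true) counts: 3
        try (TokenStream ts = analyzer.tokenStream("field", "the quick fox")) {
            PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                tokens++;
                positions += posInc.getPositionIncrement();
            }
            ts.end();
        }
        System.out.println("tokens=" + tokens + ", positions=" + positions);
    }
}
````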
Currently any `query_string` query that uses a wildcard field with no matching field is rewritten against the `_all` field.
For instance:
````
# creating test doc
PUT testing/t/1
{
  "test": {
    "field_one": "hello",
    "field_two": "world"
  }
}

# searching abc.* (does not exist) -> hit
GET testing/t/_search
{
  "query": {
    "query_string": {
      "fields": [
        "abc.*"
      ],
      "query": "hello"
    }
  }
}
````
This bug first appeared in 5.0 after the query refactoring and impacts only users that use `_all` as default field.
Indices created in 6.x will not have this problem since `_all` is deactivated in this version.
This change fixes this bug by returning a MatchNoDocsQuery for any term that expands to an empty list of fields.
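A simplified sketch of the fix (the method shape and parameters are hypothetical; `MatchNoDocsQuery` is the actual query returned):
````
import java.util.List;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

final class QueryStringWildcardFix {
    // If the wildcard pattern (e.g. "abc.*") expanded to no concrete field,
    // match nothing instead of silently falling back to `_all`.
    static Query rewrite(List<String> expandedFields, Query fieldQuery) {
        if (expandedFields.isEmpty()) {
            return new MatchNoDocsQuery("no field matched the wildcard pattern");
        }
        return fieldQuery;
    }

    public static void main(String[] args) {
        System.out.println(rewrite(List.of(), null));
    }
}
````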
Some of the base methods that don't have to do with the reduce phase and serialization can be moved to the base class, which is no longer an interface. This will be reusable by the high level REST client further down the road. Also, it simplifies things, as having an interface with a single implementor is not that helpful.
This commit rewords the note on whitespace in Log4j settings to not
refer to only one of the examples on the page, but instead be clear that the
note applies to all the examples on the page.
A confusing thing that can happen when configuring Log4j is that
extraneous whitespace throws off its configuration parsing yet the error
messages that arise give no indication that this is the problem. This
commit adds a note to the docs.
Relates #24198
Start moving built in analysis components into the new analysis-common
module. The goals of this project are:
1. Remove core's dependency on lucene-analyzers-common.jar which should
shrink the dependencies for transport client and high level rest client.
2. Prove that analysis plugins can do all the "built in" things by moving all
"built in" behavior to a plugin.
3. Force tests not to depend on any oddball analyzer behavior. If tests
need anything more than the standard analyzer they can use the mock
analyzer provided by Lucene's test infrastructure.
This commit removes the deprecated cloud.aws.* settings. It also removes
backcompat for specifying `discovery.type: ec2`, and unused aws signer
code which was removed in a previous PR.
This commit adds a call to jstack to see where each node is stuck when
starting up, if a timeout occurs. This also decreases the timeout back
to 30 seconds.
This leniency was left in after plugin installer refactoring for 2.0
because some tests still relied on it. However, the need for this
leniency no longer exists.
Unlike other implementations of InternalNumericMetricsAggregation.SingleValue,
the InternalBucketMetricValue aggregation currently doesn't implement a
specialized interface that exposes the `keys()` method. This change adds one so
that clients can access the keys via the interface.
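A minimal sketch of the shape of the addition (the hierarchy is simplified and the interface names abbreviated):
````
// Expose keys() on an interface so callers no longer need to cast to the
// concrete InternalBucketMetricValue class.
interface SingleValue {
    double value();
}

interface BucketMetricValue extends SingleValue {
    String[] keys();
}
````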
This change adds an index setting to define how the documents should be sorted inside each Segment.
It allows any numeric, date, boolean or keyword field inside a mapping to be used to sort the index on disk.
`nested` fields are not allowed inside an index that defines an index sort, since `nested` fields rely on the original sort order of the index.
This change does not add early termination capabilities in the search layer; those will be added in a follow-up.
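A sketch of the setting in use, assuming the keys `index.sort.field` and `index.sort.order` ("timestamp" is just an example field):
````
import org.elasticsearch.common.settings.Settings;

public final class IndexSortSettingsExample {
    public static void main(String[] args) {
        // Sort every segment of the index by "timestamp", descending.
        Settings indexSettings = Settings.builder()
            .put("index.sort.field", "timestamp")
            .put("index.sort.order", "desc")
            .build();
        System.out.println(indexSettings);
    }
}
````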
Relates #6720
Elasticsearch runs as user elasticsearch with uid:gid 1000:1000 inside
the Docker container. Clarify that bind-mounted local directories need
to be accessible by this user.
Relates #24092
* Replicate write failures
Currently, when a primary write operation fails after generating
a sequence number, the failure is not communicated to the replicas.
Ideally, every operation which generates a sequence number on the primary
should be recorded in all replicas.
In this change, a sequence number is associated with write operation
failure. When a failure with an assigned sequence number arrives at a
replica, the failure cause and sequence number are recorded in the translog
and the sequence number is marked as completed via executing `Engine.noOp`
on the replica engine.
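A conceptual, self-contained sketch (the types here are hypothetical) of what a replica now records for a sequenced primary failure:
````
final class ReplicaFailureNoOpExample {
    record TranslogOp(long seqNo, String kind, String reason) {}

    // On the replica, a primary-side failure that already consumed a sequence
    // number becomes a no-op: the cause goes into the translog and the seq no
    // is marked completed (the real code does this via Engine.noOp).
    static TranslogOp onReplica(long seqNo, Exception primaryFailure) {
        if (primaryFailure != null) {
            return new TranslogOp(seqNo, "no-op", primaryFailure.getMessage());
        }
        return new TranslogOp(seqNo, "index", null);
    }
}
````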
* use zlong to serialize seq_no
* Incorporate feedback
* track write failures in translog as a noop in primary
* Add tests for replicating write failures.
Test that document failures (w/ seq no generated) are recorded
as no-ops in the translog for primary and replica shards
* Update to master
* update shouldExecuteOnReplica comment
* rename indexshard noop to markSeqNoAsNoOp
* remove redundant conditional
* Consolidate the possible replica action for a bulk item request
depending on its primary execution
* remove bulk shard result abstraction
* fix failure handling logic for bwc
* add more tests
* minor fix
* cleanup
* incorporate feedback
* incorporate feedback
* add assert to remove handling noop primary response when 5.0 nodes are not supported
`script_stack` is super useful when debugging Painless scripts
because it skips all the "weird" stuff that obfuscates where the
actual error is: Painless's internals and call site bootstrapping.
It works fine, but it didn't have many tests. This converts a
test that we had for line numbers into a test for the
`script_stack`. The line numbers test was an indirect test
for `script_stack`.
This change simplifies how the rest test runner finds test files and
removes all leniency. Previously multiple prefixes and suffixes would
be tried, and tests could exist inside or outside of the classpath,
although outside of the classpath never quite worked. Now only classpath
tests are supported, and only one resource prefix is supported,
`/rest-api-spec/tests`.
closes #20240
The `maxUnsafeAutoIdTimestamp` timestamp is a safety marker guaranteeing that no retried-indexing operation with a higher auto gen id timestamp was processed by the engine. This allows us to safely process documents without checking if they were seen before.
Currently this property is maintained in memory and is handed off from the primary to any replica during the recovery process.
This commit takes a more natural approach and stores it in the Lucene commit, using the same semantics (no retry op with a higher timestamp is part of this commit). This means that the knowledge is transferred during the file copy and also means that we don't need to worry about crazy situations where an original append-only request arrives at the engine after a retry was processed *and* the engine was restarted.
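A sketch of carrying such a marker in the Lucene commit user data (the key name here is illustrative):
````
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.IndexWriter;

final class CommitMarkerExample {
    // Store the marker alongside the segment files so it is transferred with
    // them during file-based recovery.
    static void commitWithMarker(IndexWriter writer, long maxUnsafeAutoIdTimestamp) throws Exception {
        Map<String, String> commitData = new HashMap<>();
        commitData.put("max_unsafe_auto_id_timestamp", Long.toString(maxUnsafeAutoIdTimestamp));
        writer.setLiveCommitData(commitData.entrySet());
        writer.commit();
    }
}
````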
Today when we merge hits we have a hard check to prevent AIOOB exceptions
that simply skips an expected search hit. This can only happen if there is a
bug in the code, which should be turned into a hard exception or a triggered
assertion. This change adds an assertion and removes the lenient check for the
fetched hits.
Some aggregations (like Min, Max etc) use a wrong DocValueFormat in
tests (like IP or GeoHash). We should not test aggregations that expect
a numeric value with a DocValueFormat like IP. Such a wrong DocValueFormat
can also prevent the aggregation from being rendered as ToXContent, and this
will be an issue for the High Level Rest Client tests, which expect to be
able to parse back aggregations.