Whether or not the stacktrace is displayed is controlled by the bootstrap
log level setting, so that `bootstrap: DEBUG` displays the stack trace on
output, like it does in the log.
Closes#5102
After upgrading to 1.1.0, sending null values to geo points produces the following error:
```
MapperParsingException[failed to parse]; nested: ElasticsearchParseException[geo_point expected];
```
Closes#5680.
Closes#5681.
This test initially had three purposes:
- duels between equivalent aggregations on significant amounts of data,
- make sure that array growth works (when the number of buckets grows larger
than the initial number of buckets),
- make sure that page recycling works correctly.
Because of the last point, it needed large numbers of docs/terms since page
recycling only kicks in on arrays of more than 16KB. However, since then, we
added a MockPageCacheRecycler to track allocation/release of BigArrays and make
sure that all arrays get released, so we can now lower these numbers of docs/
terms to just make sure that array growth is triggered.
Those tests use RandomIW which can flush after each document taking forever
to index the 20k docs it was indexing. This commit makes sure the IW is
sane and the number of docs is not too crazy.
The default precision was way too exact and could lead people to
think that geo context suggestions are not working. This patch now
requires you to set the precision in the mapping, as elasticsearch itself
can never tell exactly what the required precision for the user's
suggestions is.
Closes#5621
There is no explicit `flush`/`execute` method in the `BulkProcessor` class, although one would be useful in certain scenarios.
Currently one has to close and create a new BulkProcessor to get an immediate flush.
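A hedged sketch of how such a method could be used once added (the `flush()` name follows this request and is not yet a confirmed API; `client` and `listener` are assumed to exist):
```java
// Illustrative usage only; flush() is the method proposed here.
BulkProcessor processor = BulkProcessor.builder(client, listener)
        .setBulkActions(100)
        .setConcurrentRequests(1)
        .build();

processor.add(new IndexRequest("index", "type", "1").source("{\"user\":\"kimchy\"}"));
processor.flush(); // send whatever is queued right away, without closing the processor
```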
Closes#5575.
Closes#5570.
Instead the default implementation is used, but on top of empty
(Bytes|Long|Double|GeoPoint)Values. This makes sure there is no
inconsistency between documents depending on whether other documents in the
segment have values or not.
Close#5646
When a bulk request triggers an index creation a mapping might not be
created. The reason is that when indexing documents in a bulk,
an indexing operation might fail due to a shard not yet being
started. The mapping service, however, might already
have the mapping but the mapping update is never issued to the master,
even on subsequent indexing of documents.
Instead, the mapping must be propagated to master even if the
indexing fails due to a shard not being started.
closes#5623
A frequency caching terms enum that can also be configured with an optional filter. To be used by both significant terms and the phrase suggester.
This change extracts the frequency caching into shared code, and allows adding a filter in the future to control/customize the background frequencies.
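As a rough illustration (class and field names here are invented, not the actual implementation), caching frequencies on top of a Lucene `TermsEnum` could look like this:
```java
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Cache docFreq lookups so repeated terms are not re-read from the index.
final class CachedFrequencies {
    private final TermsEnum termsEnum;
    private final Map<BytesRef, Integer> docFreqs = new HashMap<>();

    CachedFrequencies(TermsEnum termsEnum) {
        this.termsEnum = termsEnum;
    }

    int docFreq(BytesRef term) throws IOException {
        Integer cached = docFreqs.get(term);
        if (cached != null) {
            return cached;
        }
        int freq = termsEnum.seekExact(term) ? termsEnum.docFreq() : 0;
        // deep-copy the key because callers typically reuse the BytesRef instance
        docFreqs.put(BytesRef.deepCopyOf(term), freq);
        return freq;
    }
}
```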
Closes#5597
Moved the circuit breaker memory reducing logic to the IndicesFieldDataCacheListener, so it always reduces the memory used when field data gets unloaded.
This fixes an issue where the circuit breaker didn't get reduced when segments for which no shardId could be resolved were cleared up.
Also made sure that exceptions in the percolator service are bubbled up properly.
Closes#5588
* Removed XTermsFilter fixed in LUCENE-5502
* Switched back to automaton queries that caused failures due to LUCENE-5532
* Fixed Highlight test that has different results due to LUCENE-5538
Currently we throw a misleading exception in acquireSearcher
if we try to acquire while we are failing the engine. We should
throw an EngineClosedException instead.
Relates to #5633
When refresh failed, it would fail due to a serious issue in the shard (mainly a corrupted index). Fail the engine in that case, which will cause the shard to fail. The reason why this is not done only on CorruptedIndex failures is that any type of failure seems relevant enough here to fail the shard.
closes#5633
Update RelocationTests to use the above and unify testPrimaryRelocationWhileIndexing & testReplicaRelocationWhileIndexingRandom into a single randomized test.
This commit speeds up aggregations tests dramatically since they all
rely on the same index structure. Here we can save a large amount of time
by not recreating the index for each small test.
All the ordinary test operations happen based on the ImmutableTestCluster base class and are executed via transport client. Will be used especially for the REST tests once migrated to the standard randomized runner.
Added new httpAddresses method to ImmutableTestCluster to be able to retrieve the http addresses to connect to for the REST tests. Both versions will look inside the cluster to figure out which nodes are available for http calls and their addresses.
The external cluster is used as global cluster if the tests.cluster system property is available. The property needs to contain a comma separated list of available elasticsearch nodes that will be used to connect to the cluster (e.g. localhost:9300,localhost:9301).
Only a subset of the integration tests can currently be run successfully against the external cluster, more precisely the ones that don't modify the cluster layout (they don't require cluster() functionalities but rely only on immutableCluster()). Also, at least two data nodes are required, otherwise the ensureGreen calls cannot succeed.
Closes#5630
The way that SearchServiceTransportAction executes actions on a local node
today would propagate exceptions thrown in onResult to onFailure.
Close#5629
An infrastructure that allows simulating different network topology failures, including 2 basic ones: failure to send requests, and unresponsive nodes.
closes#5631
Today the clusterstate is not associated with the cluster_name, which is odd
since it's pretty much its only valid identifier. Any node can send a cluster
state to another node today even if it's not the same cluster. These situations
can happen rarely during tests or even in the wild by accident.
- removed an abstraction layer that handles the values source (consolidated values source with field data source)
- better handling of value parser/formatter in range & histogram aggs
- the buckets key will now be shown by default in range agg
simplify rest response class hierarchy, by using BytesReference for content, and handling JSONP internally in the respective channel that sends the response.
Also, handle the future case where bytes might be releasable, when we start to potentially recycle bytes output stream.
A bunch of minor fixes have been included here, especially due
to wrongly parsed mappings. Also using assertions resulted in an
NPE because they were disabled in the distribution.
Closes#5525
The new base class contains all the immutable methods for a cluster, which will be extended by the future ExternalCluster impl that relies on an external cluster and won't be adding nodes etc. but only sending requests to an existing cluster whose layout never changes
Closes#5620
Before a bulk request is executed, missing indices are being created by default.
If this fails, the whole request is failed.
This patch changes the behaviour to not fail the whole request, but rather all
requests using the index where the creation has failed.
Closes#4987
If a node fails to forward a master node operation to the current master, it will schedule a retry using a listener for cluster state changes. Once the listener is in place (and future changes are guaranteed to be observed) it will double check nothing has changed during the addition of the listener. This check had previously been changed to use cluster state versions (see: #5499). This is however not a reliable solution as master elections (which change the master) do not increment the cluster state version and thus could be missed. This commit changes the check to use reference equality, making it stricter.
Closes#5548
Although high-severity bugs were mostly static variables that were not final,
it also found potential NPEs and bugs like `x ^= x;` or equality checks on
objects that don't share a common interface.
Close#5571
For around 10% of the calls that support the GET method, the request body is now sent as the `source` parameter with the GET method (no randomized method in this case).
This way we can find bugs like #5556 (missing support for source param).
The `field_value_factor` function uses the value of a field in the
document to influence the score.
A query that looks like:
```
{
    "query": {
        "function_score": {
            "query": {"match": { "body": "foo" }},
            "functions": [
                {
                    "field_value_factor": {
                        "field": "popularity",
                        "factor": 1.1,
                        "modifier": "square"
                    }
                }
            ],
            "score_mode": "max",
            "boost_mode": "sum"
        }
    }
}
```
Would have the score modified by:
square(1.1 * doc['popularity'].value)
Closes#5519
Allow configuring on the index level which blocks can optionally be applied, using the tribe.blocks.indices prefix settings.
Allow controlling what will be done when a conflict is detected on index names coming from several clusters, using the tribe.on_conflict setting. The default remains "any", but "drop" and "prefer_[tribeName]" are now also supported.
closes#5501
We have had a couple of bugs because of the use of these methods without paying
attention to the fact that they might return a negative value when provided with MIN_VALUE.
There is one common and legitimate usage of this method in order to perform
a modulo operation which would always return a positive number. This use-case
has been extracted to MathUtils.mod.
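For illustration, a minimal sketch of what such a positive-modulo helper can look like (the actual MathUtils.mod implementation may differ):
```java
public final class MathUtils {

    private MathUtils() {}

    /**
     * Modulo that always returns a value in [0, m), even for negative v
     * or Integer.MIN_VALUE, unlike Math.abs(v) % m.
     */
    public static int mod(int v, int m) {
        int r = v % m;
        return r < 0 ? r + m : r;
    }
}
```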
Close#5562
* Only run the test with one node, to make sure only a single purger thread is running.
* Instead of generating the _timestamp upon indexing in ES, define the _timestamp in the documents.
* Verify the responses from the index requests.
This is the first step to make it possible to have a different impl of TestCluster (e.g. based on an external cluster) that has the same methods but a different impl for them (e.g. it might use the REST API to do the same instead of the Java API)
Closes#5542
The current implementations of BytesReference.Helper.bytesEquals and
bytesHashCode materialize a byte[] when passed an instance that returns `false`
in `hasArray()`. They should instead do a byte-by-byte comparison/hashcode
computation without materializing the array.
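For instance, a minimal sketch of such a comparison (simplified, not the actual helper code; it assumes the `length()` and `get(int)` accessors on BytesReference):
```java
static boolean bytesEqual(BytesReference a, BytesReference b) {
    if (a.length() != b.length()) {
        return false;
    }
    for (int i = 0; i < a.length(); i++) {
        // compare in place instead of materializing a byte[]
        if (a.get(i) != b.get(i)) {
            return false;
        }
    }
    return true;
}
```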
Close#5517
TransportMasterNodeOperationAction forwards incoming requests to the currently known master node. If that fails due to a connection error, a cluster state listener will be added in order to try again when a new master is elected. After the listener is in place, a check is made to see if the master has changed *while* the listener was being added, so that change will not be missed. The check was not enough as the same master may have been re-elected (for example, after a network hiccup), so the check would miss the re-election event. In these cases, the request would time out unjustly. This commit changes this check to be more strict and retry if the cluster state version changed during the addition of the listener.
Closes#5499
When updating the min_master_nodes setting via the Cluster Settings API, the change is propagated to all nodes. The current master node also updates the ElectMasterService and validates that it still sees enough master eligible nodes and that its election is still valid. Other master eligible nodes do not go through this validation (good) but also didn't update the ElectMasterService with the new settings. The result is that if the current master goes away, the next election will not be done with the latest setting.
Note - min_master_nodes set in the elasticsearch.yml file is processed correctly
Closes#5494
Currently we close the store and therefore the underlying directory
when the engine / shard is closed, ie. during relocation etc. We also
just close it while there are still searches going on and/or we are
recovering from it. The recoveries might fail, which is ok, but searches
etc. will still be running, like pending fetch phases.
The contract of the Directory doesn't prevent reading from a stream
that was already opened before the Directory was closed, but from a
system boundary perspective and from the lifecycles that we test, it seems
to be the right thing to do to wait until all resources are released.
Additionally it will also help to make sure everything is closed
properly before the directories themselves are closed.
Note: this commit adds Object#wait & Object#notify/notifyAll to the forbidden APIs
Closes#5432
The default mustache engine was using HTML escaping which breaks queries
if used with JSON etc. This commit adds escaping for:
```
\b Backspace (ascii code 08)
\f Form feed (ascii code 0C)
\n New line
\r Carriage return
\t Tab
\v Vertical tab
\" Double quote
\\ Backslash
```
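A rough sketch of the kind of escaping involved (illustrative only, not the actual escaper wired into the mustache engine):
```java
static String escapeJson(String input) {
    StringBuilder sb = new StringBuilder(input.length());
    for (int i = 0; i < input.length(); i++) {
        char c = input.charAt(i);
        switch (c) {
            case '\b':     sb.append("\\b");  break; // backspace
            case '\f':     sb.append("\\f");  break; // form feed
            case '\n':     sb.append("\\n");  break; // new line
            case '\r':     sb.append("\\r");  break; // carriage return
            case '\t':     sb.append("\\t");  break; // tab
            case '\u000B': sb.append("\\v");  break; // vertical tab
            case '"':      sb.append("\\\""); break; // double quote
            case '\\':     sb.append("\\\\"); break; // backslash
            default:       sb.append(c);
        }
    }
    return sb.toString();
}
```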
Closes#5473
Adds a new API endpoint at /_recovery as well as to the Java API. The
recovery API allows one to see the recovery status of all shards in the
cluster. It will report on percent complete, recovery type, and which
files are copied.
Closes#4637
By default the date_/histogram returns all the buckets within the range of the data itself, that is, the documents with the smallest values (on the field on which the histogram is computed) will determine the min bucket (the bucket with the smallest key) and the documents with the highest values will determine the max bucket (the bucket with the highest key). Often, when requesting empty buckets (min_doc_count : 0), this causes a confusion, specifically, when the data is also filtered.
To understand why, let's look at an example:
Let's say you're filtering your request to get all docs from the last month, and in the date_histogram aggs you'd like to slice the data per day. You also specify min_doc_count:0 so that you'd still get empty buckets for those days to which no document belongs. By default, if the first document that falls in this last month also happens to fall on the first day of the **second week** of the month, the date_histogram will **not** return empty buckets for all those days prior to that second week. The reason for that is that by default the histogram aggregations only start building buckets when they encounter documents (hence, missing on all the days of the first week in our example).
With extended_bounds, you now can "force" the histogram aggregations to start building buckets on a specific min values and also keep on building buckets up to a max value (even if there are no documents anymore). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if the min_doc_count is greater than 0).
Note that (as the name suggests) extended_bounds is **not** filtering buckets. Meaning, if the min bounds is higher than the values extracted from the documents, the documents will still dictate what the min bucket will be (and the same goes for extended_bounds.max and the max bucket). For filtering buckets, one should nest the histogram agg under a range filter agg with the appropriate min/max.
Closes#5224
The test currently uses ensureGreen to guarantee all shard allocations have happened. This only guarantees it from the cluster perspective and in some cases the nodes are not fast enough to implement the changes (which is what the test is about).
Default precision was computed based on the number of MULTI_BUCKET parents
instead of PER_BUCKET.
The ordinals-based execution mode was almost always used although ordinals
might have non-negligible memory usage compared to the counters.
Close#5452
* moved `geo_point` parsing to GeoUtils
* cleaned up `gzipped.json` for bulktest
* merged `GeoPointFieldMapper` and `GeoPoint` parsing methods
Closes#5390
Instead of using byte arrays, pass the BytesReference to the actual translog file, and use the new copyTo(channel) method to write. This improves things by not potentially having to convert the data to a byte array.
closes#5463
Today, we use ConcurrentMergeScheduler, and this can be painful since it is concurrent on a shard level, with a max of 3 threads doing concurrent merges. If there are several shards being indexed, then there will be a minor explosion of threads trying to do merges, all being throttled by our merge throttling.
Moving to serial merge scheduler will still maintain concurrency of merges across shards, as we have the merge thread pool that schedules those merges. It will just be a serial one on a specific shard.
Also, on serial merge scheduler, we now have a limit of how many merges it will do at one go, so it will let other shards get their fair chance of merging. We use the pending merges on IW to check if merges are needed or not for it.
Note, that if a merge is happening, it will not block due to a sync on the maybeMerge call at indexing (flush) time, since we wrap our merge scheduler with the EnabledMergeScheduler, where maybeMerge is not activated during indexing, only with explicit calls to IW#maybeMerge (see Merges).
closes#5447
Two changes:
1. In the StupidBackoffScorer only look for the trigram if there is a bigram.
2. Cache the frequencies in WordScorer so we don't look them up again and
again and again. This is implemented by wrapping the TermsEnum in a special
purpose wrapper that really only works in context of the WordScorer.
This provides a pretty substantial speedup when there are many candidates.
Closes#5395
* Index one at a time only rarely if doing more than 300.
* When launching async actions, take some care to make sure you don't already
have more than 150 other async actions in flight.
* When indexing in bulk split into chunks of 1000 documents.
As seen during CI tests, it can happen that the download service is not available for some reason.
This fix in the tests will check, before each test which requires internet access (annotated with @Network), that the download service we are testing is still working.
It won't fail the test but will mark the test as `Ignored` in case of failure.
if the async sendPingsHandler throws an unexpected exception the
corresponding latch is never counted down. This might only happen
during node shutdown but can still cause starvation or test failures.
If we want to have a full picture of versions running in a cluster, we need to add a `_cat/plugins` endpoint.
Response could look like:
```sh
% curl es2:9200/_cat/plugins?v
node component version type url desc
es1 mapper-attachments 1.7.0 j Adds the attachment type allowing to parse difference attachment formats
es1 lang-javascript 1.4.0 j JavaScript plugin allowing to add javascript scripting support
es1 analysis-smartcn 1.9.0 j Smart Chinese analysis support
es1 marvel 1.1.0 j/s http://localhost:9200/_plugins/marvel Elasticsearch Management & Monitoring
es1 kopf 0.5.3 s http://localhost:9200/_plugins/kopf kopf - simple web administration tool for ElasticSearch
es2 mapper-attachments 2.0.0.RC1 j Adds the attachment type allowing to parse difference attachment formats
es2 lang-javascript 2.0.0.RC1 j JavaScript plugin allowing to add javascript scripting support
es2 analysis-smartcn 2.0.0.RC1 j Smart Chinese analysis support
```
Closes#4824.
Fixed minor logging discrepancies introduced with randomized shard count.
Added logging to recoverWhileUnderLoadWithNodeShutdown
Added logging to ElasticsearchIntegrationTest.allowNodes to indicate which nodes were excluded.
recoverWhileRelocating's shard settings were potentially ignored (depending on key order in hashmaps)
When you set a BulkProcessor with a bulk actions size of 100, it executes the bulk after 101 documents.
```java
BulkProcessor.builder(client(), listener).setBulkActions(100).setConcurrentRequests(1).setName("foo").build();
```
Same for size. If you set the bulk size to 1024 bytes, it will actually execute the bulk after 1025 bytes.
This patch fixes it.
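Illustratively (not the exact internal code), the fix boils down to triggering execution when the configured limit is reached rather than only after it is exceeded:
```java
// before (illustrative): only executes once the configured count is exceeded, i.e. at 101 docs
if (bulkActions != -1 && bulkRequest.numberOfActions() > bulkActions) {
    execute();
}

// after (illustrative): executes as soon as the configured count is reached, i.e. at 100 docs
if (bulkActions != -1 && bulkRequest.numberOfActions() >= bulkActions) {
    execute();
}
```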
Closes#4265.
We have the default QueryWrapperFilter as well as our custom one while
our wrapper is explicitly marked as no_cache such that it will never
be included in a cache. This was not consistently used and caused several
problems during tests where p/c related queries were used as filters
and ended up in the cache. This commit adds the QueryWrapperFilter
ctor to the forbidden APIs to enforce the query instance checks.
Some mappers do not support having externalValue() set, so plugin developers can't use it while building their own mappers.
Support added in this PR for:
* `BinaryFieldMapper`
* `BooleanFieldMapper`
* `GeoPointFieldMapper`
* `GeoShapeFieldMapper`
Closes#4986.
Relates to #4154.
Due to bugs in the jvm (specifically on OSX), running zen discovery tests causes a "socket close" failure on receive on the multicast socket, and under some jvm versions, even crashes. This happens because of the creation of multiple multicast sockets within the same VM. In practice, in our tests, we use the same settings, so we can share the same multicast socket across multiple channels.
This change creates an abstraction called MulticastChannel, that can be shared, with ref counting. Today, the shared option is only enabled under OSX.
closes#5410
A missing call to ArrayUtil.grow prevented the array that stores the values
from growing in case the number of values returned by the script was higher
than the original size of the array.
Close#5414
As seen during CI tests, it can happen that the download service is not available for some reason.
This fix in the tests will check, before each test which requires internet access (annotated with @Network), that the download service we are testing is still working.
It won't fail the test but will mark the test as `Ignored` in case of failure.
Note that the standard `atLeast` implementation now has Integer.MAX_VALUE as its upper bound, thus it behaves differently from what we expect in our tests, as we never expect the upper bound to be that high.
Added our own `atLeast` to `AbstractRandomizedTest` so that it has the expected behaviour with a reasonable upper bound.
See https://github.com/carrotsearch/randomizedtesting/issues/131
Significance is related to the changes in document frequency observed between everyday use in the corpus and
frequency observed in the result set. The asciidocs include extensive details on the applications of this feature.
Closes#5146
Lucene's RamUsageEstimator.sizeOf(Object) is too expensive.
Query size estimation will be enabled when a cheaper way of query size estimation can be found.
Closes#5372
Relates to #5339
During snapshot finalization the snapshot file is getting overwritten. If we try to read the snapshot file at this moment we can get back an empty or incomplete snapshot. This change adds a retry mechanism in case of such failure.
The test was shutting down nodes even if some of the indices had only a
single shard. This meant we basically had no active shard that
could serve the docs and caused random failures. This commit fixes the
test to rather allocate enough shards such that we never need to resize
the cluster, which also makes the test faster.
This aggregation computes unique term counts using the hyperloglog++ algorithm
which uses linear counting to estimate low cardinalities and hyperloglog on
higher cardinalities.
Since this algorithm works on hashes, it is useful for high-cardinality fields
to store the hash of values directly in the index, which is the purpose of
the new `murmur3` field type. This is less necessary on low-cardinality
string fields because the aggregator is smart enough to only compute the hash
once per unique value per segment thanks to ordinals, or on numeric fields
since hashing them is very fast.
Close#5426
Aggregations were still using CacheRecycler on the reduce phase. They are now
using page-based recycling for both the aggregation phase and the reduce phase.
Close#4929
Due to a regression, edit distances > 2 threw exceptions after unifying
the fuzziness factor in Elasticsearch `1.0`. This commit brings back the
expected behavior.
Closes#5292
Introduced two levels of randomization for the number of replicas when running tests:
1) through the existing random index template, which now sets a random number of replicas that can either be 0 or 1 that is shared across all the indices created in the same test method unless overwritten
2) through createIndex and prepareCreate methods, between 0 and the number of data nodes available, similar to what happens using the indexSettings method, which changes for every createIndex or prepareCreate unless overwritten (overwrites index template for what concerns the number of replicas)
Added the following facilities to deal with the random number of replicas:
- made it possible to retrieve how many data nodes are available in the `TestCluster`
- added common methods similar to indexSettings, to be used in combination with createIndex and prepareCreate method and explicitly control the second level of randomization: numberOfReplicas, minimumNumberOfReplicas and maximumNumberOfReplicas
Tests that specified the number of replicas have been reviewed:
- removed manual replicas randomization where present, replaced with ordinary one that's now available
- adapted tests that didn't need a specific number of replicas to the new random behaviour
- also done some more cleanup, used common methods like assertAcked, ensureGreen, refresh, flush and refreshAndFlush where possible
This commit extends the `CompletionSuggester` with context
information. For example, such context information can
be a simple string representing a category, restricting the
suggestions to this category.
Three base implementations of this context information
have been set up in this commit:
- a Category Context
- a Geo Context
All the mappings for this context information are
specified within a context field of the completion
field that should use this kind of information.
Per single main ping request we maintain the received ping response per node. Each node level ping response is mapped into that. If from a previous node level ping request the response has already been set for a node, it will be overwritten; we give higher value to the latest response. This change makes sure that this doesn't happen if the previous response has a master set and the current response doesn't. Otherwise a node would lose the fact that another node has elected itself as master, and the result would be multiple master nodes in a single cluster.
Closes#5413
It can happen that not all shards are ready, thus we won't have a total failure, but we do need to check that we have at least one failure. Also checked the message of the failure.
If randomization brings up a single shard per index in this test
we might run our searches on only one index, which causes the assertions
to fail afterwards. That's why we need to wait until everything is allocated.