We wait for shards to close in IndicesService for up to 30 seconds.
Yet, if somebody holds on to a store reference, i.e. an open scroll request,
we hit the 30-second timeout and node shutdown takes very long. We should
release all other resources before we shut down IndicesService.
Closes #8940
The setting `mapping.date.round_ceil` (and the undocumented setting
`index.mapping.date.parse_upper_inclusive`) affect how date ranges using
`lte` are parsed. In #8556 the semantics of date rounding were
solidified, eliminating the need for different parsing functions
depending on whether the date is inclusive or exclusive.
This change removes these legacy settings and improves the tests
for the date math parser (now at 100% coverage!). It also removes the
unnecessary function `DateMathParser.parseTimeZone`, since
the existing `DateTimeZone.forID` handles all of its use cases.
Any user previously relying on these settings can refer to the changed
semantics and adjust their queries accordingly. This is a breaking change
because even dates without date math previously used different
parsing functions depending on context.
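For illustration, a minimal sketch of the kind of query affected, using the 1.x Java API (the field name and date value are assumptions, not from the change):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

class DateMathRangeSketch {
    // With the #8556 semantics the same rounding applies in every
    // context: gte rounds down, lte rounds up.
    static RangeQueryBuilder novemberQuery() {
        return QueryBuilders.rangeQuery("timestamp")
                .gte("2014-11||/M")   // rounds down to 2014-11-01T00:00:00.000
                .lte("2014-11||/M");  // rounds up to 2014-11-30T23:59:59.999
    }
}
```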
Closes #8598
Closes #8889
I replaced "high frequent terms" with "high frequency terms" and "low frequent terms" with "low frequency terms".
Alternatively, we could write, "highly frequent terms" and "minimally frequent terms" (or just "rare terms").
Closes #8962
In cases of heavy contention, it's possible for more than 2 threads
to race to a circuit breaking exception.
Essentially this means that if we have 3 threads all trying to add 3 and
simultaneously causing a circuit breaking exception (due to retry), then when
adjusting after circuit breaking we can "rewind" past the value this test
expects the child breaker to be at.
This adds leeway to the check, where it's okay to be within
NUM_THREADS of the parentLimit, because each thread should only add 1
to the breaker at a time.
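A minimal sketch of the relaxed assertion, assuming hypothetical `used` and `parentLimit` values tracked by the test (names mirror the description above, not the exact code):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;

class BreakerLeewaySketch {
    static final int NUM_THREADS = 3;

    // After racing adjustments, the child breaker may legitimately sit
    // up to NUM_THREADS below the parent limit, since each thread adds
    // at most 1 at a time.
    static void checkBreaker(long used, long parentLimit) {
        assertThat(used, greaterThanOrEqualTo(parentLimit - NUM_THREADS));
    }
}
```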
We have only had a single gateway since ES 1.3. There is no need to keep all
these abstractions and nested packages. We can fold most of them into simpler
structures.
IndexEngine was an abstraction where we had index-level engines (instead
of shard-level) that could store meta information about the index. It
was never actually used by Elasticsearch, and was only there for plugins.
This removes it, because it is a confusing and unneeded abstraction;
no plugins should be implementing their own IndexEngines.
When a node fails (or closes), the master processes the network disconnect event and removes the node from the cluster state. If multiple nodes fail (or shut down) in rapid succession, we process the events and remove the nodes one by one. During this process, the intermediate cluster states may cause the node fault detection to signal the failure of nodes that are not yet removed from the cluster state. While this is fine, it currently causes unneeded reroutes and cluster state publishing, which can be cumbersome in big clusters.
Closes #8804
Closes #8933
When we close a node, all pending / active search requests need to be
cleared; otherwise a node will wait up to 30 seconds for shutdown since there
could be open scroll requests. This behavior was introduced in 1.5, so
versions <= 1.4.x are not affected.
Closes #8940
Occasionally the join thread successfully connects to a just-closed node, which causes the subsequent join request to time out. Its default timeout of 60s throws the test off when it waits for a cluster to form.
Calling cache.cleanUp() is kind of like calling System.gc(), meaning
that we should never have (non-test) things that rely on this
functionality.
For the field data and filter cache, we already have a periodic process
that runs this .cleanUp(), so there is no need to block index
closing/clearing on it. Instead, we can clean the field data cache in
InternalTestCluster before we check the circuit breaker.
This can help tests that time out because cleaning the cache is taking
too long.
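For reference, a minimal Guava sketch of why cleanUp() resembles System.gc() (the expiry time and key are illustrative):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.TimeUnit;

class CacheCleanUpSketch {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, Object> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(1, TimeUnit.MILLISECONDS)
                .build();
        cache.put("field", new Object());
        Thread.sleep(5);

        // Guava performs cache maintenance lazily, piggybacking on reads
        // and writes, so the expired entry may linger until then.
        // cleanUp() forces the pending maintenance to run immediately,
        // which is what the periodic process (and now InternalTestCluster)
        // relies on.
        cache.cleanUp();
    }
}
```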
Today, shard stats are blocked while phase 3 of recovery (translog replay)
is running; this change removes the engine readLock from shard stats
so they are no longer blocked.
Closes #8910
We have lots of boilerplate code that unnecessarily abstracts
services, e.g. InternalIndexShard vs. IndexShard or InternalIndexService and
IndexService. It's enough to have concrete classes for these core components.
Closes #8904
If you use the java-package tool to create Java packages, those
paths should also be added to the Debian init script.
Also updated the docs to note that it is OK to install Java 8.
Closes #7383
This change adds a 'http.publish_port' setting to the HTTP module to configure
the port which HTTP clients should use when communicating with the node. This
is useful when running on a bridged network interface or when running behind
a proxy or firewall.
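A minimal sketch of how the new setting might be applied through the 1.x Java node API (the port values are illustrative; elasticsearch.yml works the same way):

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

class PublishPortSketch {
    public static void main(String[] args) {
        // Bind HTTP on 9200 locally, but advertise 8080 to clients,
        // e.g. the port mapped by a bridge or forwarded by a proxy.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("http.port", 9200)
                .put("http.publish_port", 8080)
                .build();
        Node node = NodeBuilder.nodeBuilder().settings(settings).node();
        node.close();
    }
}
```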
Closes #8807
Closes #8137
We currently have operationThreaded set to false when indexing a script. This setting
means that if we are executing the operation locally we don't spawn a new thread for
it, although the incoming thread in this case is the network thread. Now, since we are
indexing here while the engine is recovering, and recovery finalization sometimes locks
the engine while blocking on a network call waiting for the recovery target to come back,
the internal lock in the engine will never be released: we are waiting for that lock with
our network thread.
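A sketch of the flag in question, assuming the 1.x-era request API (the index, type, and source are illustrative of indexed scripts, not the exact call site):

```java
import org.elasticsearch.action.index.IndexRequest;

class OperationThreadedSketch {
    static IndexRequest scriptIndexRequest() {
        // operationThreaded(true) forces the local execution onto a
        // separate thread, so the network thread is never the one
        // parked on the engine lock.
        return new IndexRequest(".scripts", "groovy", "my_script")
                .source("{\"script\":\"1 + 1\"}")
                .operationThreaded(true);
    }
}
```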
Upgrades Lucene to the latest version, and adds support for the BEST_COMPRESSION
parameter now available (with backwards compatibility, etc.) in Lucene.
This option uses deflate, tuned for highly compressible data.
index.codec::
The default value compresses stored data with LZ4 compression, but
this can be set to best_compression for a higher compression ratio,
at the expense of slower stored fields performance.
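As a sketch, opting an index into the new codec through the 1.x Java API (the index name and client wiring are illustrative):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

class BestCompressionSketch {
    static void createCompressedIndex(Client client) {
        // Trade slower stored-fields access for a higher compression
        // ratio by selecting the deflate-backed codec.
        client.admin().indices().prepareCreate("logs")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.codec", "best_compression")
                        .build())
                .execute().actionGet();
    }
}
```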
IMO it's safest to implement this as a named codec here, because ES already
has logic to handle this correctly, and because it's unrealistic to expose
a plethora of options to Lucene's default codec... we are practically
limited in Lucene to what we can support with back compat, so I don't
think we should overengineer this and add additional unnecessary plumbing.
See also:
https://issues.apache.org/jira/browse/LUCENE-5914
https://issues.apache.org/jira/browse/LUCENE-6089
https://issues.apache.org/jira/browse/LUCENE-6090
https://issues.apache.org/jira/browse/LUCENE-6100
Closes #8863
This fix adds better error handling for parsing multipoint, linestring, and polygon GeoJSONs. The current logic throws an NPE when parsing a multipoint, linestring, or polygon that does not comply with the GeoJSON specification. That is, if a user provides a single coordinate instead of an array of coordinates, or an array of linestrings, the ShapeParser throws an NPE wrapped in a SearchParseException instead of a more useful error message.
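For illustration, a compliant multipoint next to the malformed variant that used to trigger the NPE (the coordinate values are made up):

```java
class GeoJsonExamples {
    // Per the GeoJSON spec, "multipoint" takes an array of coordinate
    // arrays; the malformed variant below previously produced an NPE
    // wrapped in a SearchParseException rather than a clear parse error.
    static final String VALID =
            "{\"type\":\"multipoint\",\"coordinates\":[[100.0,0.0],[101.0,1.0]]}";
    static final String MALFORMED =
            "{\"type\":\"multipoint\",\"coordinates\":[100.0,0.0]}";
}
```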
Closes #8432
After the refactoring in #8784 some settings didn't get passed to the
actual engine, and there is a race if the settings are updated while
the engine is starting, such that the starting engine doesn't see
the latest settings. This commit fixes the concurrency issue and
adds tests to ensure the settings are reflected.
After upgrading, a shard might start relocating again. If there are no
replicas, the cluster state of a node might not be up to date for
a few milliseconds and may direct a search request to a node that no longer
has the shard. This results in the following test failures:
1> java.lang.AssertionError: Count is 99 but 101 was expected. Total shards: 13 Successful shards: 12 & 0 shard failures:
1> __randomizedtesting.SeedInfo.seed([1932F73B458703CA:6F4FAD3DAC55591C]:0)
1> [...org.junit.*]
1> org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount(ElasticsearchAssertions.java:184)
1> org.elasticsearch.bwcompat.BasicBackwardsCompatibilityTest.testIndexRollingUpgrade(BasicBackwardsCompatibilityTest.java:358)
Waiting for relocations to finish should fix this.
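A sketch of the fix in the test, assuming the waitForRelocation() helper from the 1.x ElasticsearchIntegrationTest base class (the index name is illustrative):

```java
import org.elasticsearch.action.search.SearchResponse;

import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;

// Inside the BWC test: block until no relocations are in flight before
// asserting on hit counts, so the search cannot hit a node that has
// already handed its shard off.
public void testIndexRollingUpgrade() throws Exception {
    waitForRelocation();
    SearchResponse resp = client().prepareSearch("test").execute().actionGet();
    assertHitCount(resp, 101); // the count expected by the failure above
}
```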
Today we have several upgrade methods that can read state written
pre 0.90 or even pre 0.19. Version 2.0 should not support these state
formats. Users of these versions should upgrade to a 1.x or 0.90.x version
first.
Closes #8850