If FieldNameAnalyzer does not find an explicit analyzer for a given
field name, it returns the default analyzer. This behaviour can hide bugs
where an analyzer fails to be propagated to FieldNameAnalyzer, or where an
analyzer is requested for a field which is not mapped.
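A minimal sketch of the stricter behaviour, using hypothetical names rather than the actual FieldNameAnalyzer code: fail fast when no analyzer is mapped instead of silently falling back to the default.
```
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;

final class StrictFieldNameAnalyzer {
    private final Map<String, Analyzer> analyzers; // field name -> analyzer

    StrictFieldNameAnalyzer(Map<String, Analyzer> analyzers) {
        this.analyzers = analyzers;
    }

    Analyzer analyzerFor(String fieldName) {
        Analyzer analyzer = analyzers.get(fieldName);
        if (analyzer == null) {
            // Surface the problem instead of hiding it behind the default analyzer.
            throw new IllegalArgumentException("no analyzer mapped for field [" + fieldName + "]");
        }
        return analyzer;
    }
}
```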
We are using a VLong to serialize the PercolateResponse#tookInMillis. Due to
several `System.currentTimeMillis()` implementation details, this value can be negative.
We should prevent the negative value from being serialized as a VLong and make sure
we use a valid value for this in the first place.
Closes#11138
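A minimal sketch of the defensive side of this, with hypothetical names: clamp the delta so a negative `System.currentTimeMillis()` difference can never reach a VLong write, which cannot encode negative numbers.
```
public class TookTime {
    static long tookInMillis(long startMillis, long endMillis) {
        // System.currentTimeMillis() can jump backwards (NTP adjustments etc.),
        // so the delta may be negative; clamp to zero before serializing as a VLong.
        return Math.max(0L, endMillis - startMillis);
    }

    public static void main(String[] args) {
        System.out.println(tookInMillis(1000L, 900L)); // prints 0, not -100
    }
}
```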
A few meta fields can currently be set within a document's source.
However, the recommended way to set meta fields like this is through
the API, and setting them within the document can be a performance trap
(e.g. needing to find _id in order to route the document).
This change removes the ability to set meta fields within
a document source for 2.0+ indexes.
closes#11051
closes#11074
Several plugins (e.g. elasticsearch-cloud-aws, elasticsearch-cloud-azure, elasticsearch-cloud-gce)
have integration tests that run with actual credentials to a remote service, so test runs
need access to this file.
These all require the tester (or Jenkins) to supply the file with -Dtests.config.
In `NodeEnvironment.deleteShardDirectoryUnderLock`, we will now attempt
to acquire, then release, the `write.lock` file for the Lucene index in
question to ensure that no other `IndexWriter` has the directory open
before deleting the data.
Note that the `write.lock` file must be released before the actual
deletion in order to allow the directory to be deleted.
Fixes#11097
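The idea, sketched with Lucene's directory locking API (`obtainLock` is the later Lucene 5.x spelling; the exact calls in this commit may differ):
```
import java.io.IOException;
import java.nio.file.Path;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Lock;

public class EnsureNoIndexWriter {
    static void checkNoWriter(Path shardIndexPath) throws IOException {
        try (Directory dir = FSDirectory.open(shardIndexPath);
             Lock lock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
            // Lock acquired: no other IndexWriter has this directory open.
        } // write.lock released here, so the directory itself can now be deleted
    }
}
```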
Today we enforce blocking, which doesn't really fit the Elasticsearch model.
This commit adds async execution to the synced flush service by passing an
ActionListener to the service, which returns immediately.
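Shape of the change, as a rough sketch with hypothetical service and result types (the real SyncedFlushService signatures will differ): the caller hands in an ActionListener and the method returns immediately.
```
import org.elasticsearch.action.ActionListener;

class SyncedFlushSketch {
    interface Result {}

    void attemptSyncedFlush(String shardId, ActionListener<Result> listener) {
        // Run asynchronously and return right away; the listener is notified
        // when the flush completes or fails.
        new Thread(() -> {
            try {
                listener.onResponse(flush(shardId));
            } catch (Throwable t) {
                listener.onFailure(t);
            }
        }).start();
    }

    private Result flush(String shardId) {
        return new Result() {};
    }
}
```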
If the user has set a shard_min_doc_count setting then avoid looking up background frequencies if the term fails to meet the foreground threshold on a shard.
Closes#11093
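Illustrative sketch of the short-circuit with made-up names: if a term's shard-local document count cannot meet `shard_min_doc_count`, the (potentially expensive) background frequency lookup is pointless and can be skipped.
```
public class SignificanceShortCircuit {
    // Returns -1 when the term cannot qualify on this shard, skipping the lookup entirely.
    static long backgroundFrequencyIfNeeded(long subsetDf, long shardMinDocCount) {
        if (subsetDf < shardMinDocCount) {
            return -1L; // below the foreground threshold: never a candidate on this shard
        }
        return lookupBackgroundFrequency();
    }

    static long lookupBackgroundFrequency() {
        return 42L; // stand-in for the real, costly background frequency lookup
    }
}
```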
* Properly support symlinks (e.g. /tmp -> /mnt/tmp)
* Check all configured paths up front and deliver the best exception we can when things are wrong
* Initialize SecurityManager earlier
* Fix too-loud error logging of Natives root check
As a follow-up to #10870, this removes support for
index templates on disk. It also removes an overlooked
spot that still allowed disk-based mappings.
closes#11052
When the longitude is zero for a document, the left and right bounds do not get updated in the geo bounds aggregation, which can cause the bounds to be returned with infinite values for longitude.
Closes#11085
There are currently small differences between the search API and the count, exists, validate query and explain APIs when it comes to reading query_string parameters: `analyze_wildcard`, `lowercase_expanded_terms` and `lenient` are only read by the search API and ignored by all the other mentioned APIs. Unified the code to fix this and make sure it doesn't happen again. Also shared some code for printing out the query as part of the SearchSourceBuilder conversion to ToXContent.
Extended REST spec to include all the supported params (some that were already supported weren't listed), and added REST tests (also some basic tests for count and search_exists which weren't tested at all).
Closes#11057
BytesQueryBuilder was introduced to be used internally by the phrase suggester and its collate feature. It ended up being exposed via the Java API, but the existing WrapperQueryBuilder can be used instead. Added a WrapperQueryBuilder constructor that accepts a BytesReference as argument.
One other reason why this query builder should be removed is that it gets in the way of the query parsers refactoring, given that it's the only query builder that allows building a query through the Java API without having a corresponding query parser.
Closes#10919
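Example of the replacement path; the `BytesReference` constructor is the one this change adds:
```
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.index.query.WrapperQueryBuilder;

public class WrapperExample {
    static WrapperQueryBuilder fromBytes() {
        BytesReference source = new BytesArray("{\"term\": {\"user\": \"kimchy\"}}");
        // Instead of the removed BytesQueryBuilder:
        return new WrapperQueryBuilder(source);
    }
}
```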
Different responses hold the shards header: search, count, flush, etc. The code was duplicated in two different places; it is now centralized in RestActions.
It turns out that only the search response printed out the status field before the reason; this was added to all the other broadcast responses too.
Closes#11064
Dynamic settings have to be injected into constructors with either the @ClusterDynamicSettings or @IndexDynamicSettings annotation. If no annotation is specified, an empty instance of DynamicSettings is injected, which can lead to hard-to-discover errors such as #10614. This commit makes any attempt to inject unannotated dynamic settings generate a Guice error.
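What correct injection looks like (annotation and class names as in the codebase of this era; the component itself is made up):
```
import org.elasticsearch.cluster.settings.ClusterDynamicSettings;
import org.elasticsearch.cluster.settings.DynamicSettings;
import org.elasticsearch.common.inject.Inject;

public class MyComponent {
    private final DynamicSettings dynamicSettings;

    @Inject
    public MyComponent(@ClusterDynamicSettings DynamicSettings dynamicSettings) {
        // Without @ClusterDynamicSettings (or @IndexDynamicSettings) this
        // injection now fails with a Guice error instead of silently
        // receiving an empty DynamicSettings instance.
        this.dynamicSettings = dynamicSettings;
    }
}
```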
Our ThreadPool constructor creates a couple of threads (scheduler and timer) which might not get shut down if the initialization of a node fails. A Guice error might occur, for example, which causes the InternalNode constructor to throw an exception. In this case the two threads are left behind, which is not a big problem when running ES standalone, as the error will be intercepted and the JVM will be stopped as a whole. It can become more of a problem when running ES in embedded mode, as we'll end up with lingering threads, or when testing the handling of initialization failures.
Closes#9107
This commit makes queries and filters parsed the same way using the
QueryParser abstraction. This allowed us to remove duplicate code that we had
for similar queries/filters such as `range`, `prefix` or `term`.
The mapper listener concept is now only used as a callback to the
MapperService when new fields are added. This change removes the
listeners, instead storing a link to the mapper service in
each doc mapper.
This commit makes create, update and delete operations on an index durable
by default. The user has the option to opt out and use async translog flushes
on a per-index basis by setting `index.translog.durability=async`.
Initial benchmarks running on SSDs have shown that bulk indexing is about 7% - 10% slower
compared to async translog flushes. This change is orthogonal to
the transaction log sync interval and will only sync the transaction log if the operation
has not yet been concurrently synced. I.e., if multiple indexing requests are submitted and
one operation's sync call already persists the operations of the others, only one sync call is executed.
Relates to #10933
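A simplified sketch of the "sync only if not already synced" idea, with hypothetical names: every operation has a monotonically increasing translog location, and an fsync that persists location N also persists every operation at or below N.
```
public final class SyncOnRequest {
    private volatile long lastSyncedLocation = -1;
    private long writeLocation = 0; // advanced by writers in the real code

    void ensureSynced(long opLocation) {
        if (opLocation <= lastSyncedLocation) {
            return; // a concurrent request already fsynced past this operation
        }
        synchronized (this) {
            if (opLocation <= lastSyncedLocation) {
                return; // double-checked: someone synced while we waited for the lock
            }
            // One fsync covers all operations written so far, including ours.
            lastSyncedLocation = fsyncTranslogAndReturnHighestPersistedLocation();
        }
    }

    private long fsyncTranslogAndReturnHighestPersistedLocation() {
        // stand-in: fsync the translog file, then report how far it is persisted
        return writeLocation;
    }
}
```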
The mapper listener abstractions for object and field mappers are used
to notify the mapper service of new fields, as well as collect
all object and field mappers through a set of traversal functions.
This change removes the traversal functions in favor of simple
iteration over subfields of a mapper.
This is pretty much a workaround for the fact that we simply
close the downstream resources once the shard is closed. This means
the document parser will barf with an NPE or something similar where an
AlreadyClosedException would be appropriate.
Removes the More Like This API; users should now use the More Like This query.
The MLT API tests were converted to their query equivalent. Also some clean
ups in MLT tests.
Closes#10736
Closes#11003
In some cases it might happen that a mapping which is already available on the
master node is not available yet on the node that holds the primary shard.
This commit changes indexing on the primary shard so that if a dynamic update
is triggered then the index operation is re-tried until required mappings are
available locally (using cluster state observing).
Previously, we were using the statistical, technically accurate names. Instead, we
should probably use the names that people are familiar with, e.g. "Holt Winters" instead
of "triple exponential". To that end:
- `single_exp` becomes `ewma` (exponentially weighted moving average)
- `double_exp` becomes `holt`
When the `triple_exp` is added, it will be called `holt_winters`.
Current features (eg. update API) and future features (eg. reindex API)
depend on _source. This change locks down the field so that
it can no longer be disabled. It also removes legacy settings
compress/compress_threshold.
closes#8142
closes#10915
If we have to do the one-time loading of fielddata, it requires
more permissions than Groovy scripts currently have (zero). This
is because of RamUsageEstimator reflection and so on in PagedBytes.
GroovySecurityTests only test a numeric field, so add a string field
to the test (so pagedbytes fielddata gets created etc).
LUCENE-6422 - PackedQuadTree enhancement - was committed in Lucene 5.2, which is now integrated with ES 2.0. This eliminates the need to carry our own local lucene.spatial package. This commit removes the now unnecessary files.
When setting sync to 0, we benefit from using the buffered type; there is no need to change to simple, since we get a chance to fsync multiple operations with a single sync call and not have to sync for each of the other ones before returning.
This commit adds the path of the PID file to the Environment. It also adds it to the Security Manager since the PID file is deleted by a shutdown hook when the JVM exits.
Double.MIN_VALUE does not follow the same semantics as Integer.MIN_VALUE. Namely, it
represents the smallest positive, non-zero value a double can hold. Since the test uses negative
doubles, this can incorrectly find the min/max metric for a set of values.
Instead, Double.NEGATIVE_INFINITY needs to be used, which represents the smallest value possible.
Not strictly necessary, but MAX_VALUE was switched to POSITIVE_INFINITY just to be 100% correct
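A three-line demonstration of the difference:
```
public class MinValueDemo {
    public static void main(String[] args) {
        System.out.println(Double.MIN_VALUE);                // 4.9E-324: the smallest POSITIVE double
        System.out.println(-1.0 > Double.MIN_VALUE);         // false: a negative value never beats this "minimum"
        System.out.println(-1.0 > Double.NEGATIVE_INFINITY); // true: the correct seed when searching for a max
    }
}
```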
We already fail the shard in the `onFailure` method if the replica
operation barfs. This additional check, which was added lately,
bypasses the cluster state observer and causes replicas to fail
if the mappings are not yet present.
When a node was a data-only node, the index state was not written.
If this node then connected to a master that did not have the index
in the cluster state, for example because the master was restarted and
the data folder was lost, the indices were not imported as dangling
but were instead deleted.
This commit makes sure that index state for data nodes is also written
if they have at least one shard of this index allocated.
closes#8823
closes#9952
This commit moves the translog creation into the InternalEngine
to ensure the transaction log is created after we have acquired the write
lock on the index. This also prevents races when ShadowEngines are shutting
down due to node restarts where another node already takes over the not yet
fully synced transaction log.
If the collect method was called with a bucketOrd greater than 0, the arrays holding the state for the aggregation would be grown, but the initial values for bucketOrds > 0 were all set to Double.NEGATIVE_INFINITY, meaning that for the bottom, posLeft and negLeft values no collected document would change the value, since NEGATIVE_INFINITY is always less than every other value.
Closes#10804
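A sketch of the corrected initialization, with hypothetical array names: each bound must be seeded with the identity value for its own comparison. Bounds that take a minimum (bottom, posLeft, negLeft) need POSITIVE_INFINITY; bounds that take a maximum need NEGATIVE_INFINITY.
```
public class GeoBoundsInit {
    static void initializeBucket(double[] top, double[] bottom,
                                 double[] posLeft, double[] posRight,
                                 double[] negLeft, double[] negRight, int bucketOrd) {
        top[bucketOrd] = Double.NEGATIVE_INFINITY;      // max over latitudes
        bottom[bucketOrd] = Double.POSITIVE_INFINITY;   // min over latitudes
        posLeft[bucketOrd] = Double.POSITIVE_INFINITY;  // min over positive longitudes
        posRight[bucketOrd] = Double.NEGATIVE_INFINITY; // max over positive longitudes
        negLeft[bucketOrd] = Double.POSITIVE_INFINITY;  // min over negative longitudes
        negRight[bucketOrd] = Double.NEGATIVE_INFINITY; // max over negative longitudes
    }
}
```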
This change removes the multiple implementations of different admin interfaces and centralizes them in AbstractClient. It also makes sure *all* executions of actions now go through a single AbstractClient#execute method, taking care of copying headers and wrapping the listener.
This also has the side benefit of removing all the code around different possible clients, and removes quite a bit of code (most of the + code is actually removal of generics and such).
This change also changes how TransportClient is constructed, requiring a Builder to create it; it's a breaking change and it's noted in the migration guide.
Yet another step towards simplifying the action infra and making it simpler...
Currently, when all copies of a shard are lost, we reach out to all
other nodes to see whether they have a copy of the data. For a shared
filesystem, though, we can assume that each node has a copy of the data
available, so return a state version of at least 0 for each node.
This feature is set using the dynamic index setting
`index.shared_filesystem.recover_on_any_node`, which defaults to
`false`.
Fixes#10932
It appears the previous failure (-Dtests.seed=D9EF60095522804F) is just an accumulation of
floating point error differences between expected and actual results. This makes the tests less
stringent by requiring closeTo(0.1) instead of the previous 0.00001.
This commit adds a counter for IndexShard that keeps track of how many write operations
are currently in flight on a shard. The counter is incremented whenever a write request is
submitted in TransportShardReplicationOperationAction and decremented when it is finished.
On a primary it stays incremented while replicas are being processed.
The counter is an instance of AbstractRefCounted. Once this counter reaches 0,
each write operation will be rejected with an IndexClosedException.
closes#10610
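Condensed sketch of the pattern (AbstractRefCounted is the class named above; the wiring around it is hypothetical): writes incRef on start and decRef when finished, and once the shard is closed tryIncRef returns false, which is where new writes get rejected.
```
import org.elasticsearch.common.util.concurrent.AbstractRefCounted;

class ShardOperationsCounter extends AbstractRefCounted {
    ShardOperationsCounter() {
        super("index-shard-operations-counter");
    }

    @Override
    protected void closeInternal() {
        // counter reached 0: the shard is closed and no writes are in flight
    }
}
```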
Today we wait 30 sec for shards to flush and close and then simply exit the process.
This is often not desired and we should by default wait long enough for shards to
close etc. This commit adds a default timeout of one day which simplifies the code
and gives us _enough_ time to shut down.
Closes#10680
If the initial recovery is skipped all uncommitted changes are lost. This
must be enforced otherwise primary and replica will go out of sync once the
primary is started after restore and a replica recovers from it applying the
still referenced transaction logs.
Instead of failing the Engine for a shared filesystem, this change
allows a "soft close" of the Engine, where only the IndexWriter is
closed so that the replica can open an IndexWriter using the same
filesystem directory/mount.
Fixes#10469
This refcounting doesn't work for shadow replicas since we open
the same translog file from more than one node while running a rolling
restart. This functionality is also superseded by our filesystem abstraction
which detects file leaks under the hood.
Today, we rely on the user to set request listener threads to true when they are on the client side in order not to block the IO threads on heavy operations. This proves to be very trappy for users, and ends up creating problems that are very hard to debug.
Instead, we can do the right thing, and automatically thread listeners that are used from the client when the client is a node client or a transport client.
This change also removes the ability to set request level listener threading, in the effort of simplifying the code path and reasoning around when something is threaded and when it is not.
closes#10940
This removes Elasticsearch's filter cache and uses Lucene's instead. It has some
implications:
- custom cache keys (`_cache_key`) are unsupported
- decisions are made internally and can't be overridden by users (`_cache`)
- not only filters can be cached but also all queries that do not need scores
- parent/child queries can now be cached, however cached entries are only
valid for the current top-level reader so in practice it will likely only
be used on read-only indices
- the cache deduplicates filters, which plays nicer with large keys (eg. `terms`)
- better stats: we already had ram usage and evictions, but now also hit count,
miss count, lookup count, number of cached doc id sets and current number of
doc id sets in the cache
- dynamically changing the filter cache size is not supported anymore
Internally, an important change is that it removes the NoCacheFilter infrastructure
in favour of making Query.rewrite specialize the query for the current reader so
that it will only be cached on this reader (look for IndexCacheableQuery).
Note that consuming filters with the query API (createWeight/scorer) instead of
the filter API (getDocIdSet) is important for parent/child queries because
otherwise a QueryWrapperFilter(ParentQuery) would run the wrapped query per
segment while relations might be cross segments.
Expose new span queries from https://issues.apache.org/jira/browse/LUCENE-6083
Within returns matches from 'little' that are enclosed inside of a match from 'big'.
Containing returns matches from 'big' that enclose matches from 'little'.
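A minimal construction example of the two new Lucene queries (constructor argument order big, little, as in Lucene 5.1):
```
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanContainingQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.search.spans.SpanWithinQuery;

public class SpanExamples {
    public static void main(String[] args) {
        SpanTermQuery big = new SpanTermQuery(new Term("body", "hostname"));
        SpanTermQuery little = new SpanTermQuery(new Term("body", "host"));
        // matches from 'little' that are enclosed inside a match from 'big'
        SpanWithinQuery within = new SpanWithinQuery(big, little);
        // matches from 'big' that enclose matches from 'little'
        SpanContainingQuery containing = new SpanContainingQuery(big, little);
        System.out.println(within + " / " + containing);
    }
}
```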
Added infrastructure to allow basic member methods in the expressions
language to be called. The methods must have a signature with no arguments. Also
added the following member methods for date fields (and it should be easy to add more):
* getYear
* getMonth
* getDayOfMonth
* getHourOfDay
* getMinutes
* getSeconds
Allow fields to be accessed without using the member variable [value].
(Note that both ways can be used to access fields for back-compat.)
closes#10890
Using files that must be specified on each node is an anti-pattern
from the API based goal of ES. This change removes the ability
to specify the default mapping with a file on each node.
closes#10620
In order to safely complete recoveries/relocations we have to keep all operations done since the recovery started available for replay. At the moment we do so by preventing the engine from flushing, thus making sure that the operations are kept in the translog. A side effect of this is that the translog keeps on growing until the recovery is done. This is not a problem, as we do need these operations, but if another recovery starts concurrently it may have a needlessly long translog to replay. Also, if we shut down the engine for some reason at this point (like when a node is restarted) we have to recover a long translog when we come back.
To avoid this, the translog is changed to be based on multiple files instead of a single one. This allows recoveries to keep hold of the files they need while allowing the engine to flush and do a Lucene commit (which will create a new translog file under the hood).
Change highlights:
- Refactor Translog file management to allow for multiple files.
- Translog maintains a list of referenced files, both by outstanding recoveries and files containing operations not yet committed to Lucene.
- A new Translog.View concept is introduced, allowing recoveries to get a reference to all currently uncommitted translog files plus all future translog files created until the view is closed. They can use this view to iterate over operations.
- Recovery phase3 is removed. That phase was replaying operations while preventing new writes to the engine. This is unneeded, as standard indexing also sends all operations from the start of the recovery to the recovering shard. Replaying all ops in the view acquired at recovery start is enough to guarantee that no operation is lost.
- IndexShard now creates the translog together with the engine. The translog is closed by the engine on close. ShadowIndexShards do not open the translog.
- Moved the ownership of translog fsyncing to the translog itself, changing the responsible setting to `index.translog.sync_interval` (was `index.gateway.local.sync`)
Closes#10624
Only parent filters should use bitset filter cache, to avoid memory being wasted.
Also, in the case of object fields, inline the field name into the nested object
instead of creating an additional (dummy) nested identity.
Closes#10662
Closes#10629
The assumption is that gaps in histogram are generally undesirable, for instance
if you want to build a visualization from it. Additionally, we are building new
aggregations that require that there are no gaps to work correctly (eg.
derivatives).
When doc values were turned on by default, most meta fields
had it explicitly disabled. However, _field_names was missed.
This change forces doc values to be off always for _field_names
and removes the unnecessary support when creating index fields.
closes#10892
Adds support for calculating and sending diffs of the most frequently changing elements - cluster state, meta data and routing table - instead of the full cluster state.
Closes#6295
If you define exactly the same date range query using either `DATE+0200` notation or `DATE` and set `time_zone: +0200`, Elasticsearch gives back different results:
```
DELETE foo

PUT /foo
{
  "mapping": {
    "tweets": {
      "properties": {
        "tweet_date": {
          "type": "date"
        }
      }
    }
  }
}

POST /foo/tweets/1/
{
  "tweet_date": "2015-04-05T23:00:00+0000"
}

POST /foo/tweets/2/
{
  "tweet_date": "2015-04-06T00:00:00+0000"
}

GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00+0200 TO 2015-04-06T23:00:00+0200]"
    }
  }
}

GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00 TO 2015-04-06T23:00:00]",
      "time_zone": "+0200"
    }
  }
}
```
This PR fixes it and will also allow us to add the same feature to simple_query_string in another PR.
Closes#10477.
(cherry picked from commit 880f4a0)
`IndicesStore#indexCleanup` uses a disruption scheme to delay cluster state
processing. Yet, the delay is [1..2] seconds, but tests are setting the shard
deletion timeout to 1 second to speed up tests. This can cause random,
non-reproducible failures in this test since the timeouts and delays are basically
overlapping. This commit adds a longer timeout for this test to prevent these
problems.
Extend SearchParseException and QueryParsingException to report position information in query JSON where errors were found. All query DSL parser classes that throw these exception types now pass the underlying position information (line and column number) at the point the error was found.
Closes#3303
The source parameter is implicitly supported and doesn't need to be declared in the REST spec. It is tested though, as every API that supports GET with a body can also accept requests using POST with a body, or GET with the source query_string parameter.
This commit removes the obsolete settings for distributors and updates
the documentation on multiple data.path. It also adds an explanation to the
migration guide.
Relates to #9498
Closes#10770
This commit removes unused throws statements where RuntimeExceptions are
mentioned in the throws statement. It also removes obsolete import statements
for java.lang.IllegalArgumentException and java.lang.IllegalStateException
Remove the ability to specify search type `query_and_fetch` and
`dfs_query_and_fetch` from the REST API.
- Adds REST tests
- Updates REST API spec to remove `query_and_fetch` and
`dfs_query_and_fetch` as options
- Removes documentation for these options
Closes#9606
Regardless of the outcome of #8142, we should at least enforce that
when _source is enabled, it is sufficient to reindex. This change
removes the excludes and includes settings, since these modify
the source, causing us to lose the ability to reindex some fields.
closes#10814
The code to parse a document was spread across 3 different classes,
and depended on traversing the ObjectMapper hierarchy. This change
consolidates all the doc parsing code into a new DocumentParser.
This should allow adding unit tests (future issue) for document
parsing so the logic can be simplified. All code was copied
directly for this change with only minor modifications to make
it work within the new location.
closes#10802
Multi fields currently parse any field type passed in. However, they
were only intended to support copying simple values from the outer
field. This change adds validation to ensure object and nested
fields are not used within multi fields.
closes#10745
Now that delete by query is out, we don't need this infrastructure code. Delete by query will be implemented as a plugin, with scan/scroll + bulk delete, so it will not need this infra anyhow.
The current implementation is dangerous: it unexpectedly refreshes,
which can quickly cause an unhealthy index (segment explosion). It
can also delete different documents on primary vs replicas, causing
inconsistent replicas.
For 2.0 we will replace this with an optional plugin that does a
scan/scroll search and then issues bulk delete requests.
Closes#10859
During pinging we open light, temporary connections to the unicast hosts. After the pinging is done we close those. At the moment we do so before returning the results of the pings to the caller. On the other hand, in our transport logic we acquire a lock specific to the node id while opening a connection. When disconnecting from a node, we have to acquire the same lock in order to guarantee that the connection opening has finished. This can cause big delays in environments where opening a connection is very slow, as the connection closing has to wait *after* the pinging was done. This can be problematic as it causes master election to use stale data.
Closes#10849
The original goal of this cache was to avoid parsing the same query several
times in case several shards are held on the same node. While this might
sound like a good idea, this would only help when parsing the query takes
non-negligible time compared to actually running the query, which should not
be the case.
If we don't have an ElasticsearchException as the wrapper of the
actual cause, we don't render a root cause today. This commit adds
support for 3rd party exceptions as root causes.
Closes#10836
We deployed our own code to check if directories are closed etc. and
if searchers are still open. Yet, since we don't have a global cluster
anymore, we can just use Lucene's internal mechanism to do that. This commit
removes all special handling and uses LuceneTestCase.closeAfterSuite to
fail if certain resources are not closed.
Closes#10853
field_value_factor now takes a default that is used if the document doesn't
have a value for that field. It looks like:
"field_value_factor": {
"field": "popularity",
"missing": 1
}
Closes#10841
This commit adds support for running with only one node and sets the
maximum number of nodes to 3 by default. If run with tests.nightly=true,
at most 6 nodes are used. This gave a 20% speed improvement compared to
the previous minimum number of nodes of 3.
Groovy sandboxing was disabled by default from 1.4.3 onwards, since we found out that it could be worked around, so it makes little sense to keep it and maintain it.
Closes#10156
Closes#10480
Best effort to print out the search source depending on how it was set on the SearchRequestBuilder; don't call `internalBuilder()` as that causes the content of the request to be wiped.
Closes#5576
This pull request replaces the current self-made implementation of JSON encoding special chars with reusing the Jackson JsonStringEncoder. Turns out the previous implementation also missed a few special chars, so the tests had to be adjusted accordingly (looked at RFC 4627 for reference).
Note: There's another JSON String encoder on our classpath (org.apache.commons.lang3.StringEscapeUtils) that essentially does the same thing but adds quoting to more characters than the Jackson Encoder above.
Relates to #5473
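What reusing the Jackson encoder looks like; `quoteAsString` escapes special characters without adding surrounding quotes:
```
import com.fasterxml.jackson.core.io.JsonStringEncoder;

public class EscapeExample {
    public static void main(String[] args) {
        char[] escaped = JsonStringEncoder.getInstance().quoteAsString("tab\there \"quoted\"");
        System.out.println(new String(escaped)); // tab\there \"quoted\"
    }
}
```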
The _field_names field was fixed in 1.5.1 (#10268) to correctly be
disabled for indexes before 1.3.0. However, only the exists filter
was updated to check this enabled flag on 1.x/1.5. The missing
filter on those branches still checks the field type to see if it
is indexed, which causes the filter to always try and use
the _field_names field for those old indexes.
This change adds a test to the old index tests for missing filter.
closes#10842
1. initialize SM after things like mlockall. Their tests currently
don't run with SecurityManager enabled, and it's simpler to just
run mlockall etc first.
2. remove redundant test permissions (junit4.childvm.cwd/temp). This
is already added as java.io.tmpdir.
3. improve tests to load the generated policy with some various
settings and assert things about the permissions on configured
directories.
4. refactor logic to make it easier to fine-grain the permissions later.
for example we currently allow write access to conf/. In the future
I think we can improve testing so we are able to make improvements here.
Since the circuit breaking service doesn't actually change for
BigArrays, we can eagerly create a new instance only once and use that
for all further invocations of `withCircuitBreaking`.
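A sketch of the memoization, as a made-up shape rather than the actual BigArrays code: since the breaker service never changes, the circuit-breaking variant can be created once and returned on every call.
```
public class BigArraysLike {
    private final BigArraysLike circuitBreaking;

    public BigArraysLike() {
        // eagerly create the circuit-breaking twin exactly once
        this.circuitBreaking = new BigArraysLike(true);
    }

    private BigArraysLike(boolean breaking) {
        this.circuitBreaking = this; // the breaking instance returns itself
    }

    public BigArraysLike withCircuitBreaking() {
        return circuitBreaking; // no new allocation per invocation
    }
}
```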