Expose new span queries from https://issues.apache.org/jira/browse/LUCENE-6083
Within returns matches from 'little' that are enclosed inside a match from 'big'.
Containing returns matches from 'big' that enclose matches from 'little'.
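For reference, a hedged sketch of how these queries might be used from the query DSL, assuming they are exposed as `span_within` and `span_containing` with `big` and `little` clauses; the index and field names below are made up:
```
# Hypothetical request: match "quick" only where it is enclosed by a
# span_near of "quick" and "fox" (span_within takes little/big clauses).
GET /foo/_search
{
  "query": {
    "span_within": {
      "little": { "span_term": { "body": "quick" } },
      "big": {
        "span_near": {
          "clauses": [
            { "span_term": { "body": "quick" } },
            { "span_term": { "body": "fox" } }
          ],
          "slop": 5,
          "in_order": false
        }
      }
    }
  }
}
```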
Added infrastructure to allow basic member methods in the expressions
language to be called. The methods must have a signature with no arguments. Also
added the following member methods for date fields (and it should be easy to add more):
* getYear
* getMonth
* getDayOfMonth
* getHourOfDay
* getMinutes
* getSeconds
Allow fields to be accessed without using the member variable [value].
(Note that both ways can be used to access fields for back-compat.)
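As a hedged illustration (assuming the new methods are callable on a date field as `doc['field'].getYear()` from an expression script; the index and field names are hypothetical), this could be used e.g. in a script field:
```
# Hypothetical script field using an expression: returns the year of a
# date field, accessed without the [value] member variable.
GET /foo/_search
{
  "script_fields": {
    "tweet_year": {
      "lang": "expression",
      "script": "doc['tweet_date'].getYear()"
    }
  }
}
```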
closes#10890
Only parent filters should use the bitset filter cache, to avoid wasting memory.
Also, in the case of object fields, inline the field name into the nested object
instead of creating an additional (dummy) nested identity.
Closes#10662, Closes#10629
The assumption is that gaps in histograms are generally undesirable, for instance
if you want to build a visualization from them. Additionally, we are building new
aggregations that require that there are no gaps in order to work correctly (e.g.
derivatives).
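As a hedged example (assuming the behaviour is governed by the histogram's `min_doc_count` setting, and using hypothetical index/field names), a date histogram now returns empty buckets by default:
```
# Hypothetical date_histogram: empty days now come back as zero-count
# buckets by default; "min_doc_count": 1 would restore the old gaps.
GET /foo/_search
{
  "size": 0,
  "aggs": {
    "tweets_per_day": {
      "date_histogram": {
        "field": "tweet_date",
        "interval": "day"
      }
    }
  }
}
```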
When doc values were turned on by default, most meta fields
had it explicitly disabled. However, _field_names was missed.
This change forces doc values to be off always for _field_names
and removes the unnecessary support when creating index fields.
closes#10892
Adds support for calculating and sending diffs of the most frequently changing elements (cluster state, meta data and routing table) instead of the full cluster state.
Closes#6295
If you define exactly the same date range query using either the `DATE+0200` notation or `DATE` with `time_zone: +0200` set, Elasticsearch gives back different results:
```
DELETE foo
PUT /foo
{
  "mappings": {
    "tweets": {
      "properties": {
        "tweet_date": {
          "type": "date"
        }
      }
    }
  }
}
POST /foo/tweets/1/
{
  "tweet_date": "2015-04-05T23:00:00+0000"
}
POST /foo/tweets/2/
{
  "tweet_date": "2015-04-06T00:00:00+0000"
}
GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00+0200 TO 2015-04-06T23:00:00+0200]"
    }
  }
}
GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00 TO 2015-04-06T23:00:00]",
      "time_zone": "+0200"
    }
  }
}
```
This PR fixes it and will also allow us to add the same feature to simple_query_string in a follow-up PR.
Closes#10477.
(cherry picked from commit 880f4a0)
`IndiceStore#indexCleanup` uses a disruption scheme to delay cluster state
processing. Yet, the delay is [1..2] seconds but tests are setting the shard
deletion timeout to 1 second to speed up tests. This can cause random, non-reproducible
failures in this test since the timeouts and delays are basically
overlapping. This commit adds a longer timeout for this test to prevent these
problems.
Extend SearchParseException and QueryParsingException to report position information in query JSON where errors were found. All query DSL parser classes that throw these exception types now pass the underlying position information (line and column number) at the point the error was found.
Closes#3303
The source parameter is implicitly supported and doesn't need to be declared in the REST spec. It is tested though, as every API that supports GET with a body can also accept requests using POST with a body or GET with the source query_string parameter.
When a node was a data-only node, the index state was not written.
If this node then connected to a master that did not have the index
in the cluster state, for example because the master was restarted and
the data folder was lost, the indices were not imported as dangling
but instead deleted.
This commit makes sure that the index state is also written for data nodes
if they have at least one shard of the index allocated.
closes#8823, closes#9952
This commit removes the obsolete settings for distributors and updates
the documentation on multiple data.path. It also adds an explanation to the
migration guide.
Relates to #9498. Closes#10770
This commit removes unused throws statements when RuntimeExceptions are
mentioned in the throws statement. It also removes obsolete import statements
for java.lang.IllegalArgumentException and java.lang.IllegalStateException.
Regardless of the outcome of #8142, we should at least enforce that
when _source is enabled, it is sufficient to reindex. This change
removes the excludes and includes settings, since these modify
the source, causing us to lose the ability to reindex some fields.
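A hedged sketch of the kind of mapping this removes (hypothetical index and field names): with `_source` excludes, the stored source no longer matches the indexed document, so reindexing from `_source` silently loses data:
```
# Hypothetical mapping using the removed _source excludes option:
# "internal_notes" is stripped from the stored _source, so it would be
# silently lost when reindexing from _source.
PUT /foo
{
  "mappings": {
    "tweets": {
      "_source": {
        "excludes": ["internal_notes"]
      },
      "properties": {
        "internal_notes": { "type": "string" }
      }
    }
  }
}
```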
closes#10814
The code to parse a document was spread across 3 different classes,
and depended on traversing the ObjectMapper hierarchy. This change
consolidates all the doc parsing code into a new DocumentParser.
This should allow adding unit tests (future issue) for document
parsing so the logic can be simplified. All code was copied
directly for this change with only minor modifications to make
it work within the new location.
closes#10802
Multi fields currently parse any field type passed in. However, they
were only intended to support copying simple values from the outer
field. This change adds validation to ensure object and nested
fields are not used within multi fields.
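A hedged sketch of a mapping the new validation should now reject (hypothetical names): an object type declared inside a multi field's `fields` section:
```
# Hypothetical mapping that the new validation should reject: an object
# type declared inside a multi field's "fields" section.
PUT /foo
{
  "mappings": {
    "tweets": {
      "properties": {
        "title": {
          "type": "string",
          "fields": {
            "meta": {
              "type": "object",
              "properties": {
                "raw": { "type": "string" }
              }
            }
          }
        }
      }
    }
  }
}
```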
closes#10745
The current implementation is dangerous: it unexpectedly refreshes,
which can quickly cause an unhealthy index (segment explosion). It
can also delete different documents on primary vs replicas, causing
inconsistent replicas.
For 2.0 we will replace this with an optional plugin that does a
scan/scroll search and then issues bulk delete requests.
Closes#10859
The original goal of this cache was to avoid parsing the same query several
times in case several shards are held on the same node. While this might
sound like a good idea, this would only help when parsing the query takes
non-negligible time compared to actually running the query, which should not
be the case.
If we don't have an ElasticsearchException as the wrapper of the
actual cause, we don't render a root cause today. This commit adds
support for third-party exceptions as root causes.
Closes#10836
We deployed our own code to check if directories are closed etc. and
if searchers are still open. Yet, since we don't have a global cluster
anymore we can just use Lucene's internal mechanism to do that. This commit
removes all special handling and uses LuceneTestCase.closeAfterSuite to
fail if certain resources are not closed.
Closes#10853
field_value_factor now takes a default that is used if the document doesn't
have a value for that field. It looks like:
"field_value_factor": {
"field": "popularity",
"missing": 1
}
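For context, a hedged sketch of the snippet above used inside a `function_score` query (index and field names are hypothetical):
```
# Hypothetical usage of the snippet above inside a function_score query:
# documents without "popularity" are scored as if the value were 1.
GET /foo/_search
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "field_value_factor": {
        "field": "popularity",
        "missing": 1
      }
    }
  }
}
```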
Closes#10841
This commit adds support for running with only one node and sets the
maximum number of nodes to 3 by default. If run with tests.nightly=true,
at most 6 nodes are used. This gave a 20% speed improvement compared to
the previous minimum of 3 nodes.
Groovy sandboxing was disabled by default from 1.4.3 on, since we found out that it could be worked around, so it makes little sense to keep it and maintain it.
Closes#10156, Closes#10480
Best effort to print out the search source depending on how it was set on the SearchRequestBuilder; don't call `internalBuilder()` as that causes the content of the request to be wiped.
Closes#5576