We have had lots of test failures due to Groovy scripts making an assertion
trip in
`sun.reflect.generics.reflectiveObjects.WildcardTypeImpl.getLowerBoundASTs`.
See https://issues.apache.org/jira/browse/GROOVY-7528.
Date math index name resolution enables you to search a range of time-series indices, rather than searching all of your time-series indices and filtering the results or maintaining aliases. Limiting the number of indices that are searched reduces the load on the cluster and improves execution performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.
This adds an `ExpressionResolver` implementation that is responsible for resolving date math expressions in index names. This resolver is evaluated before wildcard expressions are evaluated.
The supported format is `<static_name{date_math_expr{date_format|timezone_id}}>`; date math expressions must be enclosed within angle brackets. The `date_format` is optional and defaults to `YYYY.MM.dd`. The `timezone_id` is also optional and defaults to `utc`.
The `{` character can be escaped by placing `\\` before it.
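For illustration only, a minimal sketch (not the actual `ExpressionResolver`, and using `logstash` as a made-up index name) of how an expression like `<logstash-{now/d-2d}>` could resolve under the defaults described above:

```java
// Minimal sketch of date math index name resolution, assuming the default
// YYYY.MM.dd date format (Joda syntax; yyyy.MM.dd in java.time) and the utc
// default time zone. Not the actual resolver implementation.
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

final class DateMathIndexNameSketch {
    public static void main(String[] args) {
        DateTimeFormatter format = DateTimeFormatter.ofPattern("yyyy.MM.dd");
        LocalDate today = LocalDate.now(ZoneOffset.UTC);                    // "now", rounded down to the day
        String resolved = "logstash-" + today.minusDays(2).format(format); // "now/d-2d"
        System.out.println(resolved); // e.g. logstash-2015.07.20
    }
}
```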
Closes #12059
When we rewrite to a MatchNoTermsQuery we were throwing out the boost, which
could lead to funky changes when the query against _all was in a
bool query.
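A minimal sketch of the intended behaviour, not the actual parser change, using Lucene's `MatchNoDocsQuery` and `BoostQuery` for illustration:

```java
// Minimal sketch: keep the original boost when rewriting to a match-nothing
// query, so that a surrounding bool query still sees the intended boost.
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

final class RewriteWithBoostSketch {
    static Query rewriteToMatchNothing(float originalBoost) {
        Query rewritten = new MatchNoDocsQuery();
        if (originalBoost != 1.0f) {
            rewritten = new BoostQuery(rewritten, originalBoost); // do not throw the boost away
        }
        return rewritten;
    }
}
```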
Previously we would write the entire ByteBuffer to the stream to serialise the HDRHistogram, even if not all of it was needed. Now we only write the bytes of the ByteBuffer that were actually written to.
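A minimal sketch of the idea, assuming the HdrHistogram `DoubleHistogram` API and not the actual Elasticsearch serialisation code:

```java
// Minimal sketch: copy only the bytes actually written into the ByteBuffer
// to the output, instead of the buffer's full capacity.
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import org.HdrHistogram.DoubleHistogram;

final class HistogramSerialisationSketch {
    static byte[] serialise(DoubleHistogram histogram) {
        ByteBuffer buffer = ByteBuffer.allocate(histogram.getNeededByteBufferCapacity());
        int written = histogram.encodeIntoByteBuffer(buffer); // bytes actually used
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(buffer.array(), 0, written); // previously: the entire backing array
        return out.toByteArray();
    }
}
```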
We currently test the toQuery method by comparing its return value with the output of createExpectedQuery. This seemed at first like a good deep test for the lucene generated queries, but it ends up causing a lot of code duplication between tests and prod code, which suggests this test is not that useful after all.
This commit removes the requirement for createExpectedQuery. We test the lucene generated queries a bit less, but we still have a testToQuery method in BaseQueryTestCase. This test acts as a smoke test to verify that there are no issues with our toQuery method implementation for each query: it verifies that two equal queries result in the same lucene query. It also verifies that changing the boost value of a query affects the result of toQuery.
Part of the previous createExpectedQuery can be moved to assertLuceneQuery using normal assertions, though we only do so where it makes sense.
This commit also adds support for boost and queryName to SpanMultiTermQueryParser and SpanMultiTermQueryBuilder, and it removes the no-op setters for queryName and boost from QueryFilterBuilder. Instead we simply declare in our test infra that query filter doesn't support boost and queryName, and we disable the tests around these fields in that specific case. Boost and queryName support is also moved down to the QueryBuilder interface.
Closes #12473
This can happen in two ways:
1. The _all field is disabled.
2. There are documents in the index, the _all field is enabled, but there are
no fields in any of the documents.
In both of these cases we now rewrite the query to a MatchNoDocsQuery, which
should be safe because there isn't anything to match.
Closes #12439
When a node discovers shard content on disk which isn't used, we reach out to all other nodes that are supposed to have the shard active. Only once all of those have confirmed the shard as active, the shard has no unassigned copies, *and* no cluster state change has happened in the meantime, do we go and delete the shard folder.
Currently, after removing a shard, the IndicesStore checks with the indices service whether it has any more active shards for this index, and if not, it tries to delete the entire index folder (unless on the master node, where we keep the index metadata around). This is wrong, as both the check and the protections in IndicesService.deleteIndexStore only make sure that there isn't any shard *in use* from that index. However, it may be that we erroneously delete other unused shard copies on disk, without the proper safety guards described above.
Normally this is not a problem, as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., the shard is red), there are race conditions which can cause all shard copies to be deleted.
Instead, we should base the decision to clean up an index folder on checking that the index directory is empty and contains no shards.
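A minimal sketch of that decision, with a hypothetical helper rather than the actual IndicesStore code:

```java
// Minimal sketch: base the clean-up decision on the on-disk contents of the
// index directory, deleting it only when no shard folders are left, rather
// than on this node having no active shards for the index.
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

final class IndexFolderCleanupSketch {
    static boolean canDeleteIndexDirectory(Path indexPath) throws IOException {
        try (DirectoryStream<Path> children = Files.newDirectoryStream(indexPath)) {
            for (Path child : children) {
                if (Files.isDirectory(child)) {
                    return false; // a shard folder is still present, keep the index folder
                }
            }
        }
        return true;
    }
}
```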
This change creates a proper `distribution` module, which contains the packaging for
all four of our current packages:
* zip
* tar.gz
* rpm
* deb
Licenses have moved into the distribution project as well, as have the config/ and bin/ directories
from the core/ project.
The RPM package is now built if rpmbuild exists.
The bats tests have been moved as well.
Also the zip distribution now executes the REST integration tests.
As Robert pointed out on #12465, it has the undesirable property of relying on
the operating system. So it would be better to use a simple rule such as
checking whether the file name starts with a dot.
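A minimal sketch of such a rule, for illustration only:

```java
// Minimal sketch: decide "hidden" from the file name itself instead of
// relying on the operating system (e.g. via Files.isHidden).
import java.nio.file.Path;
import java.nio.file.Paths;

final class HiddenFileRuleSketch {
    static boolean isHidden(Path path) {
        Path name = path.getFileName();
        return name != null && name.toString().startsWith(".");
    }

    public static void main(String[] args) {
        System.out.println(isHidden(Paths.get("/tmp/.hidden-file"))); // true
        System.out.println(isHidden(Paths.get("/tmp/visible-file"))); // false
    }
}
```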