In internal test cluster tests we check that wiping all indices was acknowledged, but in REST tests we didn't.
This aligns the behavior in both kinds of tests.
Relates #45605 which might be caused by unacked deletes that were just slow.
Changes the order of parameters in Geometries from lat, lon to lon, lat
and moves all Geometry classes to the
org.elasticsearch.geometry package.
Backport of #45332 Closes #45048
* Add format parameter to the range queries built for CURRENT_* functions
used in comparison conditions
* Use range queries for date fields equality/non-equality as well (both cases are sketched below).
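A hedged sketch of both cases (index and column names are hypothetical):
```
SELECT * FROM library WHERE release_date > CURRENT_TIMESTAMP() - INTERVAL 1 DAY;
SELECT * FROM library WHERE release_date = CAST('2019-08-16' AS DATE);
```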
(cherry picked from commit c1e81e90f937ee5a002524d632bfce74d76962f9)
* Name each inner_hits section of nested queries differently, then extract and combine the multiple values it generates into a single list.
This also introduces a limitation (its origin is with Elasticsearch,
though) on the sorting capabilities when the sorting is based on the
filtered nested fields: only one of the conditions applied to nested
documents will be used in the nested sorting.
(cherry picked from commit cfc5cf68f6e83b07bb9006986d0903d6be418ec6)
* Switch from using docvalue_fields to extracting values from _source
where applicable. Doing this means parsing the _source and handling
number parsing just like Elasticsearch does when indexing a document.
* This also introduces a minor limitation: alias fields that
are NOT part of a tree of sub-fields can no longer be retrieved.
The field_caps API doesn't shed any light on whether a field is an
alias or not, and at _source parsing time there is no way to know if a
root field is an alias or not. Fields of the type "a.b.c.alias" can be
extracted from docvalue_fields, but only if the field they point to can be
extracted from docvalue_fields. Also, not all fields in a hierarchy of
fields can be evaluated as being an alias.
(cherry picked from commit 8bf8a055e38f00df5f49c8d97f632f69d6e00c2c)
Testclusters currently has its own set of logic for finding different
versions of Elasticsearch, downloading them, and extracting them.
This commit converts testclusters to use the DistributionDownloadPlugin.
By default, we don't check ranges while indexing geo_shapes. As a
result, it is possible to index geoshapes that contain
coordinates outside of the -90..+90 and -180..+180 ranges. Such geoshapes
will currently break the SQL and ML retrieval mechanism. This commit removes
these restrictions from the validator that is used in SQL and ML retrieval.
* Add test for SQL not being available error message in JDBC.
* Add a new qa sub-project that explicitly disables SQL XPack module in Gradle.
(cherry picked from commit 8a1ac8a3a88a325ec9b99963e0fa288c18ee0ee5)
This commit removes some very old test logging annotations that appeared
to be added to investigate test failures that are long since closed. If
these are needed, they can be added back on a case-by-case basis with a
comment associating them to a test failure.
To be consistent with the `search.max_buckets` default setting,
set the hard limit of the PriorityQueue used for in-memory sorting,
when sorting on an aggregate function, to 10000.
Fixes: #43168
(cherry picked from commit 079e012fdea68ea0a7daae078359495047e9c407)
- Previously, when sorting on an aggregate function the bucket
processing ended early when the explicit (LIMIT XXX) or the implicit
limit of 512 was reached. As a consequence, only a subset of the grouping
buckets was processed and the results returned didn't reflect the global
ordering.
- Previously, the priority queue sorting method had an inverse
comparison check and the final response from the priority queue was also
returned in reversed order because of the calls to the `pop()`
method.
Fixes: #42851
(cherry picked from commit 19909edcfdf5792b38c1363b07379783ebd0e6c4)
In AsciiDoc, `subs="attributes,callouts,macros"` options were required
to render `include-tagged::` in a code block.
With elastic/docs#827, Elasticsearch Reference documentation migrated
from AsciiDoc to Asciidoctor.
In Asciidoctor, the `subs="attributes,callouts,macros"` options are no
longer needed to render `include-tagged::` in a code block. This commit
removes those unneeded options.
Resolves #41589
Refactors the WKT and GeoJSON parsers from a utility class into
instantiatable objects. This is a preliminary step in
preparation for moving coordinate validators out of the Geometry
constructors. This should allow us to make validators pluggable.
Allow querying of FROZEN indices both through dedicated SQL grammar
extension:
> SELECT field FROM FROZEN index
and also through driver configuration parameter, namely:
> index.include.frozen: true/false
Fix #39390 Fix #39377
(cherry picked from commit 2445a933915f420c7f51e8505afa0a7978ce6b0f)
Due to a bug in the JTS WKT parser, JTS cannot parse most WKT shapes if
the shape type is written in lower case. For example, `point (1 2)`
causes JTS inside H2GIS to fail on the tr-TR locale as a result of
case-insensitive comparison.
Interval * integer number is a valid operation which previously was
only supported for foldables (literals) and not when a field was
involved (see the sketch after this list). That was because:
1. There was no common type returned for that combination
2. The `BinaryArithmeticOperation` was permitting the multiplication
(called by fold()) but the BinaryArithmeticProcessor didn't allow it
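A minimal sketch of the newly supported form, assuming a hypothetical integer column `days`:
```
SELECT INTERVAL 1 DAY * days AS total FROM test;
```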
Moreover the error message for invalid arithmetic operations was wrong
because of the issue with the overloading methods of
`LoggerMessageFormat.format`.
Fixes: #41239 Fixes: #41200
(cherry picked from commit 91039bab12d3ef27d6eac9cdc891a3b3ad0c694d)
Adds an initial, limited implementation of geo features to SQL. This implementation is based on the [OpenGIS® Implementation Standard for Geographic information - Simple feature access](http://www.opengeospatial.org/standards/sfs), which is the current standard for GIS system implementation. This effort concentrates on the SQL option, AKA ISO 19125-2.
Queries that are supported as a result of this initial implementation:
Metadata commands
- `DESCRIBE table` - returns the correct column types `GEOMETRY` for geo shapes and geo points.
- `SHOW FUNCTIONS` - returns a list that includes supported `ST_` functions
- `SYS TYPES` and `SYS COLUMNS` display correct types `GEO_SHAPE` and `GEO_POINT` for geo shapes and geo points accordingly.
Returning geoshapes and geopoints from Elasticsearch
- `SELECT geom FROM table` - returns the geoshapes and geo_points as libs/geo objects in JDBC or as WKT strings in console.
- `SELECT ST_AsWKT(geom) FROM table;` and `SELECT ST_AsText(geom) FROM table;` - return the geoshapes and geopoints in their WKT representation;
Using geopoints in Elasticsearch
- The following functions will be supported for geopoints in queries, sorting and aggregations: `ST_GeomFromText`, `ST_X`, `ST_Y`, `ST_Z`, `ST_GeometryType`, and `ST_Distance`. In most cases when used in queries, sorting and aggregations, these functions are translated into a script. These functions can be used in the SELECT clause for both geopoints and geoshapes.
- `SELECT * FROM table WHERE ST_Distance(ST_GeomFromText('POINT (1 2)'), point) < 10;` - returns all records for which `point` is located within 10m of `POINT (1 2)`. In this case the WHERE clause is translated into a range query.
Limitations:
Geoshapes cannot be used in queries, sorting and aggregations as part of this initial effort. In order to fully take advantage of geoshapes we would need to have access to geoshape doc values, which is coming in #37206. `ST_Z` cannot be used on geopoints in queries, sorting and aggregations since we don't store altitude in geo_point doc values.
Relates to #29872
Backport of #42031
Thanks to #34071, there is enough information in field caps to infer
the table structure and thus use the same API consistently across the
IndexResolver.
(cherry picked from commit f99946943a3350206b6bca774b2f060f41a787b3)
Implement a more trivial case of the CASE expression which is
expressed as a traditional function with 2 or 3 arguments. e.g.:
IIF(a = 1, 'one', 'many')
IIF(a > 0, 'positive')
Closes: #40917
(cherry picked from commit add02f4f553ad472026dcc1eaa84245a0558a4b0)
Implement the ANSI SQL CASE expression which provides the if/else
functionality common to most programming languages.
The CASE expression can have multiple WHEN branches and becomes a
powerful tool for SQL queries as it can be used in SELECT, WHERE,
GROUP BY, HAVING and ORDER BY clauses.
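For illustration, a sketch of a searched CASE with multiple WHEN branches (table and column names are hypothetical):
```
SELECT emp_no,
       CASE WHEN salary > 100000 THEN 'high'
            WHEN salary > 50000  THEN 'average'
            ELSE 'low'
       END AS salary_level
FROM emp;
```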
Closes: #36200
(cherry picked from commit 8b2577406f47ae60d15803058921d128390af0b6)
The SimplifyConditional rule removes NULL literals from those
functions to simplify their evaluation. This happens in the Optimizer,
where a new instance of the conditional function is generated. Previously,
the dataType was not set properly (it defaulted to DataType.NULL) for
those new instances, and since resolveType() wasn't called again
this resulted in always returning null.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar')
COALESCE(null, 'foo', null, 'bar')
-----------------
null
This issue was not visible before because the tests always used an alias
for the conditional function, which caused resolveType() to be
called, which sets the dataType properly.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar') as c
c
-----------------
foo
(cherry picked from commit c39980a65dd593363f1d8d1b038b26cb0ce02aaf)
* Handle the scenario where assertLogs() is not called from a test method
but the audit rolling file rolls over.
* Use a local boolean variable instead of the static one to account for
assertBusy() code block possibly being called multiple times and having
different execution paths.
(cherry picked from commit 6f642196cbab90079c610097befc794746170df1)
When specifying a limit over an agg sorting, the limit was pushed
down to the grouping, which affected the custom sorting. This commit fixes
that and restricts the limit to the sorting only.
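A hedged example of the affected query shape (hypothetical schema); the LIMIT below should cap only the sorted output, not the number of groups collected:
```
SELECT gender, MAX(salary) AS max_salary
FROM emp
GROUP BY gender
ORDER BY max_salary DESC
LIMIT 5;
```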
Fix #40984
(cherry picked from commit da3726528d9011b05c0677ece6d11558994eccd9)
Although the translation rule was implemented in the `Optimizer`,
the rule was not added to the list of rules to be executed.
Relates to #41195
Follows #37936
(cherry picked from commit f426a339b77af6008d41cc000c9199fe384e9269)
* Replace usages of RandomizedTestingTask with built-in Gradle Test (#40978)
This commit replaces the existing RandomizedTestingTask and supporting code with Gradle's built-in JUnit support via the Test task type. Additionally, the previous workaround to disable all tasks named "test" and create new unit testing tasks named "unitTest" has been removed such that the "test" task now runs unit tests as per the normal Gradle Java plugin conventions.
(cherry picked from commit 323f312bbc829a63056a79ebe45adced5099f6e6)
* Fix forking JVM runner
* Don't bump shadow plugin version
Properly treat '%' as a wildcard for catalog filtering instead of doing
a straight string match.
Table filtering now considers aliases as well.
Add escaping char for LIKE queries with user defined params
Fix monotonicity of ORDINAL_POSITION
Add integration test for SYS COLUMNS - currently running only inside
single_node since the cluster name is test dependent.
Add pattern unescaping for index names
Fix #40582
(cherry picked from commit 8e61b77d3f849661b7175544f471119042fe9551)
* Avoid sharing source directories as it breaks intellij
* Subprojects share main project output classes directory
* Fix jar hell
* Fix sql security with ssl integ tests
* Relax dependency ordering rule so we don't explode on cycles
After the `TIME` SQL data type was introduced, implement the
`CURRENT_TIME/CURTIME` functions similarly to CURRENT_TIMESTAMP,
returning the system's current time (only, without the date part).
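A minimal usage sketch:
```
SELECT CURRENT_TIME AS ct, CURTIME() AS ct2;
```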
Closes: #40468
(cherry picked from commit 9feede781409d0e264ce45951a25b28ff129b187)
TimeProcessor didn't implement `getWriteableName()`, so the one from
the parent was used, which returned the `NAME` of the parent. This
caused `TimeProcessor` objects to be deserialised into
`DateTimeProcessor` objects.
Moreover, added a restriction to run the TIME related integration tests
only in UTC timezone.
Fixes: #40717
(cherry picked from commit cfea348bec20e547df72c415cccd85279accb767)
A full format for a DATETIME would be:
`2019-03-30T10:20:30.123+10:00` which is 29 chars long.
For DATE a full format would be: `2019-03-30T00:00:00.000+10:00`
which is also 29 chars long.
(cherry picked from commit 6be83964ed025528778bca8d35692762e166983b)
Support ANSI SQL's TIME type by introducing a runtime-only
ES SQL time type.
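A brief sketch of the type in use, assuming the standard CAST syntax applies to the new type:
```
SELECT CAST('10:20:30.123' AS TIME) AS time_value;
```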
Closes: #38174
(cherry picked from commit 046ccd4cf0a251b2a3ddff6b072ab539a6711900)
Enable some Ignored integration tests for issues/features that
have already been resolved/implemented.
(cherry picked from commit c23580f477ffc61c5701e14a91006db7bf21a8d4)
Previously, the `getTime(colIdx/colLabel)` and
`getObject(colIdx/colLabel, java.sql.Time.class)` methods were computing
the time from a `ZonedDateTime` by applying a day-in-millis modulo on the
epoch millis of the `ZonedDateTime` object. This is wrong, as we need to
keep the time-related fields at the timezone of the `ZonedDateTime` object
and just set the date info to the epoch date (01/01/1970).
Additionally fixes a testing issue as the original timezone id is converted
to an offset string when parsing the response from the server.
* Document MATCH and QUERY function predicates.
* Polish the functions pages and add a list of functions to the main Functions & Operators page.
(cherry picked from commit 4cec0ae1b962ec7ea011a290aec72740386eb808)
To avoid having to specify each spec by hand (which can miss specs to be
added), the test infrastructure now performs classpath discovery so that
each spec added is automatically considered.
Relates #40358
(cherry picked from commit d0f60b4425c731509aa8ca765d55f563f866ef90)
Previously, `getDate(int columnIdx)/getDate(String columnLabel)`
were using the legacy `java.util.Calendar` instead of the `java.time.*`
classes to reset to the start of day. This resulted in different results
for certain timestamps and timezones when calling
`getDate(col)` vs `getObject(col, java.sql.Date)`.
Now only the methods (that must be implemented due to the JDBC spec)
`getDate(int columnIdx, Calendar cal)/getDate(String columnLabel, Calendar cal)`
are still using `java.util.Calendar` for those conversions.
The same change was applied to
`getTime(int columnIdx)/getTime(String columnLabel)`
and
`getTimestamp(int columnIdx)/getTimestamp(String columnLabel)`
Fixes: #40289
(cherry picked from commit 44560671f18397e0c58e3647732880fcb73a5034)
Previously metric aggregations on date fields would return a double
which caused errors when trying to apply scalar functions on top, e.g.:
```
SELECT YEAR(MAX(date)) FROM test
```
Fixes: #40376
(cherry-picked from commit 41d0a038467fbdbbf67fd9bfdf27623451cae63a)
* Refactor RegexMatch to support both LIKE and RLIKE
* Add integration tests for RLIKE (see the usage sketch below)
* Polish the rest of the tests
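A usage sketch of RLIKE next to LIKE (hypothetical schema):
```
SELECT first_name FROM emp WHERE first_name LIKE 'Alej%';
SELECT first_name FROM emp WHERE first_name RLIKE 'Alej.*';
```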
(cherry picked from commit 7562d6eeeb77c04794002649fe726f4b3a9a398b)
Upgrade JLine to 3.10.0
Switch to using JLine granular jars instead of the uber-one
Remove Jansi dependency (due to errors in closing streams)
Pin JNA dependency to our own artifact
Fix #40239
(cherry picked from commit 9afa65fa80111f3b68c13373c7b6db13c11dde31)
* Rewrite the Round and Truncate functions to take a slightly different
approach to handling the optional parameter in the constructor. Until now,
the optional parameter was considered 0 if the value was missing and the
constructor filled in this value. The current solution is to keep
the optional parameter as null right until the actual calculation is done.
(cherry picked from commit 3e314f8fa4cb322e67949e80857561ce51268726)
* Define an equals method for the Like function so that the pattern used
is considered in the equality check. Whenever the functions are resolved
this check should be used.
(cherry picked from commit 4e5d5af58a140573b8ee19d57c7839db7b779e3b)
Previously, calling getDate()/getTime()/getTimestamp() and getObject()
with the corresponding java.sql class on a column of SQL DATE type from
the JDBC result set would throw an Exception.
When selecting columns of ES type `date` (SQL's DATETIME) the
`FieldHitExtractor` was not using the timezone of the client session
but always resorted to UTC. The same behaviour (UTC only) was
also encountered for grouping keys (`CompositeKeyExtractor`) and
for the First/Last functions on dates (`TopHitsAggExtractor`).
Fixes: #40152
* Take into consideration aliases that can be used as aggregates
and in the ORDER BY element so that the groupings are re-ordered inside
the composite aggregation according to the ORDER BY ordering.
(cherry picked from commit 110c0b90b9cf2e9344ab3f412cfa8f8cd94ad71f)
For cases where fields can have multiple values, allow the behavior to be
customized through a dedicated configuration field.
By default this will be enabled on the drivers so that existing datasets
work instead of throwing an exception.
For regular SQL usage, the behavior is disabled so that the user is aware
of the underlying data.
Fix #39700
(cherry picked from commit 2b351571961f172fd59290ee079126bbd081ceaf)
Previously, JDBC's REST call to the server was always sending UTC
instead of the timezone passed through the connection string/properties.
Moreover, the conversion to java.sql.Date was problematic, as a
calculation on the epoch millis was used to set the time to 00:00:00.000
and the timezone info was lost. This caused the resulting java.sql.Date
object, which always uses the JVM's timezone (no matter what timezone
setting is used in the connection string/properties), to be wrongly created.
Fixes: #39915
When a query is translated into a script-based terms agg where the key has
a date type, it should generate a terms agg with value_type long instead of
date; otherwise the key gets formatted as a string, which confuses the
hit extractor.
Fixes #37042
Enhance ConstantProcessor to properly serialize complex objects
(Intervals) that have their own custom serialization/deserialization
mechanism
Fix #39875
(cherry picked from commit ed8a1f9340673e69a44ea7a89679cadb4762e43d)
* Bundle java in distributions
Setting up a jdk is currently a required external step when installing
elasticsearch. This is particularly problematic for the rpm/deb packages
as installing a jdk in the same package installation command does not
guarantee any order, so must be done in separate steps. Additionally,
JAVA_HOME must be set and often causes problems in selecting a correct
jdk when, for example, the system java is an older unsupported version.
This commit bundles platform specific openjdks into each distribution.
In addition to eliminating the issues above, it also presents future
possible improvements like using jlink to build jdk images only
containing modules that elasticsearch uses.
closes #31845
Queries that require counting of all hits (COUNT(*) on an implicit
group by) now enable accurate hit tracking.
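A hedged sketch of an affected query shape (hypothetical index); with this change the translated search request tracks the total hit count accurately instead of capping it at the default threshold:
```
SELECT COUNT(*) FROM emp;
```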
Fix #37971
(cherry picked from commit 265b637cf6df08986a890b8b5daf012c2b0c1699)
For functions: move checks for `text` fields without underlying `keyword`
fields or with many of them (ambiguity) to the type resolution stage.
For Order By/Group By: move checks to the `Verifier` to catch early
before `QueryTranslator` or execution.
Closes: #38501 Fixes: #35203
Previously, if a text field had an underlying keyword field,
the latter was not used instead of the text field, leading to wrong
results returned by queries filtering with LIKE/RLIKE.
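As a sketch (assuming a hypothetical `text` field `note` that has a `note.keyword` sub-field), a filter like the following should now be executed against the keyword sub-field and return correct results:
```
SELECT note FROM logs WHERE note LIKE 'error%';
```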
Fixes: #39442
* Add "columnar" option for REST requests (but be lenient for non-"plain"
modes) for json, yaml, smile and cbor formats.
* Updated documentation
(cherry picked from commit 5b7e0de237fb514d14a61a347bc669d4b4adbe56)
This changes the name of the internal security index to ".security-7",
but supports indices that were upgraded from earlier versions and use
the ".security-6" name.
In all cases, both ".security-6" and ".security-7" are considered to
be restricted index names regardless of which name is actually in use
on the cluster.
Backport of: #39337
`<expression>::<dataType>` is a simplified alternative syntax to
`CAST(<expression> AS <dataType>)` which exists in PostgreSQL and
provides an improved user experience and possibly more compact
SQL queries.
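A brief sketch of the new spelling next to its CAST equivalent (the `salary` column is hypothetical):
```
SELECT '2019-03-30'::date AS converted;
SELECT salary::double FROM emp;  -- equivalent to CAST(salary AS DOUBLE)
```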
Fixes: #38717
`CreateIndexRequest#source(Map<String, Object>, ... )`, which is used when
deserializing index creation requests, accidentally accepts mappings that are
nested twice under the type key (as described in the bug report #38266).
This in turn causes us to be too lenient in parsing typeless mappings. In
particular, we accept the following index creation request, even though it
should not contain the type key `_doc`:
```
PUT index?include_type_name=false
{
"mappings": {
"_doc": {
"properties": { ... }
}
}
}
```
There is a similar issue for both 'put templates' and 'put mappings' requests
as well.
This PR makes the minimal changes to detect and reject these typed mappings in
requests. It does not address #38266 generally, or attempt a larger refactor
around types in these server-side requests, as I think this should be done at a
later time.
* Add checks for the restriction that Grouping functions must be placed inside GROUP BY
* Fixed a bug where GROUP BY HISTOGRAM (not using an alias) wasn't recognized
properly in the Verifier due to functions equality not working correctly (both cases are sketched below).
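Both points sketched (hypothetical schema); the second form, grouping by the histogram expression itself rather than by an alias, is the one the Verifier previously mishandled:
```
SELECT HISTOGRAM(salary, 5000) AS h, COUNT(*) FROM emp GROUP BY h;
SELECT HISTOGRAM(salary, 5000), COUNT(*) FROM emp GROUP BY HISTOGRAM(salary, 5000);
```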
Introduce client-side sorting of groups based on aggregate
functions. To allow this, the Analyzer has been extended to push the
aggregate function down to the underlying Aggregate, and the Querier has
been extended to identify the case and consume the results in order and
sort them based on the given columns.
The underlying QueryContainer has been slightly modified to allow a view
of the underlying values being extracted as the columns used for sorting
might not be requested by the user.
The PR also adds minor tweaks, mainly related to tree output.
Close #35118
FIRST and LAST can be used with one argument and work similarly to MIN
and MAX but they are implemented using a Top Hits aggregation and
therefore can also operate on keyword fields. When a second argument is
provided then they return the first/last value of the first arg when its
values are ordered ascending/descending (respectively) by the values of
the second argument. Currently because of the usage of a Top Hits
aggregation FIRST and LAST cannot be used in the HAVING clause of a
GROUP BY query to filter on the results of the aggregation.
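A usage sketch of the one- and two-argument forms (hypothetical schema):
```
SELECT gender, FIRST(first_name) FROM emp GROUP BY gender;
SELECT gender, LAST(first_name, birth_date) FROM emp GROUP BY gender;
```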
Closes: #35639
Doc-value fields now return a value that is based on the mappings rather than
the script implementation by default.
This deprecates the special `use_field_mapping` docvalue format, which was
added in #29639 only to ease the transition to 7.x and is no longer
necessary in 7.0.
When the arguments of PERCENTILE and PERCENTILE_RANK can be folded,
the `ConstantFolding` rule kicks in and calls the `replaceChildren()`
method on `InnerAggregate`, which is created from the aggregation rules
of the `Optimizer`. `InnerAggregate` in turn cannot implement the method,
as the logic of creating a new `InnerAggregate` instance from a list of
`Expression`s resides in the Optimizer. So, instead, `ConstantFolding`
should be applied before any of the aggregation-related rules.
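A hedged sketch of a query shape that triggers the issue (hypothetical schema): the arithmetic in the second argument folds to a constant, which previously caused `replaceChildren()` to be called on `InnerAggregate`:
```
SELECT gender, PERCENTILE(salary, 90 + 7.5) FROM emp GROUP BY gender;
```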
Fixes: #37099
This commit removes the Index Audit Output type, following its deprecation
in 6.7 by 8765a31d4e6770. It also adds the migration notice (settings notice).
In general, the problem with the index audit output is that event indexing
can be slower than the rate with which audit events are generated,
especially during the daily rollovers or the rolling cluster upgrades.
In this situation audit events will be lost, which is a terrible failure
situation for an audit system.
Besides the settings under the `xpack.security.audit.index` namespace, the
`xpack.security.audit.outputs` setting has also been deprecated and will be
removed in 7. Although explicitly configuring the logfile output does not touch
any deprecation bits, this setting is made redundant in 7, so this PR deprecates
it as well.
Relates #29881