This PR is a backport of #43214 from v8.0.0
A number of the aggregation base classes have an abstract doEquals() and doHashCode() (e.g. InternalAggregation.java, AbstractPipelineAggregationBuilder.java).
Theoretically this is so the sub-classes can add to the equals/hashCode and don't need to worry about calling super.equals(). In practice, it's mostly just confusing/inconsistent. And if there are more than two levels, we end up with situations like InternalMappedSignificantTerms which has to call super.doEquals() which defeats the point of having these overridable methods.
This PR removes the do variants and just uses equals/hashCode, calling the super implementations when necessary.
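For illustration, a minimal sketch of the resulting pattern (class names are made up, not the actual aggregation hierarchy): subclasses override equals()/hashCode() directly and call the super implementations themselves.

```java
import java.util.Objects;

// Hypothetical classes illustrating the pattern; not the actual aggregation hierarchy.
class BaseAgg {
    private final String name;

    BaseAgg(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null || getClass() != obj.getClass()) return false;
        return Objects.equals(name, ((BaseAgg) obj).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);
    }
}

class MappedAgg extends BaseAgg {
    private final int size;

    MappedAgg(String name, int size) {
        super(name);
        this.size = size;
    }

    @Override
    public boolean equals(Object obj) {
        // Call super.equals() explicitly instead of relying on an abstract doEquals() hook.
        if (super.equals(obj) == false) return false;
        return size == ((MappedAgg) obj).size;
    }

    @Override
    public int hashCode() {
        return Objects.hash(super.hashCode(), size);
    }
}
```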
To be consistent with the `search.max_buckets` default setting,
set the hard limit of the PriorityQueue used for in-memory sorting
(when sorting on an aggregate function) to 10000.
Fixes: #43168
(cherry picked from commit 079e012fdea68ea0a7daae078359495047e9c407)
- Previously, when sorting on an aggregate function, the bucket
processing ended early when the explicit (LIMIT XXX) or the implicit
limit of 512 was reached. As a consequence, only a subset of the grouping
buckets was processed and the results returned didn't reflect the global
ordering.
- Previously, the priority queue sorting method had an inverse
comparison check and the final response from the priority queue was also
returned in reverse order because of the calls to the `pop()`
method.
Fixes: #42851
(cherry picked from commit 19909edcfdf5792b38c1363b07379783ebd0e6c4)
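As a rough illustration of the ordering issue (generic Java, not the actual SQL sorter): draining a priority queue via pop()/poll() yields elements in comparator order, so a response needed in the opposite order has to be reversed while draining.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.PriorityQueue;

public class PopOrder {
    public static void main(String[] args) {
        // Min-heap keeping the smallest values; poll() returns ascending order.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int v : new int[] { 5, 1, 4, 2, 3 }) {
            pq.add(v);
        }
        // Draining into a stack reverses the order, yielding 5, 4, 3, 2, 1.
        Deque<Integer> result = new ArrayDeque<>();
        while (pq.isEmpty() == false) {
            result.push(pq.poll());
        }
        System.out.println(result); // [5, 4, 3, 2, 1]
    }
}
```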
The description field of xpack featuresets is optionally part of the
xpack info api, when using the verbose flag. However, this information
is unnecessary, as it is better left for documentation (and the existing
descriptions hardly describe anything meaningful). This commit removes the
description field from feature sets.
* Take into consideration a wider range of Numbers when extracting the
values from source, more specifically - BigInteger and BigDecimal.
(cherry picked from commit 561b8d73dd7b03c50242e4e3f0128b2142959176)
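A hedged sketch of the idea (illustrative names, not the actual extractor code): a value pulled from _source may arrive as any Number subclass, including BigInteger and BigDecimal, and those need to be handled rather than falling through.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

final class NumberExtraction {
    // Illustrative only: accept any Number pulled from source.
    static Object fromSource(Object value) {
        if (value instanceof BigInteger || value instanceof BigDecimal) {
            return value; // previously these were not taken into consideration
        }
        if (value instanceof Number) {
            return ((Number) value).doubleValue();
        }
        throw new IllegalArgumentException("not a number: " + value);
    }
}
```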
In AsciiDoc, `subs="attributes,callouts,macros"` options were required
to render `include-tagged::` in a code block.
With elastic/docs#827, Elasticsearch Reference documentation migrated
from AsciiDoc to Asciidoctor.
In Asciidoctor, the `subs="attributes,callouts,macros"` options are no
longer needed to render `include-tagged::` in a code block. This commit
removes those unneeded options.
Resolves #41589
Refactors the WKT and GeoJSON parsers from a utility class into
instantiable objects. This is a preliminary step in
preparation for moving coordinate validators out of Geometry
constructors. This should allow us to make validators pluggable.
Allow querying of FROZEN indices both through dedicated SQL grammar
extension:
> SELECT field FROM FROZEN index
and also through driver configuration parameter, namely:
> index.include.frozen: true/false
Fix #39390
Fix #39377
(cherry picked from commit 2445a933915f420c7f51e8505afa0a7978ce6b0f)
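A usage sketch over JDBC (host and index name are made up; the `index.include.frozen` property and the FROZEN grammar extension are the ones described above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class FrozenQuery {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Driver configuration parameter enabling frozen indices for all queries.
        props.setProperty("index.include.frozen", "true");
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200", props);
             Statement st = con.createStatement();
             // Alternatively, opt in per query through the grammar extension:
             ResultSet rs = st.executeQuery("SELECT field FROM FROZEN index")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```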
Due to a bug in the JTS WKT parser, JTS cannot parse most WKT shapes if
the shape type is written in lower case. For example, `point (1 2)`
causes JTS inside H2GIS to fail on the tr-TR locale as a result of
case-insensitive comparison.
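For reference, the underlying pitfall can be reproduced in plain Java: locale-sensitive case conversion of the ASCII keyword `point` changes under tr-TR because of the Turkish dotted/dotless 'i' rules.

```java
import java.util.Locale;

public class TurkishLocale {
    public static void main(String[] args) {
        String type = "point";
        // Locale-sensitive uppercasing: 'i' becomes the dotted capital 'İ' in Turkish.
        System.out.println(type.toUpperCase(new Locale("tr", "TR"))); // POİNT
        // Locale-insensitive uppercasing keeps the plain ASCII 'I'.
        System.out.println(type.toUpperCase(Locale.ROOT));            // POINT
        // So a naive case-insensitive keyword match fails on tr-TR:
        System.out.println("POINT".equals(type.toUpperCase(new Locale("tr", "TR")))); // false
    }
}
```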
Interval * integer number is a valid operation which previously was
only supported for foldables (literals) and not when a field was
involved. That was because:
1. There was no common type returned for that combination
2. The `BinaryArithmeticOperation` was permitting the multiplication
(called by fold()) but the BinaryArithmeticProcessor didn't allow it
Moreover, the error message for invalid arithmetic operations was wrong
because of an issue with the overloaded methods of
`LoggerMessageFormat.format`.
Fixes: #41239
Fixes: #41200
(cherry picked from commit 91039bab12d3ef27d6eac9cdc891a3b3ad0c694d)
Adds an initial, limited implementation of geo features to SQL. This implementation is based on the [OpenGIS® Implementation Standard for Geographic information - Simple feature access](http://www.opengeospatial.org/standards/sfs), which is the current standard for GIS system implementation. This effort concentrates on the SQL option, AKA ISO 19125-2.
Queries that are supported as a result of this initial implementation:
Metadata commands
- `DESCRIBE table` - returns the correct column types `GEOMETRY` for geo shapes and geo points.
- `SHOW FUNCTIONS` - returns a list that includes supported `ST_` functions
- `SYS TYPES` and `SYS COLUMNS` display correct types `GEO_SHAPE` and `GEO_POINT` for geo shapes and geo points accordingly.
Returning geoshapes and geopoints from elasticsearch
- `SELECT geom FROM table` - returns the geoshapes and geo_points as libs/geo objects in JDBC or as WKT strings in console.
- `SELECT ST_AsWKT(geom) FROM table;` and `SELECT ST_AsText(geom) FROM table;` - return the geoshapes and geopoints in their WKT representation;
Using geopoints in elasticsearch
- The following functions will be supported for geopoints in queries, sorting and aggregations: `ST_GeomFromText`, `ST_X`, `ST_Y`, `ST_Z`, `ST_GeometryType`, and `ST_Distance`. In most cases when used in queries, sorting and aggregations, these functions are translated into scripts. These functions can be used in the SELECT clause for both geopoints and geoshapes.
- `SELECT * FROM table WHERE ST_Distance(ST_GeomFromText('POINT (1 2)'), point) < 10;` - returns all records for which `point` is located within 10m from the `POINT (1 2)`. In this case the WHERE clause is translated into a range query.
Limitations:
Geoshapes cannot be used in queries, sorting and aggregations as part of this initial effort. In order to fully take advantage of geoshapes we would need to have access to geoshape doc values, which is coming in #37206. `ST_Z` cannot be used on geopoints in queries, sorting and aggregations since we don't store altitude in geo_point doc values.
Relates to #29872
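A hedged end-to-end sketch over JDBC (table and column names are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GeoQuery {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200");
             Statement st = con.createStatement();
             // WKT text form of the geometry; the distance filter translates to a range query.
             ResultSet rs = st.executeQuery(
                 "SELECT ST_AsWKT(point) FROM cities "
                 + "WHERE ST_Distance(ST_GeomFromText('POINT (1 2)'), point) < 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // e.g. POINT (1.0 2.0)
            }
        }
    }
}
```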
Backport of #42031
The CircuitBreaker was introduced as a means of preventing a
`StackOverflowError` during the build of the AST by the parser.
The ANTLR4 grammar causes a weird behaviour for a Parser Listener.
The `enterEveryRule()` method is often called with a different parsing
context than the respective `exitEveryRule()`. This makes it difficult
to keep track of the tree's depth, and a custom Map was used in an
attempt to match the contexts as they are encountered during `enter`
and during `exit` of the rules.
This approach had 2 important drawbacks:
1. It's hard to maintain this custom Map as the grammar changes.
2. The CircuitBreaker could often lead to false positives which caused
valid queries to return an Exception and prevented them from executing.
So, this removes the CircuitBreaker completely and replaces it with
simple handling of the `StackOverflowError`.
Fixes: #41471
(cherry picked from commit 1559a8e2dbd729138b52e89b7e80264c9f4ad1e7)
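In outline, the replacement looks roughly like the sketch below (illustrative code, not the actual SqlParser): guard the ANTLR parse call and convert a `StackOverflowError` into a regular client-facing exception.

```java
// Sketch only: the shape of the replacement, not the actual SqlParser code.
final class ParseGuard {
    static <T> T parse(java.util.function.Supplier<T> antlrParse, String sql) {
        try {
            return antlrParse.get();
        } catch (StackOverflowError e) {
            // Deeply nested input blew the stack while building the AST;
            // surface it as a client error instead of killing the node.
            throw new IllegalArgumentException("SQL statement too large to parse: " + sql);
        }
    }
}
```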
Thanks to #34071, there is enough information in field caps to infer
the table structure and thus use the same API consistently across the
IndexResolver.
(cherry picked from commit f99946943a3350206b6bca774b2f060f41a787b3)
Implement a simpler variant of the CASE expression, which is
expressed as a traditional function with 2 or 3 arguments. E.g.:
IIF(a = 1, 'one', 'many')
IIF(a > 0, 'positive')
Closes: #40917
(cherry picked from commit add02f4f553ad472026dcc1eaa84245a0558a4b0)
Implement the ANSI SQL CASE expression which provides the if/else
functionality common to most programming languages.
The CASE expression can have multiple WHEN branches and becomes a
powerful tool for SQL queries as it can be used in SELECT, WHERE,
GROUP BY, HAVING and ORDER BY clauses.
Closes: #36200
(cherry picked from commit 8b2577406f47ae60d15803058921d128390af0b6)
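A usage sketch over JDBC (the `emp` table and `emp_no` column are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CaseExpression {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200");
             Statement st = con.createStatement();
             // Multiple WHEN branches with an optional ELSE, usable in SELECT/WHERE/etc.
             ResultSet rs = st.executeQuery(
                 "SELECT CASE WHEN emp_no = 1 THEN 'one' "
                 + "           WHEN emp_no = 2 THEN 'two' "
                 + "           ELSE 'many' END AS label "
                 + "FROM emp")) {
            while (rs.next()) {
                System.out.println(rs.getString("label"));
            }
        }
    }
}
```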
The SimplifyConditional rule removes NULL literals from those
functions to simplify their evaluation. This happens in the Optimizer
and a new instance of the conditional function is generated. Previously,
the dataType was not set properly (it defaulted to DataType.NULL) for
those new instances, and since resolveType() wasn't called again
the function always returned null.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar')
COALESCE(null, 'foo', null, 'bar')
-----------------
null
This issue was not visible before because the tests always used an alias
for the conditional function which caused the resolveType() to be
called which sets the dataType properly.
E.g.:
SELECT COALESCE(null, 'foo', null, 'bar') as c
c
-----------------
foo
(cherry picked from commit c39980a65dd593363f1d8d1b038b26cb0ce02aaf)
Fix bug in predicate subtraction that caused the evaluation to be
skipped on the first mismatch instead of evaluating the whole list. In
some cases this caused not only an incorrect result but one that kept on
growing, causing the engine to bail.
Fix #40835
(cherry picked from commit bd2b33d6eaca616a5acd846204e2d12f905854d4)
* Handle the scenario where assertLogs() is not called from a test method
but the audit rolling file rolls over.
* Use a local boolean variable instead of the static one to account for
assertBusy() code block possibly being called multiple times and having
different execution paths.
(cherry picked from commit 6f642196cbab90079c610097befc794746170df1)
CURRENT_DATE/CURRENT_TIME/CURRENT_TIMESTAMP can be used as SQL keywords
(without parentheses) and therefore there is a special rule in the
grammar to accommodate this.
Previously, this rule was also catching the parenthesised versions of
those functions, not allowing the {fn <functionName>()} syntax to be used. E.g.:
{fn current_time(2)} or {fn current_timestamp()}
Now, the grammar rule catches only the keyword versions and all the parenthesised
versions go through the normal function resolution. As a consequence, the validation
of the precision is moved from the parser level (ExpressionBuilder) to the function
implementations.
Fixes: #41240
(cherry picked from commit bfbc9f140144b5a35aa29008b58bf58074419853)
When specifying a limit over an agg sorting, the limit was pushed
down to the grouping, which affected the custom sorting. This commit fixes
that and restricts the limit to the sorting only.
Fix #40984
(cherry picked from commit da3726528d9011b05c0677ece6d11558994eccd9)
Although the translation rule was implemented in the `Optimizer`,
the rule was not added to the list of rules to be executed.
Relates to #41195
Follows #37936
(cherry picked from commit f426a339b77af6008d41cc000c9199fe384e9269)
Yet another improvement to SYS TABLES on differentiating between table
types specified as '%' and '' while maintaining legacy support for null
Fix #40775
(cherry picked from commit 6dbca5edd335eb1da8e7825389a15e5fe45397d4)
As empty string has a certain meaning, the JDBC driver returns an empty
set instead for better client compatibility.
Fix #41028
(cherry picked from commit 4cbafa585b7a514eb6c156606dd516324cd3980a)
* Replace usages of RandomizedTestingTask with the built-in Gradle Test task (#40978)
This commit replaces the existing RandomizedTestingTask and supporting code with Gradle's built-in JUnit support via the Test task type. Additionally, the previous workaround to disable all tasks named "test" and create new unit testing tasks named "unitTest" has been removed such that the "test" task now runs unit tests as per the normal Gradle Java plugin conventions.
(cherry picked from commit 323f312bbc829a63056a79ebe45adced5099f6e6)
* Fix forking JVM runner
* Don't bump shadow plugin version
Properly treat '%' as a wildcard for catalog filtering instead of doing
a straight string match.
Table filtering now considers aliases as well.
Add an escaping char for LIKE queries with user-defined params
Fix monotonicity of ORDINAL_POSITION
Add integration test for SYS COLUMNS - currently running only inside
single_node since the cluster name is test dependent.
Add pattern unescaping for index names
Fix #40582
(cherry picked from commit 8e61b77d3f849661b7175544f471119042fe9551)
Move verification of arguments for Conditional functions and IN
from `Verifier` to the `resolveType()` method of the functions.
(cherry picked from commit 241644aac57baee1eb128b993ee410c7d08172a5)
* Avoid sharing source directories as it breaks intellij
* Subprojects share main project output classes directory
* Fix jar hell
* Fix sql security with ssl integ tests
* Relax dependency ordering rule so we don't explode on cycles
Changed the JDBC metadata to return empty result sets instead of
throwing SQLFeatureNotSupported, as it seems a safer and more compatible
approach for consumers.
Fix #40533
(cherry picked from commit ef2d2527c2b5140556fd477e7ff6ea36966684da)
- Remove superfluous methods that are already
defined in superclasses.
- Improve tests for null folding on conditionals
(cherry picked from commit 67f9404f5004362e569353d1e950ffe5d7a9ab6e)
After the `TIME` SQL data type was introduced, implement the
`CURRENT_TIME/CURTIME` functions similarly to CURRENT_TIMESTAMP,
returning the system's current time (only, without the date part).
Closes: #40468
(cherry picked from commit 9feede781409d0e264ce45951a25b28ff129b187)
TimeProcessor didn't implement `getWriteableName()`, so the one from
the parent was used, which returned the `NAME` of the parent. This
caused `TimeProcessor` objects to be deserialised into
`DateTimeProcessor` objects.
Moreover, added a restriction to run the TIME-related integration tests
only in the UTC timezone.
Fixes: #40717
(cherry picked from commit cfea348bec20e547df72c415cccd85279accb767)
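A minimal sketch of the bug's shape (illustrative classes, not the actual ES code): a subclass that inherits `getWriteableName()` reports the parent's `NAME`, so the registry rebuilds the wrong type on deserialisation.

```java
// Illustrative sketch of the bug's shape, not the actual ES classes.
abstract class BaseDateTimeProcessor {
    public static final String NAME = "dt";

    public String getWriteableName() {
        return NAME; // inherited as-is, every subclass deserialises as "dt"
    }
}

class TimeProcessor extends BaseDateTimeProcessor {
    public static final String NAME = "time";

    @Override
    public String getWriteableName() {
        // The fix: return this class's own name so the registry
        // reconstructs a TimeProcessor, not a DateTimeProcessor.
        return NAME;
    }
}
```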
A full format for a DATETIME would be:
`2019-03-30T10:20:30.123+10:00` which is 29 chars long.
For DATE a full format would be: `2019-03-30T00:00:00.000+10:00`
which is also 29 chars long.
(cherry picked from commit 6be83964ed025528778bca8d35692762e166983b)
Support ANSI SQL's TIME type by introducing a runtime-only
ES SQL time type.
Closes: #38174
(cherry picked from commit 046ccd4cf0a251b2a3ddff6b072ab539a6711900)
* Have LIKE and RLIKE only use term-level queries (wildcard and regexp respectively). They
already work only with exact fields, thus being in line with how
SQL works in general (what you index is what you search on).
(cherry picked from commit 1bba887d481b49db231a1442922f1813952dcc67)
Enable some Ignored integration tests for issues/features that
have already been resolved/implemented.
(cherry picked from commit c23580f477ffc61c5701e14a91006db7bf21a8d4)
Previously, an expression like `10 + 2::long` would be interpreted
as `CAST(10 + 2 AS LONG)` instead of `10 + CAST(2 AS LONG)`.
(cherry picked from commit e34cc2f38b1477e78788ee377938f42cc47187c7)
SYS TABLES meta command has been improved to better adhere to the ODBC
spec in particular with regards to the handling of enumerations (and
the differences between '%', null and '' (empty string))
Fix #40348
(cherry picked from commit e3070615000228c283d17ce8d182b44f1450a5d5)
Previously, the `getTime(colIdx/colLabel)` and
`getObject(colIdx/colLabel, java.sql.Time.class)` methods were computing
the time from a `ZonedDateTime` by applying a day-in-millis modulo on the epoch millis
of the `ZonedDateTime` object. This is wrong, as we need to keep the time
related fields at the timezone of the `ZonedDateTime` object and just
set the date info to the epoch date (01/01/1970).
Additionally fixes a testing issue as the original timezone id is converted
to an offset string when parsing the response from the server.
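A sketch of the two computations in plain `java.time` (the timestamp is made up; not the driver's actual code):

```java
import java.time.LocalTime;
import java.time.ZonedDateTime;

public class TimeExtraction {
    public static void main(String[] args) {
        ZonedDateTime zdt = ZonedDateTime.parse("2019-03-30T10:20:30.123+10:00");

        // Wrong (old behavior): a day-in-millis modulo on the epoch millis drops the
        // offset, yielding the UTC time-of-day (00:20:30.123) instead of the zoned one.
        long dayMillis = 24L * 60 * 60 * 1000;
        long wrongMillisOfDay = Math.floorMod(zdt.toInstant().toEpochMilli(), dayMillis);
        System.out.println(LocalTime.ofNanoOfDay(wrongMillisOfDay * 1_000_000)); // 00:20:30.123

        // Right (new behavior): keep the time fields at the ZonedDateTime's own zone;
        // only the date part gets pinned to the epoch date (1970-01-01).
        System.out.println(zdt.toLocalTime()); // 10:20:30.123
    }
}
```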
* Document MATCH and QUERY function predicates.
* Polish the functions pages and add a list of functions to the main Functions & Operators page.
(cherry picked from commit 4cec0ae1b962ec7ea011a290aec72740386eb808)
To avoid having to specify each spec by hand (which can miss specs to be
added), the test infrastructure now performs classpath discovery so that
each spec added is automatically considered.
Relates #40358
(cherry picked from commit d0f60b4425c731509aa8ca765d55f563f866ef90)
Previously, `getDate(int columnIdx)/getDate(String columnLabel)`
were using the legacy `java.util.Calendar` instead of the `java.time.*`
classes to reset to the start of day. This resulted in different results
for certain timestamps and timezones when calling
`getDate(col)` vs `getObject(col, java.sql.Date)`.
Now only the methods (that must be implemented due to the JDBC spec)
`getDate(int columnIdx, Calendar cal)/getDate(String columnLabel, Calendar cal)`
are still using `java.util.Calendar` for those conversions.
The same change was applied to
`getTime(int columnIdx)/getTime(String columnLabel)`
and
`getTimestamp(int columnIdx)/getTimestamp(String columnLabel)`
Fixes: #40289
(cherry picked from commit 44560671f18397e0c58e3647732880fcb73a5034)
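A contrast sketch in plain Java (not the driver's code; the legacy path uses an explicit UTC Calendar here for deterministic output, where the old code used the JVM default zone):

```java
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Calendar;
import java.util.TimeZone;

public class StartOfDay {
    public static void main(String[] args) {
        ZonedDateTime zdt = ZonedDateTime.parse("2019-03-30T01:20:30+10:00");

        // java.time reset: start of day in the value's own timezone.
        ZonedDateTime viaJavaTime = zdt.with(LocalTime.MIDNIGHT);
        System.out.println(viaJavaTime); // 2019-03-30T00:00+10:00

        // Legacy Calendar reset in another zone (UTC here; the old code used the
        // JVM default), which can land on a different calendar day entirely.
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone(ZoneId.of("UTC")));
        cal.setTimeInMillis(zdt.toInstant().toEpochMilli());
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        System.out.println(cal.toInstant()); // 2019-03-29T00:00:00Z - a day earlier
    }
}
```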
Previously, metric aggregations on date fields would return a double,
which caused errors when trying to apply scalar functions on top, e.g.:
```
SELECT YEAR(MAX(date)) FROM test
```
Fixes: #40376
(cherry-picked from commit 41d0a038467fbdbbf67fd9bfdf27623451cae63a)
* Refactor RegexMatch to support both LIKE and RLIKE
* Add integration tests for RLIKE
* Polish the rest of tests
(cherry picked from commit 7562d6eeeb77c04794002649fe726f4b3a9a398b)
Upgrade JLine to 3.10.0
Switch to using JLine granular jars instead of the uber-one
Remove Jansi dependency (due to errors in closing streams)
Pin JNA dependency to our own artifact
Fix #40239
(cherry picked from commit 9afa65fa80111f3b68c13373c7b6db13c11dde31)
Extend CAST to support all data type notations (whether SQL or ES
specific)
Fix #40282
(cherry picked from commit eb2ee8a344da946920598839a5db76c8bb9bc3fe)
* Rewrite Round and Truncate functions to have a slightly different
approach to handling the optional parameter in the constructor. Until now
the optional parameter was considered 0 if the value was missing and the
constructor was filling in this value. The current solution is to have
the optional parameter as null right until the actual calculation is done.
(cherry picked from commit 3e314f8fa4cb322e67949e80857561ce51268726)
* Define an equals method for the Like function so that the pattern used
is considered in the equality check. Whenever the functions are resolved,
this check should be used.
(cherry picked from commit 4e5d5af58a140573b8ee19d57c7839db7b779e3b)
Previously, calling getDate()/getTime()/getTimestamp() and getObject()
with the corresponding java.sql class on a column of SQL DATE type from
the JDBC result set would throw an Exception.
Previously, when a trivial plain `SELECT` or a trivial `SELECT` with
aggregations also had an `ORDER BY` or a `LIMIT` or both, the
optimization to convert it to a `LocalRelation` was skipped, resulting
in an exception being thrown. E.g.:
```
SELECT 'foo' FROM test LIMIT 10
```
or
```
SELECT 'foo' FROM test GROUP BY 1 ORDER BY 1
```
Fixes: #40211
When selecting columns of ES type `date` (SQL's DATETIME) the
`FieldHitExtractor` was not using the timezone of the client session
but always resorted to UTC. The same behaviour (UTC only) was
encountered also for grouping keys (`CompositeKeyExtractor`) and
for First/Last functions on dates (`TopHitsAggExtractor`).
Fixes: #40152
* Take into consideration aliases that can be used as aggregates
and in the ORDER BY element so that the groupings are re-ordered inside
the composite aggregation according to the ORDER BY ordering.
(cherry picked from commit 110c0b90b9cf2e9344ab3f412cfa8f8cd94ad71f)
For cases where fields can have multiple values, allow the behavior to be
customized through a dedicated configuration field.
By default this will be enabled on the drivers so that existing datasets
work instead of throwing an exception.
For regular SQL usage, the behavior is false so that the user is aware
of the underlying data.
Fix #39700
(cherry picked from commit 2b351571961f172fd59290ee079126bbd081ceaf)
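For instance, on the JDBC driver the dedicated field can be set as a connection property; the property name below (`field.multi.value.leniency`) is quoted from memory of the driver docs, so treat it as an assumption.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class MultiValueLeniency {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Lenient: return a single value of a multi-value field instead of failing.
        // Defaults to true on the drivers, per the description above.
        props.setProperty("field.multi.value.leniency", "true");
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200", props)) {
            System.out.println(con.isValid(5));
        }
    }
}
```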
Since other classes besides intervals can be serialized as part of
the Cursor, the getNamedWritables method should be moved from Intervals
to a more generic class Literals.
Relates to #39973
Previously, JDBC's REST call to the server was always sending UTC
instead of the timezone passed through connection string/properties.
Moreover, the conversion to java.sql.Date was problematic, as a
calculation on the epoch millis was used to set the time to 00:00:00.000
and the timezone info was lost. This caused the resulting java.sql.Date
object (which always uses the JVM's timezone, no matter what timezone
setting is used in the connection string/properties) to be created wrongly.
Fixes: #39915
When a query is translated into a scripted terms agg where the key has a date
type, it should generate a terms agg with value_type long instead of
date, otherwise the key gets formatted as a string, which confuses the
hit extractor.
Fixes #37042
Painless allows ZonedDateTime objects to be passed natively to scripts,
which creates problematic translate queries, as the ZonedDateTime is
passed as a string instead.
Wrap this with a dedicated method to perform the conversion.
Fix #39877
(cherry picked from commit 4957cad5bda77257d10430ac102e93f5e062148a)
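A hedged sketch of the wrapper's shape (the class and method names are illustrative): carry the instant as epoch millis plus a zone id and rebuild the ZonedDateTime, rather than letting it be stringified into the query.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Illustrative only: rebuild the ZonedDateTime from primitives a script can carry.
final class DateUtils {
    static ZonedDateTime asDateTime(long millis, String zoneId) {
        return ZonedDateTime.ofInstant(Instant.ofEpochMilli(millis), ZoneId.of(zoneId));
    }
}
```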
Enhance ConstantProcessor to properly serialize complex objects
(Intervals) that have their own custom serialization/deserialization
mechanism
Fix #39875
(cherry picked from commit ed8a1f9340673e69a44ea7a89679cadb4762e43d)
* Bundle java in distributions
Setting up a jdk is currently a required external step when installing
elasticsearch. This is particularly problematic for the rpm/deb packages
as installing a jdk in the same package installation command does not
guarantee any order, so must be done in separate steps. Additionally,
JAVA_HOME must be set and often causes problems in selecting a correct
jdk when, for example, the system java is an older unsupported version.
This commit bundles platform specific openjdks into each distribution.
In addition to eliminating the issues above, it also presents future
possible improvements like using jlink to build jdk images only
containing modules that elasticsearch uses.
closes #31845
MIN/MAX on strings are supported and are implemented with the
TopAggs FIRST/LAST respectively, but they cannot operate on
`text` fields without underlying `keyword` fields, as those are inexact.
Follows: #39427
Fix bug in IndexResolver that caused conflicts in multi-field types to
be ignored (causing the query to fail later on due to mapping
conflicts).
The issue was caused by the multi-field which forced the parent creation
before checking its validity across mappings
Fix #39547
(cherry picked from commit 4e4fe289f90b9b5eae09072d54903701a3128696)
Queries that require counting of all hits (COUNT(*) on an implicit
group by) now enable accurate hit tracking.
Fix #37971
(cherry picked from commit 265b637cf6df08986a890b8b5daf012c2b0c1699)
* SYS COLUMNS will skip UNSUPPORTED field types in ODBC and JDBC, as well.
NESTED and OBJECT types were already skipped in ODBC mode, now they are
skipped in JDBC mode, as well.
(cherry picked from commit 9e0df64b2d36c9069dfa506570468f0522c86417)
For functions: move checks for `text` fields without underlying `keyword`
fields or with many of them (ambiguity) to the type resolution stage.
For Order By/Group By: move checks to the `Verifier` to catch early
before `QueryTranslator` or execution.
Closes: #38501
Fixes: #35203
Backport of #39350
Contains the following:
* LUCENE-8635: Move terms dictionary off-heap for non-primary-key fields in `MMapDirectory`
* LUCENE-8292: `TermsEnum` is fully abstract
* LUCENE-8679: Return WITHIN in `EdgeTree#relateTriangle` only when polygon and triangle share one edge
* LUCENE-8676: Nori tokenizer deals correctly with large buffers
* LUCENE-8697: `GraphTokenStreamFiniteStrings` better handles side paths with gaps
* LUCENE-8664: Add `equals` and `hashCode` to `TotalHits`
* LUCENE-8660: `TopDocsCollector` returns accurate hit counts if the total equals the threshold
* LUCENE-8654: `Polygon2D#relateTriangle` fix for when the polygon is inside the triangle
* LUCENE-8645: `Intervals#fixField` can merge intervals from different fields
* LUCENE-8585: Create jump-tables for DocValues at index time
Previously, if a text field had an underlying keyword field,
the latter was not used in place of the text field, leading to wrong
results returned by queries filtering with LIKE/RLIKE.
Fixes: #39442