We switched most of the commands to asynchronous mode; this commit removes the synchronous version of `exec` to prevent accidental use of it in the future.
Original commit: elastic/x-pack-elasticsearch@03d0a6350d
Right now if you run `gradle regen` on Windows you'll get `CRLF` line
endings on all the ANTLR generated files because we run
```
ant.fixcrlf(srcdir: outputPath) {
patternset(includes: 'SqlBase*.java')
}
```
The docs for fixcrlf say that the default line ending that it
corrects to depends on the OS:
https://ant.apache.org/manual/Tasks/fixcrlf.html
This change locks it to `LF`.
Original commit: elastic/x-pack-elasticsearch@4396729e04
`RowSetCursor` was just like `RowSet` only it had methods that allowed
you to scroll to the next page. We now use `RowSet#nextPageCursor` to
get the next page in a way that doesn't require us to store state on
the server. So we can remove `RowSetCursor` entirely now.
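Roughly the shape this leaves us with; everything here other than
`RowSet#nextPageCursor` is a made-up name for illustration, not the
actual SQL interfaces:
```
// Illustrative only: the real RowSet has more to it than this.
interface RowSet {
    boolean advanceRow();        // hypothetical row iteration
    Object column(int index);    // hypothetical column access

    // Everything needed to fetch the next page travels with the result,
    // so the server doesn't have to keep per-request state around.
    Cursor nextPageCursor();
}

interface Cursor {
    // Opaque to callers; only the server knows how to resume from it.
}
```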
Original commit: elastic/x-pack-elasticsearch@6a4a1efb20
* Switch `ResultSet#getFetchSize` from returning the *requested*
fetch size to returning the size of the current page of results.
For scrolling searches without parent/child this'll match the
requested fetch size but for other things it won't. The nice thing
about this is that it lets us tell users a thing to look at if
they are wondering why they are using a bunch of memory (see the
example after this list).
* Remove the entire JDBC action and implement it on the REST
layer instead.
* Create some code that can be shared between the cli and jdbc
actions to properly handle errors and request deserialization so
there isn't as much copy and paste between the two. This helps
because it calls out explicitly the places where they are different.
* I have not moved the CLI REST handler to shared code because
I think it'd be clearer to make that change in a followup PR.
* There is now no more need for constructs that allow iterating
all of the results in the same request. I have not removed these
because I feel that that cleanup is big enough to deserve its own
PR.
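Roughly what the first bullet means from a client's perspective; the
JDBC URL and query below are placeholders, not something this commit
adds:
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; use whatever the driver expects.
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200/");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM library")) {
            while (rs.next()) {
                // Now reports the size of the page currently in memory rather
                // than the requested fetch size, which is a handy hint when
                // you're wondering where the memory went.
                int currentPageSize = rs.getFetchSize();
            }
        }
    }
}
```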
Original commit: elastic/x-pack-elasticsearch@3b12afd11c
These wrap `DataInput` and `DataOutput` to add the protocol
version being serialized. This is similar to the mechanism
used by core and it has made adding and removing fields from
the serialization protocol fairly simple.
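Something like this, with made-up names; the point is only that the
version travels with the stream so readers and writers can branch on it:
```
import java.io.DataInput;
import java.io.IOException;

// Hypothetical name; the output-side wrapper mirrors this around DataOutput.
class SqlDataInput {
    private final DataInput delegate;
    private final int version;   // protocol version this message was written with

    SqlDataInput(DataInput delegate, int version) {
        this.delegate = delegate;
        this.version = version;
    }

    int version() {
        return version;
    }

    // Plain reads just delegate; version-aware readers call version()
    // to decide whether an optional field is on the wire.
    int readInt() throws IOException {
        return delegate.readInt();
    }

    String readUTF() throws IOException {
        return delegate.readUTF();
    }
}
```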
Original commit: elastic/x-pack-elasticsearch@90b3f1199a
* Big refactor of Processor by introducing ProcessorDefinition, an
immutable tree structure used for resolving multiple inputs across
folding (in particular for aggregations) which at runtime gets
translated into 'compiled' or small Processors
* Add expression arithmetic, expression folding and type coercion
* Folding happens for literals, scalars and inside the optimizer
* Type validation happens per type hierarchy (numeric vs decimal), not per type
* Ceil/Floor/Round functions return long/int instead of double
* ScalarFunction preserves ProcessorDefinition instead of functionId
Original commit: elastic/x-pack-elasticsearch@a703f8b455
Removes a few NOCOMMITs that are tracked in other places and updates
a few with plans on how to work on them.
Original commit: elastic/x-pack-elasticsearch@8d1cfdf4ee
This renames the `write` and `read` methods in SQL to `writeTo` and
`readFrom` to line up with the names used in core. I don't have a
strong opinion whether or not any name is better than any other but
I figure there isn't a good reason for SQL to be different from the
rest of Elasticsearch.
Original commit: elastic/x-pack-elasticsearch@e5de9a4b81
* Move CLI to TransportSqlAction
* Moves REST endpoint from `/_cli` to `/_sql/cli`
* Removes the special purpose CLI transport action and instead
implements the CLI entirely on the REST layer, delegating
all SQL stuff to the same action that backs the `/_sql` REST
API.
* Reworks "embedded testing mode" to use a `FilterClient` to
bounce capture the sql transport action and execute in embedded.
* Switches CLI formatting from consuming the entire response
to consuming just the first page of the response and returning
a `cursor` that can be used to read the next page. That read is
not yet implemented.
* Switch CLI formatting from consuming the `RowSetCursor` to
consuming the `SqlResponse` object.
* Adds tests for CLI formatting.
* Support next page in the cli
* Rename cli's CommandRequest/CommandResponse to
QueryInitRequest/QueryInitResponse to line up with jdbc
* Implement QueryPageRequest/QueryPageResponse in cli
* Use `byte[]` to represent the cursor in the cli. Those bytes
mean something, but only to the server. The only reasoning that
the client does about them is "if length == 0 then there isn't a
next page"; see the sketch after this list.
* Pull common code from jdbc's QueryInitRequest, QueryPageRequest,
QueryInitResponse, and QueryPageResponse into the shared-proto
project
* By implication this switches jdbc's QueryPageRequest to using
the same cursor implementation as the cli
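The whole of the client's cursor logic, sketched with a hypothetical
helper:
```
// Hypothetical helper: the client never interprets the bytes beyond this.
// A non-empty cursor just gets sent back unchanged in a QueryPageRequest.
static boolean hasNextPage(byte[] cursor) {
    return cursor.length > 0;   // empty cursor == no next page
}
```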
Original commit: elastic/x-pack-elasticsearch@193586f1ee
Instead of throwing and catching an exception for invalid
indices this returns *why* they are invalid in a convenient
object form that can be thrown as an exception when the index
is required, or the index can be ignored entirely when listing
indices.
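Sketch of the idea with made-up names: validity is data, and it only
becomes an exception when the caller actually needs the index:
```
// Illustrative only; the real class in SQL is shaped differently.
class IndexValidity {
    private final String invalidReason;   // null when the index is usable

    IndexValidity(String invalidReason) {
        this.invalidReason = invalidReason;
    }

    // Callers listing indices can just skip invalid ones.
    boolean isValid() {
        return invalidReason == null;
    }

    // Callers that require the index turn the reason into an exception.
    void requireValid() {
        if (invalidReason != null) {
            throw new IllegalArgumentException(invalidReason);
        }
    }
}
```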
Original commit: elastic/x-pack-elasticsearch@f45cbce647
This integrates SQL's metadata calls with security by creating
`SqlIndicesAction` and routing all of SQL's metadata calls through
it. Since it *does* know up front which indices it is working against
it can be an `IndicesRequest.Replaceable` and integrate with the
existing security infrastructure for filtering indices.
This request is implemented fairly similarly to the `GetIndexAction`
with the option to read from the master or from a local copy of
cluster state. Currently SQL forces it to run on the local copy
because the request doesn't properly support serialization. I'd
like to implement that in a followup.
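Rough sketch of why `IndicesRequest.Replaceable` helps: security reads
the index names off the request and can rewrite them before the action
runs. The request class below is an invented stand-in, not the actual
request, and the exact base-class details vary by version:
```
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.support.IndicesOptions;

public class ExampleIndicesRequest extends ActionRequest implements IndicesRequest.Replaceable {
    private String[] indices;

    public ExampleIndicesRequest(String... indices) {
        this.indices = indices;
    }

    @Override
    public String[] indices() {
        return indices;                 // security reads the requested indices here
    }

    @Override
    public IndicesRequest indices(String... indices) {
        this.indices = indices;         // ...and replaces them with the authorized set
        return this;
    }

    @Override
    public IndicesOptions indicesOptions() {
        return IndicesOptions.lenientExpandOpen();
    }

    @Override
    public ActionRequestValidationException validate() {
        return null;
    }
}
```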
Original commit: elastic/x-pack-elasticsearch@15f9512820
In core we prefer not to extend `AbstractLifecycleComponent` because
the reasons for the tradeoffs that it makes are lost to the sands of
time.
Original commit: elastic/x-pack-elasticsearch@ec1a32bbb3
Core doesn't go in for fancy collection utils in general and just
manipulates the required collections inline. In an effort to keep
SQL "more like the rest of Elasticsearch" I'm starting to remove
SQL's `CollectionUtils`.
Original commit: elastic/x-pack-elasticsearch@878ee181cb
Scrolling was only implemented for the `SqlAction` (not jdbc or cli)
and it was implemented by keeping request state on the server. On
principle we try to avoid adding extra state to Elasticsearch where
possible because it creates extra points of failure and tends to
have lots of hidden complexity.
This replaces the state on the server with serializing state to the
client. This looks to the user like a "next_page" key with fairly
opaque content. It actually consists of an identifier for the *kind*
of scroll, the scroll id, and a base64 string containing the field
extractors.
Right now this only implements scrolling for `SqlAction`. The plan
is to reuse the same implementation for jdbc and cli in a followup.
This also doesn't implement all of the required serialization.
Specifically it doesn't implement serialization of
`ProcessingHitExtractor` because I haven't implemented serialization
for *any* `ColumnProcessors`.
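For illustration only, here is one way such an opaque `next_page` value
could be assembled from those pieces; the real encoding in SQL differs:
```
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Base64;

public class CursorEncodingExample {
    // Combine the kind of scroll, the scroll id and the already-serialized
    // field extractors into one opaque token the client hands back as-is.
    static String encodeCursor(byte kind, String scrollId, byte[] extractorBytes) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeByte(kind);
            out.writeUTF(scrollId);
            out.writeInt(extractorBytes.length);
            out.write(extractorBytes);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }
}
```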
Original commit: elastic/x-pack-elasticsearch@a8567bc5ec
Adds granular licensing support to SQL. JDBC now requires a platinum license; everything else works with any non-expired license.
Original commit: elastic/x-pack-elasticsearch@a30470e2c9
This moves validating that each index contains only a single type
into EsCatalog which removes a race condition and removes a method
from the Catalog interface. Removing methods from the Catalog
interface is nice because the fewer methods that the interface has
the fewer have to be secured.
Original commit: elastic/x-pack-elasticsearch@85cd089e47
This adds support for field level security to SQL by creating a new type of flow for securing requests that look like sql requests. `AuthorizationService` verifies that the user can execute the request but doesn't check the indices in the request because they are not yet ready. Instead, it adds a `BiFunction` to the context that can be used to check permissions for an index while servicing the request. This allows requests to cooperatively secure themselves. SQL does this by implementing filtering on top of its `Catalog` abstraction and backing that filtering with security's filters. This minimizes the touch points between security and SQL.
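Sketch of the cooperative check, assuming the `BiFunction` maps an action
name and an index name to an allow/deny decision; the real generic types
and how the function lands on the context aren't shown in this commit:
```
import java.util.function.BiFunction;

public class CooperativeCheckExample {
    public static void main(String[] args) {
        // Stand-in for the checker security puts on the context.
        BiFunction<String, String, Boolean> canAccess =
                (action, index) -> !index.startsWith("restricted-");

        // While servicing the request, SQL's Catalog filtering consults the
        // checker per index instead of the indices being authorized up front.
        String index = "library";
        if (canAccess.apply("indices:data/read/sql", index)) {
            // resolve the index and keep going
        }
    }
}
```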
Stuff I'd like to do in followups:
What doesn't work at all still:
1. `SHOW TABLES` is still totally unsecured
2. `DESCRIBE TABLE` is still totally unsecured
3. JDBC's metadata APIs are still totally unsecured
What kind of works but not well:
1. The audit trail doesn't show the index being authorized for SQL.
Original commit: elastic/x-pack-elasticsearch@86f88ba2f5
Indices discovery actively ignores indices with more than one type.
However, queries made against such indices throw an exception (in case
the user, by mistake or not, selects such an index).
Original commit: elastic/x-pack-elasticsearch@16855c7b8f
SQL relies on being able to fetch information about fields from
the cluster state and it'd be disastrous if that information
wasn't available. This should catch that.
Original commit: elastic/x-pack-elasticsearch@1a62747332
Running the sql rest action test inside the server caused a dependency
loop which was failing the build.
Original commit: elastic/x-pack-elasticsearch@43283671d8
This is a hack to remove a dependency cycle I added in elastic/x-pack-elasticsearch#2109. I think
it'd be cleaner to remove the cycle by making sql its own plugin that
doesn't depend on the rest of x-pack-elasticsearch but is still
included within x-pack-elasticsearch. But that is a broader change.
Original commit: elastic/x-pack-elasticsearch@47b7d69d80
1. We talked about doing 422 but that doesn't work because the netty plugin
maps 422 -> 400 and I didn't think it was worth changing that right now.
2. Missing columns in the `WHERE` clause still cause a 500 response because
we don't resolve the `SELECT` part if there is an error in the `WHERE`
clause.
Original commit: elastic/x-pack-elasticsearch@355d8292e5
Similar to my work in elastic/x-pack-elasticsearch#2106, this adds a basic integration test for the rest sql action. We can add more as we go. Specifically, I'd like to add testing around handling of invalid SQL and a test for timezones, but neither works particularly well yet.
Original commit: elastic/x-pack-elasticsearch@923d941d0d
Add some basic security integration testing.
The good news:
1. Basic security now works. Users without access to an index can't run sql queries against it. Without this change they could.
2. Document level security works! At least so far as I can tell.
The work left to do:
1. Field level security doesn't work properly. I mean, it kind of works in that the fields' values don't leak, but it just looks like they all have null values.
2. We will need to test scrolling.
3. I've only added tests for the rest sql action. I'll need to add tests for jdbc and the CLI as well.
4. I've only added tests for `SELECT` and have ignored stuff like `DESCRIBE` and `SHOW TABLES`.
Original commit: elastic/x-pack-elasticsearch@b9909bbda0
To avoid leaking client information across the entire code-base, client
settings like TimeZone or pagination are stored in SqlSession's
SqlSettings, which are available as a ThreadLocal (during analysis) so
that components that need them can pick them up.
Since ES internally uses Joda, the date/time functionality relies on
Joda whenever possible to match that behavior.
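Sketch of the ThreadLocal pattern, with a cut-down stand-in for the real
SqlSettings:
```
import java.util.TimeZone;

public class SqlSettingsHolder {
    // Cut-down stand-in for the real SqlSettings.
    public static class SqlSettings {
        final TimeZone timeZone;
        final int pageSize;

        SqlSettings(TimeZone timeZone, int pageSize) {
            this.timeZone = timeZone;
            this.pageSize = pageSize;
        }
    }

    private static final ThreadLocal<SqlSettings> CURRENT = new ThreadLocal<>();

    public static void set(SqlSettings settings) {
        CURRENT.set(settings);
    }

    public static SqlSettings current() {
        return CURRENT.get();    // components pick the settings up here during analysis
    }

    public static void clear() {
        CURRENT.remove();        // avoid leaking settings across pooled threads
    }
}
```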
Original commit: elastic/x-pack-elasticsearch@20f41e2bb3
Too big. Sorry. Some good things though:
1. Share some code between CLI and JDBC. Probably a good thing
at this point, better as we go on, I think.
2. Add round trip tests for all of proto.
3. Remove the `data` member from `QueryInitResponse` and
`QueryPageResponse` so response serialization is consistent with
everything else.
Original commit: elastic/x-pack-elasticsearch@c6940a32ed
* Move read from a static method to a ctor to mirror core (sketched
after this list).
* Make reads and writes read and write the same data.
* Instead of the "header" integer use a byte for the response
type.
* For responses that do not make their request type obvious,
serialize the request type.
* Remove the request type member from requests and responses and
replace with an abstract method. These type members have caused
us trouble in core in the past.
* Remove the Message superclass as it didn't have anything in it.
* Pass client version to the request reader and response writer.
* Add round trip tests for the protocol.
* Force Requests and Responses to provide good `toString`, `equals`,
and `hashCode`.
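Sketch of a couple of the points above: writes emit a single type byte,
reads happen in a constructor that mirrors the write, and the type comes
from an abstract method instead of a stored member. All names here are
illustrative:
```
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

abstract class Response {
    // Replaces the old stored type member: the type is derived, not carried around.
    abstract byte responseType();

    final void writeTo(DataOutput out, int clientVersion) throws IOException {
        out.writeByte(responseType());
        encode(out, clientVersion);
    }

    abstract void encode(DataOutput out, int clientVersion) throws IOException;
}

class ExampleResponse extends Response {
    final String data;

    // Read mirrors the write below, field for field.
    ExampleResponse(DataInput in, int clientVersion) throws IOException {
        data = in.readUTF();
    }

    @Override
    byte responseType() {
        return 1;
    }

    @Override
    void encode(DataOutput out, int clientVersion) throws IOException {
        out.writeUTF(data);
    }
}
```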
Original commit: elastic/x-pack-elasticsearch@653ed8c27f
* Switch `data` member from Object to `String`
* Compress packages on server so easier to build `data` as `String`
* Move write of `data` member into `encode` method
* Move read of `data` member into ctor
Original commit: elastic/x-pack-elasticsearch@e3a52e7493
`gradle check -xforbiddenPatterns` now passes in jdbc.
This makes running the embedded HTTP server slightly more difficult;
you now have to add the following to your jvm arguments:
```
-ea -Dtests.rest.cluster=localhost:9200 -Dtests.embed.sql=true -Dtests.security.manager=false
```
Depending on your environment the embedded jdbc connection may give
spurious failures that look like:
```
org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcException: RemoteTransportException[[node-0][127.0.0.1:9300][indices:data/read/search]]; nested: SearchPhaseExecutionException[]; nested: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: Failed to execute phase [fetch],
..
Caused by: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]
```
`gradle check` works around this by setting `script.max_compilations_per_minute`
to `1000`.
Another change is that we no longer support loading the test data by
uncommenting some code. Instead we load the test data into Elasticsearch
before the first test and delete it after the last test. This is
so that tests that require different test data can interoperate with
each other. The spec tests all use the same test data but the metadata
tests do not.
Original commit: elastic/x-pack-elasticsearch@8b8f684ac1
- simplify handling of timezone in H2
- fix leaking threadpool in HttpServer
- update Csv tests
- keep the dates as long in internal Page
Original commit: elastic/x-pack-elasticsearch@43a804607f
Flows time zones through the `QueryInitRequest` and into the
`ExpressionBuilder` which attaches the time zones to date/time
expressions. Modifies the code that generates date aggs,
scripts, and extracts results to use the time zones.
Original commit: elastic/x-pack-elasticsearch@d6682580d1