Scrolling was only implemented for the `SqlAction` (not jdbc or cli)
and it was implemented by keeping request state on the server. As a
matter of principle we try to avoid adding extra state to Elasticsearch
where possible because it creates extra points of failure and tends to
have lots of hidden complexity.
This change replaces that server-side state by serializing the state to
the client. To the user it looks like a "next_page" key with fairly
opaque content. It actually consists of an identifier for the *kind*
of scroll, the scroll id, and a base64 string containing the field
extractors.
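As a rough illustration, the cursor could be assembled along these lines; the class name, field layout, and helper methods here are invented for the example rather than taken from the plugin's actual wire format:
```
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;

// Minimal sketch (not the plugin's actual classes) of packing the three
// pieces described above into one opaque "next_page" token.
final class NextPageCursor {
    static String encode(byte scrollKind, String scrollId, byte[] extractorBlob) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeByte(scrollKind);          // identifier for the *kind* of scroll
            out.writeUTF(scrollId);             // the Elasticsearch scroll id
            out.writeInt(extractorBlob.length); // length-prefixed field extractor blob
            out.write(extractorBlob);
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }
}
```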
Right now this only implements scrolling for `SqlAction`. The plan
is to reuse the same implementation for jdbc and cli in a followup.
This also doesn't implement all of the required serialization.
Specifically it doesn't implement serialization of
`ProcessingHitExtractor` because I haven't implemented serialization
for *any* `ColumnProcessors`.
Original commit: elastic/x-pack-elasticsearch@a8567bc5ec
Adds granular licensing support to SQL. JDBC now requires a platinum license; everything else works with any non-expired license.
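A toy sketch of that policy might look like the following; the enum and method names are invented, and treating a trial license like platinum is an assumption on my part rather than something stated above:
```
// Hypothetical sketch of the gating described above, not the actual
// X-Pack license-state API.
class SqlLicensing {
    enum OperationMode { BASIC, STANDARD, GOLD, PLATINUM, TRIAL }

    static boolean jdbcAllowed(OperationMode mode, boolean active) {
        return active && (mode == OperationMode.PLATINUM || mode == OperationMode.TRIAL);
    }

    static boolean sqlAllowed(OperationMode mode, boolean active) {
        return active; // any non-expired license is enough for non-JDBC access
    }
}
```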
Original commit: elastic/x-pack-elasticsearch@a30470e2c9
This moves validating that each index contains only a single type
into EsCatalog, which removes a race condition and removes a method
from the Catalog interface. Removing methods from the Catalog
interface is nice because the fewer methods the interface has,
the fewer have to be secured.
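For illustration, the kind of check that moves into the catalog could look roughly like this; the names and the exception type are placeholders, not the actual EsCatalog code:
```
import java.util.Set;

// Illustrative only: the check that can live inside the catalog once
// validation moves there.
class SingleTypeValidation {
    static String onlyType(String index, Set<String> mappingTypes) {
        if (mappingTypes.size() != 1) {
            throw new IllegalArgumentException("[" + index + "] has [" + mappingTypes.size()
                    + "] types when SQL requires exactly one");
        }
        return mappingTypes.iterator().next();
    }
}
```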
Original commit: elastic/x-pack-elasticsearch@85cd089e47
This adds support for field level security to SQL by creating a new type of flow for securing requests that look like sql requests. `AuthorizationService` verifies that the user can execute the request but doesn't check the indices in the request because they are not yet ready. Instead, it adds a `BiFunction` to the context that can be used to check permissions for an index while servicing the request. This allows requests to cooperatively secure themselves. SQL does this by implementing filtering on top of its `Catalog` abstraction and backing that filtering with security's filters. This minimizes the touch points between security and SQL.
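A stripped-down sketch of that shape, with invented class and method names rather than the plugin's actual API:
```
import java.util.function.BiFunction;

// Rough sketch of the cooperative model described above: security puts a
// checker on the request context and a filtering Catalog consults it per
// index. FilteredCatalog, IndexInfo, and the checker's (action, index)
// signature are assumptions for illustration.
class FilteredCatalog {
    interface Catalog {
        IndexInfo getIndex(String index);
    }

    static final class IndexInfo {
        final String name;
        IndexInfo(String name) { this.name = name; }
    }

    private final Catalog delegate;
    private final BiFunction<String, String, Boolean> indexAccessChecker;

    FilteredCatalog(Catalog delegate, BiFunction<String, String, Boolean> indexAccessChecker) {
        this.delegate = delegate;
        this.indexAccessChecker = indexAccessChecker;
    }

    IndexInfo getIndex(String action, String index) {
        // Behave as if the index does not exist when the user may not see it
        if (!indexAccessChecker.apply(action, index)) {
            return null;
        }
        return delegate.getIndex(index);
    }
}
```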
Stuff I'd like to do in followups:
What still doesn't work at all:
1. `SHOW TABLES` is still totally unsecured
2. `DESCRIBE TABLE` is still totally unsecured
3. JDBC's metadata APIs are still totally unsecured
What kind of works but not well:
1. The audit trail doesn't show the index being authorized for SQL.
Original commit: elastic/x-pack-elasticsearch@86f88ba2f5
Indices discovery actively ignores indices with more than one type.
However, queries made against such indices throw an exception (if the
user, by mistake or not, selects such an index).
Original commit: elastic/x-pack-elasticsearch@16855c7b8f
SQL relies on being able to fetch information about fields from
the cluster state and it'd be disastrous if that information
wasn't available. This should catch that.
Original commit: elastic/x-pack-elasticsearch@1a62747332
Running the sql rest action test inside the server caused a dependency
loop which was failing the build.
Original commit: elastic/x-pack-elasticsearch@43283671d8
1. We talked about doing 422 but that doesn't work because the netty plugin
maps 422 -> 400 and I didn't think it was worth changing that right now.
2. Missing columns in the `WHERE` clause still cause a 500 response because
we don't resolve the `SELECT` part if there is an error in the `WHERE`
clause.
Original commit: elastic/x-pack-elasticsearch@355d8292e5
Similar to my work in elastic/x-pack-elasticsearch#2106, this adds a basic integration test for the rest sql action. We can add more as we go. Specifically, I'd like to add testing around handling of invalid SQL and a test for timezones, but neither work particularly well yet.
Original commit: elastic/x-pack-elasticsearch@923d941d0d
Add some basic security testing/integration.
The good news:
1. Basic security now works. Users without access to an index can't run sql queries against it. Without this change they could.
2. Document level security works! At least so far as I can tell.
The work left to do:
1. Field level security doesn't work properly. I mean, it kind of works in that the field's values don't leak but it just looks like they all have null values.
2. We will need to test scrolling.
3. I've only added tests for the rest sql action. I'll need to add tests for jdbc and the CLI as well.
4. I've only added tests for `SELECT` and have ignored stuff like `DESCRIBE` and `SHOW TABLES`.
Original commit: elastic/x-pack-elasticsearch@b9909bbda0
To avoid leaking client information across the entire code-base, client
settings like TimeZone or pagination are stored in the SqlSession's
SqlSettings, which are available as a ThreadLocal (during analysis) so
that components that need them can pick them up.
Since ES internally uses Joda, the date/time functionality relies on
Joda whenever possible to match that behavior.
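The hand-off boils down to something like the following sketch; the field and method names are placeholders, not the actual SqlSession/SqlSettings code:
```
// Bare-bones sketch of the ThreadLocal hand-off described above; the real
// classes carry more state.
final class SqlSettings {
    final String timeZoneId;
    final int pageSize;

    SqlSettings(String timeZoneId, int pageSize) {
        this.timeZoneId = timeZoneId;
        this.pageSize = pageSize;
    }

    private static final ThreadLocal<SqlSettings> CURRENT = new ThreadLocal<>();

    // Installed for the duration of analysis, read by whatever component needs it
    static void set(SqlSettings settings) { CURRENT.set(settings); }
    static SqlSettings current() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}
```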
Original commit: elastic/x-pack-elasticsearch@20f41e2bb3
Too big. Sorry. Some good things though:
1. Share some code between CLI and JDBC. Probably a good thing
at this point, better as we go on, I think.
2. Add round trip tests for all of proto.
3. Remove the `data` member from `QueryInitResponse` and
`QueryPageResponse` so response serialization is consistent with
everything else.
Original commit: elastic/x-pack-elasticsearch@c6940a32ed
* Move read from a static method to a ctor to mirror core (see the
sketch after this list).
* Make reads and writes symmetric so they read and write the same data.
* Instead of the "header" integer use a byte for the response
type.
* For responses that do not make their request type obvious,
serialize the request type.
* Remove the request type member from requests and responses and
replace with an abstract method. These type members have caused
us trouble in core in the past.
* Remove the Message superclass as it didn't have anything in it.
* Pass client version to the request reader and response writer.
* Add round trip tests for the protocol.
* Force Requests and Responses to provide good `toString`, `equals`,
and `hashCode`.
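Taken together, those conventions look roughly like this sketch; the class names, the `TYPE` byte value, and using `DataInput`/`DataOutput` instead of the real stream types are all invented for the example:
```
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Illustrative sketch: ctor-based reads, a byte for the response type,
// and the client version passed through. Not the actual proto classes.
abstract class Response {
    abstract byte responseType();                       // replaces a stored "type" member
    abstract void writeTo(DataOutput out, int clientVersion) throws IOException;
}

final class QueryPageResponse extends Response {
    static final byte TYPE = 2;
    private final String data;

    QueryPageResponse(String data) {
        this.data = data;
    }

    // "read" lives in a ctor, mirroring core's StreamInput-style constructors
    QueryPageResponse(DataInput in, int clientVersion) throws IOException {
        this.data = in.readUTF();
    }

    @Override
    byte responseType() {
        return TYPE;
    }

    @Override
    void writeTo(DataOutput out, int clientVersion) throws IOException {
        out.writeUTF(data);                             // writes exactly what the ctor reads
    }
}
```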
Original commit: elastic/x-pack-elasticsearch@653ed8c27f
* Switch `data` member from Object to `String`
* Compress packages on server so easier to build `data` as `String`
* Move write of `data` member into `encode` method
* Move read of `data` member into ctor
Original commit: elastic/x-pack-elasticsearch@e3a52e7493
`gradle check -xforbiddenPatterns` now passes in jdbc.
This makes running the embedded HTTP server slightly more difficult;
you now have to add the following to your jvm arguments:
```
-ea -Dtests.rest.cluster=localhost:9200 -Dtests.embed.sql=true -Dtests.security.manager=false
```
Depending on your environment, the embedded jdbc connection may give
spurious failures that look like:
```
org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcException: RemoteTransportException[[node-0][127.0.0.1:9300][indices:data/read/search]]; nested: SearchPhaseExecutionException[]; nested: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: Failed to execute phase [fetch],
..
Caused by: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]
```
`gradle check` works around this by setting `script.max_compilations_per_minute`
to `1000`.
Another change is that we no longer support loading the test data by
uncommenting some code. Instead we load the test data into Elasticsearch
before the first test and delete it after the last test. This is
so that tests that require different test data can interoperate with
each other. The spec tests all use the same test data but the metadata
tests do not.
Original commit: elastic/x-pack-elasticsearch@8b8f684ac1
- simplify handling of timezone in H2
- fix leaking threadpool in HttpServer
- update Csv tests
- keep the dates as long in internal Page
Original commit: elastic/x-pack-elasticsearch@43a804607f
Flows time zones through the `QueryInitRequest` and into the
`ExpressionBuilder`, which attaches the time zones to date/time
expressions. Modifies the code that generates date aggs and scripts,
and that extracts results, to use the time zones.
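As a small illustration of the extraction piece (with invented names, and Joda because that is what ES used internally at the time):
```
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

// Sketch of the extraction side only: a hit value comes back as epoch millis
// and the time zone carried on the request (passed in here by hand) is
// applied when it becomes a date/time.
class DateTimeExtraction {
    static DateTime extract(long epochMillis, String requestTimeZoneId) {
        DateTimeZone zone = DateTimeZone.forID(requestTimeZoneId); // e.g. from QueryInitRequest
        return new DateTime(epochMillis, zone);
    }

    public static void main(String[] args) {
        // epoch 0 rendered in New York local time: 1969-12-31T19:00:00.000-05:00
        System.out.println(extract(0L, "America/New_York"));
    }
}
```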
Original commit: elastic/x-pack-elasticsearch@d6682580d1