Adds granular licensing support to SQL. JDBC now requires a platinum license; everything else works with any non-expired license.
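A rough sketch of that policy (the class, enum, and method names here are illustrative, not the actual x-pack licensing API):
```java
/**
 * Illustrative sketch of the licensing rules described above; these names are
 * hypothetical, not the real x-pack license classes.
 */
final class SqlLicenseChecker {
    enum License { BASIC, GOLD, PLATINUM }

    /** JDBC needs an active platinum license. */
    static void checkJdbc(License license, boolean expired) {
        if (expired || license != License.PLATINUM) {
            throw new IllegalStateException("JDBC requires an active platinum license");
        }
    }

    /** Everything else only needs a license that hasn't expired. */
    static void checkSql(boolean expired) {
        if (expired) {
            throw new IllegalStateException("SQL requires a non-expired license");
        }
    }
}
```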
Original commit: elastic/x-pack-elasticsearch@a30470e2c9
Most tests worked fine. The datetime tests are broken for some time
zones. The csv tests were broken because they accepted the default
fetch size, which itself looks to be broken.
Original commit: elastic/x-pack-elasticsearch@e034c2f102
This moves the validation that each index contains only a single type
into EsCatalog, which removes a race condition and drops a method
from the Catalog interface. Removing methods from the Catalog
interface is nice because the fewer methods the interface has,
the fewer have to be secured.
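For illustration, the kind of check that now lives in EsCatalog (the class, method, and argument shapes here are hypothetical; the real code reads mapping types out of the cluster state):
```java
import java.util.Set;

// Hypothetical sketch of the single-type validation described above.
final class SingleTypeValidation {
    static void assertSingleType(String index, Set<String> mappingTypes) {
        if (mappingTypes.size() > 1) {
            throw new IllegalArgumentException("index [" + index + "] has more than one type "
                    + mappingTypes + "; SQL only supports indices with a single type");
        }
    }
}
```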
Original commit: elastic/x-pack-elasticsearch@85cd089e47
Adds a test to the full cluster restart tests that shows *how* SQL fails
to address an index with two types. Because we're writing this code
against 7.0 we don't actually execute the test, but we will execute it when
we merge to 6.x and it *should* work.
Original commit: elastic/x-pack-elasticsearch@b536e9a142
This adds support for field level security to SQL by creating a new flow for securing requests that look like SQL requests. `AuthorizationService` verifies that the user can execute the request but doesn't check the indices in the request because they are not yet known at that point. Instead, it adds a `BiFunction` to the context that can be used to check permissions for an index while servicing the request. This allows requests to cooperatively secure themselves. SQL does this by implementing filtering on top of its `Catalog` abstraction and backing that filtering with security's filters, which minimizes the touch points between security and SQL.
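A minimal sketch of that cooperative flow, assuming a simplified single-method `Catalog` and a `(user, index) -> allowed` checker standing in for the function security places on the context (all names here are illustrative):
```java
import java.util.function.BiFunction;

/**
 * Illustrative sketch of the cooperative flow described above. The class name,
 * the single-method Catalog, and the (user, index) -> allowed checker are all
 * hypothetical stand-ins for the real Catalog abstraction and the function
 * security puts on the request context.
 */
final class FilteredCatalog {
    interface Catalog {
        Object getIndex(String index);
    }

    private final Catalog delegate;
    private final String user;
    private final BiFunction<String, String, Boolean> permissionChecker;

    FilteredCatalog(Catalog delegate, String user, BiFunction<String, String, Boolean> permissionChecker) {
        this.delegate = delegate;
        this.user = user;
        this.permissionChecker = permissionChecker;
    }

    Object getIndex(String index) {
        // Ask security whether this user may read this index while servicing
        // the request, instead of checking indices up front.
        if (Boolean.FALSE.equals(permissionChecker.apply(user, index))) {
            return null; // hide indices the user is not allowed to see
        }
        return delegate.getIndex(index);
    }
}
```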
Stuff I'd like to do in followups:
What still doesn't work at all:
1. `SHOW TABLES` is still totally unsecured
2. `DESCRIBE TABLE` is still totally unsecured
3. JDBC's metadata APIs are still totally unsecured
What kind of works but not well:
1. The audit trail doesn't show the index being authorized for SQL.
Original commit: elastic/x-pack-elasticsearch@86f88ba2f5
Index discovery actively ignores indices with more than one type.
However, queries made against such indices throw an exception (when a
user, by mistake or not, selects such an index).
Original commit: elastic/x-pack-elasticsearch@16855c7b8f
SQL relies on being able to fetch information about fields from
the cluster state and it'd be disastrous if that information
wasn't available. This should catch that.
Original commit: elastic/x-pack-elasticsearch@1a62747332
Running the sql rest action test inside the server caused a dependency
loop which was failing the build.
Original commit: elastic/x-pack-elasticsearch@43283671d8
This is a hack to remove a dependency cycle I added in elastic/x-pack-elasticsearch#2109. I think
it'd be cleaner to remove the cycle by making sql its own plugin that
doesn't depend on the rest of x-pack-elasticsearch but is still
included within x-pack-elasticsearch. But that is a broader change.
Original commit: elastic/x-pack-elasticsearch@47b7d69d80
1. We talked about returning 422 but that doesn't work because the netty plugin
maps 422 -> 400 and I didn't think it was worth changing that right now.
2. Missing columns in the `WHERE` clause still cause a 500 response because
we don't resolve the `SELECT` part if there is an error in the `WHERE`
clause.
Original commit: elastic/x-pack-elasticsearch@355d8292e5
Similar to my work in elastic/x-pack-elasticsearch#2106, this adds a basic integration test for the rest sql action. We can add more as we go. Specifically, I'd like to add testing around handling of invalid SQL and a test for timezones, but neither works particularly well yet.
Original commit: elastic/x-pack-elasticsearch@923d941d0d
Add some basic security testing/integration.
The good news:
1. Basic security now works. Users without access to an index can't run sql queries against it. Without this change they could.
2. Document level security works! At least so far as I can tell.
The work left to do:
1. Field level security doesn't work properly. I mean, it kind of works in that the fields' values don't leak, but it just looks like they all have null values.
2. We will need to test scrolling.
3. I've only added tests for the rest sql action. I'll need to add tests for jdbc and the CLI as well.
4. I've only added tests for `SELECT` and have ignored stuff like `DESCRIBE` and `SHOW TABLES`.
Original commit: elastic/x-pack-elasticsearch@b9909bbda0
I had added this to help VersionTests.java identify when it was running
in gradle so it could be sure that it got the real Version information.
But even running in gradle doesn't give the real version information
because the tests run against a build directory instead of a jar, and the jar
is all that has the version information because the version information
is read from the manifest.
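For reference, the usual way version information is read from the jar manifest at runtime, which is why a plain build directory of classes (no `META-INF/MANIFEST.MF`) can't provide it:
```java
public final class ManifestVersion {
    public static void main(String[] args) {
        // Implementation-Version comes from META-INF/MANIFEST.MF inside the jar;
        // when classes are loaded from a build directory there is no manifest,
        // so this returns null - the situation described above.
        String version = ManifestVersion.class.getPackage().getImplementationVersion();
        System.out.println("version = " + version);
    }
}
```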
Original commit: elastic/x-pack-elasticsearch@3acbd01433
This removes a hack I'd left in the build file that hard coded the
hash of the jdbc driver. Now we dig the version information out of the
MANIFEST.MF file that is written during the jar process for all
projects in the Elasticsearch build.
Original commit: elastic/x-pack-elasticsearch@fa01cc6fb3
To avoid leaking client information across the entire code-base, client
settings like TimeZone or pagination are stored in SqlSession's
SqlSettings, which is available as a ThreadLocal (during
analysis) so that components that need them can pick them up.
Since ES internally uses Joda, the date/time functionality relies on Joda
whenever possible to match that behavior.
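A minimal sketch of that ThreadLocal pattern, assuming a trimmed-down `SqlSettings` that only carries a time zone and fetch size (the real classes hold more state):
```java
import java.util.TimeZone;

/**
 * Minimal sketch of the ThreadLocal flow described above; the real
 * SqlSession/SqlSettings carry more than a time zone and fetch size.
 */
final class SqlSettings {
    final TimeZone timeZone;
    final int fetchSize;

    SqlSettings(TimeZone timeZone, int fetchSize) {
        this.timeZone = timeZone;
        this.fetchSize = fetchSize;
    }

    private static final ThreadLocal<SqlSettings> CURRENT = new ThreadLocal<>();

    /** Installed for the duration of analysis so components can pick the settings up. */
    static void setCurrent(SqlSettings settings) {
        CURRENT.set(settings);
    }

    static SqlSettings current() {
        return CURRENT.get();
    }

    /** Cleared once analysis finishes to avoid leaking across requests. */
    static void clear() {
        CURRENT.remove();
    }
}
```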
Original commit: elastic/x-pack-elasticsearch@20f41e2bb3
Too big. Sorry. Some good things though:
1. Share some code between CLI and JDBC. Probably a good thing
at this point, better as we go on, I think.
2. Add round trip tests for all of proto.
3. Remove the `data` member from `QueryInitResponse` and
`QueryPageResponse` so response serialization is consistent with
everything else.
Original commit: elastic/x-pack-elasticsearch@c6940a32ed
* Move read from a static method to a ctor to mirror core (see the
sketch after this list).
* Make reads and writes cover the same data.
* Instead of the "header" integer use a byte for the response
type.
* For responses that do not make their request type obvious,
serialize the request type.
* Remove the request type member from requests and responses and
replace with an abstract method. These type members have caused
us trouble in core in the past.
* Remove the Message superclass as it didn't have anything in it.
* Pass client version to the request reader and response writer.
* Add round trip tests for the protocol.
* Force Requests and Responses to provide good `toString`, `equals`,
and `hashCode`.
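A hypothetical sketch of the resulting wire pattern, assuming plain `DataInput`/`DataOutput` streams and an illustrative payload: reads happen in a constructor, the response type is a single byte derived from an abstract method rather than stored as a member, and reads and writes cover the same data:
```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

abstract class Response {
    /** The response type is derived, never stored as a member. */
    abstract byte responseType();

    /** Writes exactly the data the matching ctor reads. */
    abstract void writeTo(DataOutput out) throws IOException;

    static void encode(Response response, DataOutput out) throws IOException {
        out.writeByte(response.responseType());
        response.writeTo(out);
    }

    static Response decode(DataInput in) throws IOException {
        byte type = in.readByte();
        if (type == 2) {
            return new QueryPageResponse(in);
        }
        throw new IOException("unknown response type [" + type + "]");
    }
}

final class QueryPageResponse extends Response {
    final String cursor; // illustrative payload

    QueryPageResponse(String cursor) {
        this.cursor = cursor;
    }

    /** Reading happens in a ctor to mirror core. */
    QueryPageResponse(DataInput in) throws IOException {
        this.cursor = in.readUTF();
    }

    @Override
    byte responseType() {
        return 2; // illustrative value for the type byte
    }

    @Override
    void writeTo(DataOutput out) throws IOException {
        out.writeUTF(cursor);
    }
}
```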
Original commit: elastic/x-pack-elasticsearch@653ed8c27f
* Switch `data` member from Object to `String`
* Compress packages on the server so it is easier to build `data` as a `String`
* Move write of `data` member into `encode` method
* Move read of `data` member into ctor
Original commit: elastic/x-pack-elasticsearch@e3a52e7493
* A unit test for cli
* Licenses for cli
* Remove licenses for protos (no more deps)
* `SHOW TABLES` returns results in order (makes testing easier)
* Clean up embedded jdbc server
* Wire up embedded cli server
Original commit: elastic/x-pack-elasticsearch@b98aaf446b
`gradle check -xforbiddenPatterns` now passes in jdbc.
This makes running the embedded HTTP server slightly more difficult;
you now have to add the following to your JVM arguments:
```
-ea -Dtests.rest.cluster=localhost:9200 -Dtests.embed.sql=true -Dtests.security.manager=false
```
Depending on your environment the embedded jdbc connection may give
spurious failures that look like:
```
org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcException: RemoteTransportException[[node-0][127.0.0.1:9300][indices:data/read/search]]; nested: SearchPhaseExecutionException[]; nested: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: Failed to execute phase [fetch],
..
Caused by: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]
```
`gradle check` works around this by setting `script.max_compilations_per_minute`
to `1000`.
Another change is that we no longer support loading the test data by
uncommenting some code. Instead we load the test data into Elasticsearch
before the first test and delete it after the last test. This is
so that tests that require different test data can interoperate with
each other. The spec tests all use the same test data but the metadata
tests do not.
Original commit: elastic/x-pack-elasticsearch@8b8f684ac1
- simplify handling of timezone in H2
- fix leaking threadpool in HttpServer
- update Csv tests
- keep dates as longs in the internal Page
Original commit: elastic/x-pack-elasticsearch@43a804607f