This commit adds a new endpoint for closing in-flight cursors. It also ensures that all cursors are properly closed by adding after-test checks that verify we don't leave any search context open.
relates elastic/x-pack-elasticsearch#2878
Original commit: elastic/x-pack-elasticsearch@1052ea28dc
While working on cursor cleanup, I realized that we still have two ways to serialize the cursor, and the second way doesn't contain the cursor version (only the client version, which can potentially differ from the cursor version). This commit switches to the unified way of serializing the cursor.
This is a follow up for elastic/x-pack-elasticsearch#3064.
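As a rough illustration of what "unified" means here, a hypothetical sketch of version-aware cursor serialization (the names and layout are made up, not the actual format):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: the cursor version is written into the cursor payload
// itself, so it no longer depends on the (possibly different) client version.
final class CursorEncodingSketch {
    static byte[] encode(int cursorVersion, byte[] cursorState) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(cursorVersion);        // version travels with the cursor
            out.writeInt(cursorState.length);
            out.write(cursorState);
        }
        return bytes.toByteArray();
    }
}
```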
Original commit: elastic/x-pack-elasticsearch@ef1a6427dd
While we're fairly sure we're going to remove the binary protocol in the
long run, we're also fairly sure we're going to release the first
version of SQL with the binary protocol. One big problem with it is that
it blows up when it attempts to serialize fairly long strings. These
long strings are actually quite common in the CLI. They are also
possible in JDBC. I say "fairly long strings" because exactly how long
the strings have to be is kind of funky. It is based on the number of
bytes that it takes to encode the string, and the strings are encoded in
a UTF-8-like encoding of the UTF-16 encoded string, documented here:
https://docs.oracle.com/javase/7/docs/api/java/io/DataOutput.html#writeUTF(java.lang.String)
Anyway, this fixes the protocol for these "fairly long strings" by
chunking the strings and adding an extra 4 byte integer before each
string to count the number of chunks. After that 4 byte integer the
strings are serialized using the "normal" DataInput/DataOutput encoding,
the funny UTF-8-like encoding of the UTF-16 encoded string.
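A minimal sketch of that chunking, against plain `DataOutput`/`DataInput`; the helper names and the chunk boundary choice are made up for illustration, not the actual implementation:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical sketch of the chunked encoding described above: a 4 byte chunk
// count followed by each chunk written with the "normal" writeUTF encoding.
final class ChunkedStringsSketch {
    // writeUTF can only encode up to 65535 bytes and each char needs at most
    // 3 bytes, so chunks of 21845 chars always fit under the limit.
    private static final int CHUNK_CHARS = 21845;

    static void writeLongString(DataOutput out, String value) throws IOException {
        int chunks = Math.max(1, (value.length() + CHUNK_CHARS - 1) / CHUNK_CHARS);
        out.writeInt(chunks);                              // the extra 4 byte integer
        for (int i = 0; i < chunks; i++) {
            int start = i * CHUNK_CHARS;
            int end = Math.min(value.length(), start + CHUNK_CHARS);
            out.writeUTF(value.substring(start, end));     // "normal" DataOutput encoding
        }
    }

    static String readLongString(DataInput in) throws IOException {
        int chunks = in.readInt();
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < chunks; i++) {
            result.append(in.readUTF());
        }
        return result.toString();
    }
}
```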
relates elastic/x-pack-elasticsearch#3018
Original commit: elastic/x-pack-elasticsearch@11f0d59f20
Instead of returning "error response" objects and then translating them
into SQL exceptions, this just throws the SQL exceptions directly. This
means the CLI catches exceptions and prints out the messages, which
wouldn't be ideal if this were hot code, but it isn't, and this is a much
simpler way of doing things.
Original commit: elastic/x-pack-elasticsearch@08431d3941
Now that we can parse Elasticsearch's standard error messages in the CLI
and JDBC client we can just let those standard error messages bubble out
of Elasticsearch rather than catching and encoding them.
In a followup we can remove the encoding entirely.
Original commit: elastic/x-pack-elasticsearch@bad043b6f7
This teaches SQL to parse Elasticsearch's standard error responses
but doesn't change SQL to generate Elasticsearch's standard error responses
in all cases. That can come in a followup. We do this parsing with
jackson-core, the same dependency Elasticsearch uses for parsing
JSON. We shade jackson-core in the JDBC driver so that users don't have to worry about
dependency clashes. We do not do so in the CLI because it is a standalone
application.
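For illustration only, a stripped-down example of that kind of streaming parse with jackson-core (not the actual client code): it pulls the first `reason` field out of a standard error response while reading straight from the response stream:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: scan a standard Elasticsearch error response like
// {"error":{"type":"...","reason":"..."},"status":400} and return the first
// "reason" value found, without spooling the response into memory first.
final class ErrorParsingSketch {
    static String firstReason(InputStream responseStream) throws IOException {
        try (JsonParser parser = new JsonFactory().createParser(responseStream)) {
            while (parser.nextToken() != null) {
                if (parser.getCurrentToken() == JsonToken.FIELD_NAME
                        && "reason".equals(parser.getCurrentName())) {
                    parser.nextToken();
                    return parser.getText();
                }
            }
            return null;
        }
    }
}
```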
We get a few "bonus" changes along the way:
1. We save a copy operation. Before this change responses were spooled
into memory and then parsed. After this change they are parsed directly
from the response stream.
2. We had a few classes entirely to support the spooling operation that we
no longer need: `BytesArray`, `FastByteArrayInputStream`, and
`BasicByteArrayOutputStream`.
3. SQL's `Version` was incorrectly parsing the version from the jar manifest.
We didn't notice because the test was rigged to return `UNKNOWN` because
we *were* running the test from the compiled classes directory instead of the
jar. As part of shading jackson we moved running the tests to running against
the shaded jar. Now we can actually assert that we parse the version correctly.
It turns out we weren't. So I fixed it.
Original commit: elastic/x-pack-elasticsearch@2e8f397bf4
* Fix several NOCOMMITS
- renamed Assert to Check to make the intent clear
- clarify esMajor/Minor inside connection (these are actually our own
methods, not part of the JDBC API)
- wire pageTimeout into Cursor#nextPage
Original commit: elastic/x-pack-elasticsearch@7626c0a44a
* Remove usage of Settings inside SqlSettings
Also hook client timeouts to the backend
Set UTC as the default timezone when using CSV
Since the JVM timezone can change, make sure to pin it to UTC, as this is what the results are computed against
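As a tiny, hypothetical sketch of the idea (not the actual test setup): pin the JVM default timezone to UTC before comparing against the CSV expectations, which are computed in UTC:

```java
import java.util.TimeZone;

// Hypothetical sketch: force the default timezone to UTC so results line up
// with the CSV expectations regardless of where the JVM runs.
final class UtcPinSketch {
    static void pinToUtc() {
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }
}
```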
Original commit: elastic/x-pack-elasticsearch@3e7aad8c1f
Better handling of SQL exceptions (result of incorrect queries) vs
unexpected ones (engine failure, ES...)
Original commit: elastic/x-pack-elasticsearch@2698402cdb
These wrap `DataInput` and `DataOutput` to add the protocol
version being serialized. This is similar to the mechanism
used by core and it has made adding and removing fields from
the serialization protocol fairly simple.
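Roughly, the wrappers look something like this sketch (hypothetical names, heavily trimmed): the protocol version rides along with the stream so serialization code can branch on it:

```java
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical sketch of a version-carrying wrapper around DataOutput; a
// matching wrapper around DataInput works the same way on the read side.
final class VersionedDataOutput {
    private final DataOutput delegate;
    private final int version;

    VersionedDataOutput(DataOutput delegate, int version) {
        this.delegate = delegate;
        this.version = version;
    }

    int version() {
        return version;
    }

    void writeInt(int v) throws IOException {
        delegate.writeInt(v);
    }

    void writeUTF(String s) throws IOException {
        delegate.writeUTF(s);
    }
    // ... the remaining DataOutput methods delegate the same way
}
```

Serialization code can then guard newer fields with a check like `if (out.version() >= SOME_VERSION) { out.writeUTF(newField); }`.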
Original commit: elastic/x-pack-elasticsearch@90b3f1199a
This renames the `write` and `read` methods in SQL to `writeTo` and
`readFrom` to line up with the names used in core. I don't have a
strong opinion whether or not any name is better than any other but
I figure there isn't a good reason for SQL to be different from the
rest of Elasticsearch.
Original commit: elastic/x-pack-elasticsearch@e5de9a4b81
* TimeoutInfo - This is now tracked in the SQL tracker github issue
* AbstractProto - Convert to a TODO as we *can* handle it after
release. I've added it to the SQL tracker github issue in a special
section for low priority protocol stuff. Protocol stuff is special
because if we can make the change before release we don't have to
worry about backwards compatibility.
Original commit: elastic/x-pack-elasticsearch@dbef9db5f8
* Move CLI to TransportSqlAction
* Moves REST endpoint from `/_cli` to `/_sql/cli`
* Removes the special purpose CLI transport action and instead
implements the CLI entirely on the REST layer, delegating
all SQL stuff to the same action that backs the `/_sql` REST
API.
* Reworks "embedded testing mode" to use a `FilterClient` to
bounce capture the sql transport action and execute in embedded.
* Switches CLI formatting from consuming the entire response
to consuming just the first page of the response and returning
a `cursor` that can be used to read the next page. That read is
not yet implemented.
* Switch CLI formatting from consuming the `RowSetCursor` to
consuming the `SqlResponse` object.
* Adds tests for CLI formatting.
* Support next page in the cli
* Rename cli's CommandRequest/CommandResponse to
QueryInitRequest/QueryInitResponse to line up with jdbc
* Implement QueryPageRequest/QueryPageResponse in cli
* Use `byte[]` to represent the cursor in the cli. Those bytes
mean something, but only to the server. The only reasoning that
the client does about them is "if length == 0 then there isn't a
next page" (see the sketch after this list).
* Pull common code from jdbc's QueryInitRequest, QueryPageRequest,
QueryInitResponse, and QueryPageResponse into the shared-proto
project
* By implication this switches jdbc's QueryPageRequest to using
the same cursor implementation as the cli
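The sketch referenced above, showing the only reasoning the cli does about those bytes (hypothetical helper, not the actual code):

```java
// Hypothetical sketch: the cursor bytes are opaque to the client; the single
// check the cli performs is whether a next page exists at all.
final class CliCursorSketch {
    static boolean hasNextPage(byte[] cursor) {
        return cursor != null && cursor.length > 0;
    }
}
```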
Original commit: elastic/x-pack-elasticsearch@193586f1ee
Too big. Sorry. Some good things though:
1. Share some code between CLI and JDBC. Probably a good thing
at this point, better as we go on, I think.
2. Add round trip tests for all of proto (see the sketch after this list).
3. Remove the `data` member from `QueryInitResponse` and
`QueryPageResponse` so response serialization is consistent with
everything else.
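The sketch referenced in point 2, using a made-up `ExampleRequest` instead of the real proto classes to show the round-trip shape (write out, read back, compare):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical, self-contained round-trip check; the real tests cover the
// actual request/response classes in the shared proto project.
final class RoundTripSketch {
    static final class ExampleRequest {
        final String query;
        final int fetchSize;

        ExampleRequest(String query, int fetchSize) {
            this.query = query;
            this.fetchSize = fetchSize;
        }

        void writeTo(DataOutput out) throws IOException {
            out.writeUTF(query);
            out.writeInt(fetchSize);
        }

        static ExampleRequest readFrom(DataInput in) throws IOException {
            return new ExampleRequest(in.readUTF(), in.readInt());
        }
    }

    public static void main(String[] args) throws IOException {
        ExampleRequest original = new ExampleRequest("SELECT 1", 10);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));
        ExampleRequest copy = ExampleRequest.readFrom(
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        if (!original.query.equals(copy.query) || original.fetchSize != copy.fetchSize) {
            throw new AssertionError("round trip lost data");
        }
        System.out.println("round trip ok");
    }
}
```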
Original commit: elastic/x-pack-elasticsearch@c6940a32ed