[role="xpack"]
[[sql-rest]]
== SQL REST API
The SQL REST API accepts SQL in a JSON document, executes it,
and returns the results. For example:
[source,js]
--------------------------------------------------
POST /_xpack/sql
{
"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
     author      |        name        |  page_count   | release_date
-----------------+--------------------+---------------+---------------
Peter F. Hamilton|Pandora's Star      |768            |1078185600000
Vernor Vinge     |A Fire Upon the Deep|613            |707356800000
Frank Herbert    |Dune                |604            |-144720000000
Alastair Reynolds|Revelation Space    |585            |953078400000
James S.A. Corey |Leviathan Wakes     |561            |1306972800000
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[_cat]
You can also choose to get results in a structured format by adding the `format` parameter. Currently supported formats:

- text (default)
- json
- smile
- yaml
- cbor

Alternatively, you can set the `Accept` HTTP header to the appropriate media type. All of the formats above are supported either way; the `format` URL parameter takes precedence over the header.
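
For instance, outside the Console you could select JSON output through the header alone. The following is only a sketch, assuming a node reachable on `localhost:9200`:

[source,sh]
--------------------------------------------------
# Sketch: pick the JSON response format via the Accept header
curl -s -X POST "localhost:9200/_xpack/sql" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"}'
--------------------------------------------------
// NOTCONSOLE

From the Console, the same choice is made with the `format` URL parameter: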
[source,js]
--------------------------------------------------
POST /_xpack/sql?format=json
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,js]
--------------------------------------------------
{
"columns": [
{"name": "author", "type": "text"},
{"name": "name", "type": "text"},
{"name": "page_count", "type": "short"},
{"name": "release_date", "type": "date"}
],
"size": 5,
"rows": [
["Peter F. Hamilton", "Pandora's Star", 768, 1078185600000],
["Vernor Vinge", "A Fire Upon the Deep", 613, 707356800000],
["Frank Herbert", "Dune", 604, -144720000000],
["Alastair Reynolds", "Revelation Space", 585, 953078400000],
["James S.A. Corey", "Leviathan Wakes", 561, 1306972800000]
],
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
You can continue to the next page by sending back the `cursor` field. In
the case of the text format, the cursor is returned as the `Cursor` HTTP header.
[source,js]
--------------------------------------------------
POST /_xpack/sql?format=json
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which looks like:
[source,js]
--------------------------------------------------
{
"size" : 5,
"rows" : [
["Dan Simmons", "Hyperion", 482, 612144000000],
["Iain M. Banks", "Consider Phlebas", 471, 546134400000],
["Neal Stephenson", "Snow Crash", 470, 707356800000],
["Frank Herbert", "God Emperor of Dune", 454, 359856000000],
["Frank Herbert", "Children of Dune", 408, 198892800000]
],
"cursor" : "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f\/\/\/w8=/$body.cursor/]
Note that the `columns` object is only part of the first page.
You've reached the last page when there is no `cursor` returned
in the results. Like Elasticsearch's <<search-request-scroll,scroll>>,
SQL may keep state in Elasticsearch to support the cursor. Unlike
scroll, receiving the last page is enough to guarantee that the
Elasticsearch state is cleared.
To clear the state earlier, you can use the clear cursor command:
[source,js]
--------------------------------------------------
POST /_xpack/sql/close
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which will return:
[source,js]
--------------------------------------------------
{
"succeeded" : true
}
--------------------------------------------------
// TESTRESPONSE
[[sql-rest-filtering]]
You can filter the results that SQL will run on using standard
Elasticsearch Query DSL by specifying the query in the `filter`
parameter.
[source,js]
--------------------------------------------------
POST /_xpack/sql
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"filter": {
"range": {
"page_count": {
"gte" : 100,
"lte" : 200
}
}
},
"fetch_size": 5
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
    author     |                name                |  page_count   | release_date
---------------+------------------------------------+---------------+---------------
Douglas Adams  |The Hitchhiker's Guide to the Galaxy|180            |308534400000
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[_cat]
[[sql-rest-fields]]
In addition to the `query` and `cursor` fields, the request can
contain `fetch_size` and `time_zone`. `fetch_size` is a hint for how
many results to return in each page; SQL might choose to return more
or fewer results though. `time_zone` is the time zone to use for date
functions and date parsing. It defaults to `utc` and can take any of
the values documented
http://www.joda.org/joda-time/apidocs/org/joda/time/DateTimeZone.html[here].
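
For example, a request that sets both fields might look like the sketch below; the `Europe/Paris` value is just an illustrative time zone ID:

[source,js]
--------------------------------------------------
POST /_xpack/sql
{
    "query": "SELECT * FROM library ORDER BY page_count DESC",
    "fetch_size": 5,
    "time_zone": "Europe/Paris"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]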