[[sql-rest]]
== SQL REST API
The SQL REST API accepts SQL in a JSON document, executes it,
and returns the results. For example:
[source,js]
--------------------------------------------------
POST /_xpack/sql
{
"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author | name | page_count | release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star |768 |2004-03-02T00:00:00.000Z
Vernor Vinge |A Fire Upon the Deep|613 |1992-06-01T00:00:00.000Z
Frank Herbert |Dune |604 |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space |585 |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes |561 |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[_cat]
You can also choose to get results in a structured format by adding the `format` URL parameter. Currently supported formats:

- `text` (default)
- `json`
- `smile`
- `yaml`
- `cbor`

Alternatively, you can set the `Accept` HTTP header to the appropriate media type.
All formats above are supported; the URL parameter takes precedence over the header.
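For example, a minimal `curl` sketch that asks for YAML output through the
`Accept` header (the host, port, and the `application/yaml` media type are
assumptions; adjust them for your setup):
[source,sh]
--------------------------------------------------
# Ask for YAML output via the Accept header; text would be returned by default.
curl -XPOST 'localhost:9200/_xpack/sql' \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/yaml' \
     -d '{"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"}'
--------------------------------------------------
Using the `format` URL parameter instead looks like this: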
[source,js]
--------------------------------------------------
POST /_xpack/sql?format=json
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,js]
--------------------------------------------------
{
"columns": [
{"name": "author", "type": "text"},
{"name": "name", "type": "text"},
{"name": "page_count", "type": "short"},
{"name": "release_date", "type": "date"}
],
"rows": [
["Peter F. Hamilton", "Pandora's Star", 768, "2004-03-02T00:00:00.000Z"],
["Vernor Vinge", "A Fire Upon the Deep", 613, "1992-06-01T00:00:00.000Z"],
["Frank Herbert", "Dune", 604, "1965-06-01T00:00:00.000Z"],
["Alastair Reynolds", "Revelation Space", 585, "2000-03-15T00:00:00.000Z"],
["James S.A. Corey", "Leviathan Wakes", 561, "2011-06-02T00:00:00.000Z"]
],
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
You can continue to the next page by sending back the `cursor` field. In
the case of the text format, the cursor is returned in the `Cursor` HTTP
header (a `curl` sketch of that follows the example below).
[source,js]
--------------------------------------------------
POST /_xpack/sql?format=json
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which looks like:
[source,js]
--------------------------------------------------
{
"rows" : [
["Dan Simmons", "Hyperion", 482, "1989-05-26T00:00:00.000Z"],
["Iain M. Banks", "Consider Phlebas", 471, "1987-04-23T00:00:00.000Z"],
["Neal Stephenson", "Snow Crash", 470, "1992-06-01T00:00:00.000Z"],
["Frank Herbert", "God Emperor of Dune", 454, "1981-05-28T00:00:00.000Z"],
["Frank Herbert", "Children of Dune", 408, "1976-04-21T00:00:00.000Z"]
],
"cursor" : "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f\/\/\/w8=/$body.cursor/]
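As noted above, for the text format the cursor is returned in the `Cursor`
HTTP header rather than in the body. A minimal `curl` sketch that surfaces
it (host and port are placeholders):
[source,sh]
--------------------------------------------------
# -i prints the response headers; the Cursor header carries the value to
# send back in the "cursor" field of the next request.
curl -i -XPOST 'localhost:9200/_xpack/sql' \
     -H 'Content-Type: application/json' \
     -d '{"query": "SELECT * FROM library ORDER BY page_count DESC", "fetch_size": 5}'
--------------------------------------------------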
Note that the `columns` object is only part of the first page.
You've reached the last page when there is no `cursor` returned
in the results. Like Elasticsearch's <<search-request-scroll,scroll>>,
SQL may keep state in Elasticsearch to support the cursor. Unlike
scroll, receiving the last page is enough to guarantee that the
Elasticsearch state is cleared.
To clear the state earlier, you can use the clear cursor command:
[source,js]
--------------------------------------------------
POST /_xpack/sql/close
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which returns:
[source,js]
--------------------------------------------------
{
"succeeded" : true
}
--------------------------------------------------
// TESTRESPONSE
[[sql-rest-filtering]]
You can filter the results that SQL will run on by specifying standard
Elasticsearch query DSL in the `filter` parameter.
[source,js]
--------------------------------------------------
POST /_xpack/sql
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"filter": {
"range": {
"page_count": {
"gte" : 100,
"lte" : 200
}
}
},
"fetch_size": 5
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author | name | page_count | release_date
---------------+------------------------------------+---------------+------------------------
Douglas Adams |The Hitchhiker's Guide to the Galaxy|180 |1979-10-12T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[_cat]
[[sql-rest-fields]]
In addition to the `query` and `cursor` fields, the request can
contain `fetch_size` and `time_zone`. `fetch_size` is a hint for how
many results to return on each page; SQL might choose to return more
or fewer results though. `time_zone` is the time zone to use for date
functions and date parsing. It defaults to `UTC` and can take
any value documented
http://www.joda.org/joda-time/apidocs/org/joda/time/DateTimeZone.html[here].
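For example, a sketch of a request that combines both fields (host, port,
and the time zone ID are illustrative):
[source,sh]
--------------------------------------------------
# Hint at 5 rows per page and evaluate date functions in the
# America/Los_Angeles time zone instead of the default UTC.
curl -XPOST 'localhost:9200/_xpack/sql' \
     -H 'Content-Type: application/json' \
     -d '{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5,
  "time_zone": "America/Los_Angeles"
}'
--------------------------------------------------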