[role="xpack"]
[testenv="basic"]
[[sql-rest]]
== SQL REST API
* <<sql-rest-overview>>
* <<sql-rest-format>>
* <<sql-pagination>>
* <<sql-rest-filtering>>
* <<sql-rest-columnar>>
* <<sql-rest-params>>
* <<sql-runtime-fields>>
* <<sql-rest-fields>>
* <<sql-async>>
[[sql-rest-overview]]
=== Overview
The SQL REST API accepts SQL in a JSON document, executes it,
and returns the results.
For example:
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author | name | page_count | release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star |768 |2004-03-02T00:00:00.000Z
Vernor Vinge |A Fire Upon the Deep|613 |1992-06-01T00:00:00.000Z
Frank Herbert |Dune |604 |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space |585 |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes |561 |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]
[[sql-kibana-console]]
[TIP]
.Using Kibana Console
====
If you are using {kibana-ref}/console-kibana.html[Kibana Console]
(which is highly recommended), take advantage of the
triple quotes `"""` when creating the query. This not only automatically escapes double
quotes (`"`) inside the query string but also supports multi-line queries as shown below:
image:images/sql/rest/console-triple-quotes.png[]
====
[[sql-rest-format]]
=== Response Data Formats
While the textual format is nice for humans, computers prefer something
more structured.
{es-sql} can return the data in the following formats which can be set
either through the `format` property in the URL or by setting the `Accept` HTTP header:
NOTE: The URL parameter takes precedence over the `Accept` HTTP header.
If neither is specified then the response is returned in the same format as the request.
[cols="^m,^4m,^8"]
|===
s|format
s|`Accept` HTTP header
s|Description
3+h| Human Readable
|csv
|text/csv
|{wikipedia}/Comma-separated_values[Comma-separated values]
|json
|application/json
|https://www.json.org/[JSON] (JavaScript Object Notation) human-readable format
|tsv
|text/tab-separated-values
|{wikipedia}/Tab-separated_values[Tab-separated values]
|txt
|text/plain
|CLI-like representation
|yaml
|application/yaml
|{wikipedia}/YAML[YAML] (YAML Ain't Markup Language) human-readable format
3+h| Binary Formats
|cbor
|application/cbor
|https://cbor.io/[Concise Binary Object Representation]
|smile
|application/smile
|{wikipedia}/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR
|===
The `CSV` format accepts a formatting URL query attribute, `delimiter`, which indicates which character should be used to separate the CSV
values. It defaults to comma (`,`) and cannot take any of the following values: double quote (`"`), carriage-return (`\r`) and new-line (`\n`).
The tab character (`\t`) cannot be used either; use the `tsv` format instead.

Here are some examples for the human-readable formats:
==== CSV
[source,console]
--------------------------------------------------
POST /_sql?format=csv
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
which returns:
[source,text]
--------------------------------------------------
author,name,page_count,release_date
Peter F. Hamilton,Pandora's Star,768,2004-03-02T00:00:00.000Z
Vernor Vinge,A Fire Upon the Deep,613,1992-06-01T00:00:00.000Z
Frank Herbert,Dune,604,1965-06-01T00:00:00.000Z
Alastair Reynolds,Revelation Space,585,2000-03-15T00:00:00.000Z
James S.A. Corey,Leviathan Wakes,561,2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[non_json]
or:
[source,console]
--------------------------------------------------
POST /_sql?format=csv&delimiter=%3b
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
which returns:
[source,text]
--------------------------------------------------
author;name;page_count;release_date
Peter F. Hamilton;Pandora's Star;768;2004-03-02T00:00:00.000Z
Vernor Vinge;A Fire Upon the Deep;613;1992-06-01T00:00:00.000Z
Frank Herbert;Dune;604;1965-06-01T00:00:00.000Z
Alastair Reynolds;Revelation Space;585;2000-03-15T00:00:00.000Z
James S.A. Corey;Leviathan Wakes;561;2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[non_json]
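The delimiter rules above can be sketched as a small helper that builds the `_sql` URL, percent-encoding the delimiter and rejecting the characters the `csv` format forbids. This is a minimal illustration in Python; the `sql_url` helper is hypothetical and not part of any client library:

```python
from urllib.parse import quote

# Characters the csv format rejects as delimiters; tab requires the tsv format.
FORBIDDEN_DELIMITERS = {'"', "\r", "\n", "\t"}

def sql_url(fmt, delimiter=None):
    """Build a /_sql URL, percent-encoding the optional CSV delimiter."""
    url = f"/_sql?format={fmt}"
    if delimiter is not None and delimiter != ",":
        if delimiter in FORBIDDEN_DELIMITERS:
            raise ValueError(f"delimiter {delimiter!r} is not allowed for csv")
        url += f"&delimiter={quote(delimiter, safe='')}"
    return url
```

For example, `sql_url("csv", ";")` produces the percent-encoded URL used in the request above.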
==== JSON
[source,console]
--------------------------------------------------
POST /_sql?format=json
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,console-result]
--------------------------------------------------
{
"columns": [
{"name": "author", "type": "text"},
{"name": "name", "type": "text"},
{"name": "page_count", "type": "short"},
{"name": "release_date", "type": "datetime"}
],
"rows": [
["Peter F. Hamilton", "Pandora's Star", 768, "2004-03-02T00:00:00.000Z"],
["Vernor Vinge", "A Fire Upon the Deep", 613, "1992-06-01T00:00:00.000Z"],
["Frank Herbert", "Dune", 604, "1965-06-01T00:00:00.000Z"],
["Alastair Reynolds", "Revelation Space", 585, "2000-03-15T00:00:00.000Z"],
["James S.A. Corey", "Leviathan Wakes", 561, "2011-06-02T00:00:00.000Z"]
],
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
==== TSV
[source,console]
--------------------------------------------------
POST /_sql?format=tsv
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author name page_count release_date
Peter F. Hamilton Pandora's Star 768 2004-03-02T00:00:00.000Z
Vernor Vinge A Fire Upon the Deep 613 1992-06-01T00:00:00.000Z
Frank Herbert Dune 604 1965-06-01T00:00:00.000Z
Alastair Reynolds Revelation Space 585 2000-03-15T00:00:00.000Z
James S.A. Corey Leviathan Wakes 561 2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\t/ /]
// TESTRESPONSE[non_json]
==== TXT
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author | name | page_count | release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star |768 |2004-03-02T00:00:00.000Z
Vernor Vinge |A Fire Upon the Deep|613 |1992-06-01T00:00:00.000Z
Frank Herbert |Dune |604 |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space |585 |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes |561 |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]
==== YAML
[source,console]
--------------------------------------------------
POST /_sql?format=yaml
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,yaml]
--------------------------------------------------
columns:
- name: "author"
type: "text"
- name: "name"
type: "text"
- name: "page_count"
type: "short"
- name: "release_date"
type: "datetime"
rows:
- - "Peter F. Hamilton"
- "Pandora's Star"
- 768
- "2004-03-02T00:00:00.000Z"
- - "Vernor Vinge"
- "A Fire Upon the Deep"
- 613
- "1992-06-01T00:00:00.000Z"
- - "Frank Herbert"
- "Dune"
- 604
- "1965-06-01T00:00:00.000Z"
- - "Alastair Reynolds"
- "Revelation Space"
- 585
- "2000-03-15T00:00:00.000Z"
- - "James S.A. Corey"
- "Leviathan Wakes"
- 561
- "2011-06-02T00:00:00.000Z"
cursor: "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
[[sql-pagination]]
=== Paginating through a large response
Using the example from the <<sql-rest-format,previous section>>, one can
continue to the next page by sending back the `cursor` field. In the case of the CSV, TSV and TXT
formats, the cursor is returned in the `Cursor` HTTP header.
[source,console]
--------------------------------------------------
POST /_sql?format=json
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which looks like:
[source,console-result]
--------------------------------------------------
{
"rows" : [
["Dan Simmons", "Hyperion", 482, "1989-05-26T00:00:00.000Z"],
["Iain M. Banks", "Consider Phlebas", 471, "1987-04-23T00:00:00.000Z"],
["Neal Stephenson", "Snow Crash", 470, "1992-06-01T00:00:00.000Z"],
["Frank Herbert", "God Emperor of Dune", 454, "1981-05-28T00:00:00.000Z"],
["Frank Herbert", "Children of Dune", 408, "1976-04-21T00:00:00.000Z"]
],
"cursor" : "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f\/\/\/w8=/$body.cursor/]
Note that the `columns` object is only part of the first page.
You've reached the last page when there is no `cursor` returned
in the results. Like Elasticsearch's <<scroll-search-results,scroll>>,
SQL may keep state in Elasticsearch to support the cursor. Unlike
scroll, receiving the last page is enough to guarantee that the
Elasticsearch state is cleared.
To clear the state earlier, you can use the clear cursor command:
[source,console]
--------------------------------------------------
POST /_sql/close
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
Which returns:
[source,console-result]
--------------------------------------------------
{
"succeeded" : true
}
--------------------------------------------------
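The page-by-page flow above can be sketched client-side: keep re-posting the returned cursor until a page arrives without one. A schematic Python sketch; `post_sql` is a stand-in for whatever HTTP client you use (it is an assumption, not part of the API) and must return the parsed JSON of `POST /_sql?format=json`:

```python
def fetch_all_rows(post_sql, query, fetch_size=1000):
    """Collect every row of a paginated SQL response.

    post_sql is any callable that takes a JSON-serializable body and
    returns the parsed JSON response of POST /_sql?format=json.
    """
    page = post_sql({"query": query, "fetch_size": fetch_size})
    columns = page["columns"]        # the columns object is only on the first page
    rows = list(page["rows"])
    while "cursor" in page:          # no cursor -> last page, server state is cleared
        page = post_sql({"cursor": page["cursor"]})
        rows.extend(page["rows"])
    return columns, rows
```

Because the final page carries no `cursor`, draining the loop is enough to clear the server-side state; `POST /_sql/close` is only needed when abandoning pagination early.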
[[sql-rest-filtering]]
=== Filtering using {es} Query DSL

One can filter the results that SQL will run on using standard
{es} Query DSL by specifying the query in the `filter`
parameter.
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"filter": {
"range": {
"page_count": {
"gte" : 100,
"lte" : 200
}
}
},
"fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,text]
--------------------------------------------------
author | name | page_count | release_date
---------------+------------------------------------+---------------+------------------------
Douglas Adams |The Hitchhiker's Guide to the Galaxy|180 |1979-10-12T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]
[TIP]
=================
A useful and less obvious usage for standard Query DSL filtering is to search documents by a specific <<search-routing, routing key>>.
Because {es-sql} does not support a `routing` parameter, one can specify a <<mapping-routing-field, `terms` filter for the `_routing` field>> instead:
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT * FROM library",
"filter": {
"terms": {
"_routing": ["abc"]
}
}
}
--------------------------------------------------
// TEST[setup:library]
=================
[[sql-rest-columnar]]
=== Columnar results

The most common way of displaying the results of an SQL query is row-based: each
individual record/document represents one line/row. For certain formats, {es-sql} can instead return the results
in a columnar fashion: one row represents all the values of a certain column from the current page of results.
The following formats can be returned in columnar orientation: `json`, `yaml`, `cbor` and `smile`.
[source,console]
--------------------------------------------------
POST /_sql?format=json
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5,
"columnar": true
}
--------------------------------------------------
// TEST[setup:library]
Which returns:
[source,console-result]
--------------------------------------------------
{
"columns": [
{"name": "author", "type": "text"},
{"name": "name", "type": "text"},
{"name": "page_count", "type": "short"},
{"name": "release_date", "type": "datetime"}
],
"values": [
["Peter F. Hamilton", "Vernor Vinge", "Frank Herbert", "Alastair Reynolds", "James S.A. Corey"],
["Pandora's Star", "A Fire Upon the Deep", "Dune", "Revelation Space", "Leviathan Wakes"],
[768, 613, 604, 585, 561],
["2004-03-02T00:00:00.000Z", "1992-06-01T00:00:00.000Z", "1965-06-01T00:00:00.000Z", "2000-03-15T00:00:00.000Z", "2011-06-02T00:00:00.000Z"]
],
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
Any subsequent calls using a `cursor` still have to contain the `columnar` parameter to preserve the orientation,
meaning the initial query will not _remember_ the columnar option.
[source,console]
--------------------------------------------------
POST /_sql?format=json
{
"cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8=",
"columnar": true
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
Which looks like:
[source,console-result]
--------------------------------------------------
{
"values": [
["Dan Simmons", "Iain M. Banks", "Neal Stephenson", "Frank Herbert", "Frank Herbert"],
["Hyperion", "Consider Phlebas", "Snow Crash", "God Emperor of Dune", "Children of Dune"],
[482, 471, 470, 454, 408],
["1989-05-26T00:00:00.000Z", "1987-04-23T00:00:00.000Z", "1992-06-01T00:00:00.000Z", "1981-05-28T00:00:00.000Z", "1976-04-21T00:00:00.000Z"]
],
"cursor": "46ToAwFzQERYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQUVXWjBaNlFXbzNOV0pVY21Wa1NUZDJhV2t3V2xwblp3PT3/////DwQBZgZhdXRob3IBBHRleHQAAAFmBG5hbWUBBHRleHQAAAFmCnBhZ2VfY291bnQBBGxvbmcBAAFmDHJlbGVhc2VfZGF0ZQEIZGF0ZXRpbWUBAAEP"
}
--------------------------------------------------
// TESTRESPONSE[s/46ToAwFzQERYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQUVXWjBaNlFXbzNOV0pVY21Wa1NUZDJhV2t3V2xwblp3PT3\/\/\/\/\/DwQBZgZhdXRob3IBBHRleHQAAAFmBG5hbWUBBHRleHQAAAFmCnBhZ2VfY291bnQBBGxvbmcBAAFmDHJlbGVhc2VfZGF0ZQEIZGF0ZXRpbWUBAAEP/$body.cursor/]
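Since each entry of `values` holds every value of one column for the current page, a client can recover the row orientation by transposing. A minimal Python sketch (the helper name is hypothetical):

```python
def columnar_to_rows(response):
    """Transpose a columnar response page back into row-oriented results.

    zip(*values) pairs up the i-th element of every column, which is
    exactly the i-th row of the page.
    """
    return [list(row) for row in zip(*response["values"])]
```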
[[sql-rest-params]]
=== Passing parameters to a query
Values used in a query condition or in a `HAVING` statement, for example, can be passed "inline"
by embedding the value in the query string itself:
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT YEAR(release_date) AS year FROM library WHERE page_count > 300 AND author = 'Frank Herbert' GROUP BY year HAVING COUNT(*) > 0"
}
--------------------------------------------------
// TEST[setup:library]
or it can be done by extracting the values in a separate list of parameters and using question mark placeholders (`?`) in the query string:
[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
"query": "SELECT YEAR(release_date) AS year FROM library WHERE page_count > ? AND author = ? GROUP BY year HAVING COUNT(*) > ?",
"params": [300, "Frank Herbert", 0]
}
--------------------------------------------------
// TEST[setup:library]
[IMPORTANT]
The recommended way of passing values to a query is with question mark placeholders, which avoids the risk of SQL injection.
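A small helper makes the placeholder style the path of least resistance: it builds the request body and checks that the number of values matches the number of `?` placeholders. This is a sketch, not a client API, and the placeholder count is naive (it assumes no literal `?` inside string constants in the query):

```python
def sql_body(query, *params):
    """Build a POST /_sql body that passes values as parameters
    rather than splicing them into the SQL string."""
    if query.count("?") != len(params):   # naive placeholder count
        raise ValueError("parameter count does not match placeholders")
    body = {"query": query}
    if params:
        body["params"] = list(params)
    return body
```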
[[sql-runtime-fields]]
=== Use runtime fields
Use the `runtime_mappings` parameter to extract and create <<runtime,runtime
fields>>, or columns, from existing ones during a search.
The following search creates a `release_day_of_week` runtime field from
`release_date` and returns it in the response.
[source,console]
----
POST _sql?format=txt
{
"runtime_mappings": {
"release_day_of_week": {
"type": "keyword",
"script": """
emit(doc['release_date'].value.dayOfWeekEnum.toString())
"""
}
},
"query": """
SELECT * FROM library WHERE page_count > 300 AND author = 'Frank Herbert'
"""
}
----
// TEST[setup:library]
The API returns:
[source,txt]
----
author | name | page_count | release_date |release_day_of_week
---------------+---------------+---------------+------------------------+-------------------
Frank Herbert |Dune |604 |1965-06-01T00:00:00.000Z|TUESDAY
----
// TESTRESPONSE[non_json]
[[sql-async]]
=== Run an async SQL search
By default, SQL searches are synchronous. They wait for complete results before
returning a response. However, results can take longer for searches across large
data sets or <<data-tiers,frozen data>>.
To avoid long waits, run an async SQL search. Set `wait_for_completion_timeout`
to a duration you'd like to wait for synchronous results.
[source,console]
----
POST _sql?format=json
{
"wait_for_completion_timeout": "2s",
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
----
// TEST[setup:library]
// TEST[s/"wait_for_completion_timeout": "2s"/"wait_for_completion_timeout": "0"/]
If the search doesn't finish within this period, the search becomes async. The
API returns:
* An `id` for the search.
* An `is_partial` value of `true`, indicating the search results are incomplete.
* An `is_running` value of `true`, indicating the search is still running in the
background.
For CSV, TSV, and TXT responses, the API returns these values in the respective
`Async-ID`, `Async-partial`, and `Async-running` HTTP headers instead.
[source,console-result]
----
{
"id": "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
"is_partial": true,
"is_running": true,
"rows": [ ]
}
----
// TESTRESPONSE[s/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=/$body.id/]
// TESTRESPONSE[s/"is_partial": true/"is_partial": $body.is_partial/]
// TESTRESPONSE[s/"is_running": true/"is_running": $body.is_running/]
To check the progress of an async search, use the search ID with the get async
SQL search status API.
[source,console]
----
GET _sql/async/status/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=
----
// TEST[skip: no access to search ID]
If `is_running` and `is_partial` are `false`, the async search has finished with
complete results.
[source,console-result]
----
{
"id": "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
"is_running": false,
"is_partial": false,
"expiration_time_in_millis": 1611690295000,
"completion_status": 200
}
----
// TESTRESPONSE[s/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=/$body.id/]
// TESTRESPONSE[s/"expiration_time_in_millis": 1611690295000/"expiration_time_in_millis": $body.expiration_time_in_millis/]
To get the results, use the search ID with the get async SQL search API. If the
search is still running, specify how long you'd like to wait using
`wait_for_completion_timeout`. You can also specify the response `format`.
[source,console]
----
GET _sql/async/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=?wait_for_completion_timeout=2s&format=json
----
// TEST[skip: no access to search ID]
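The submit-poll-fetch cycle above can be sketched as a small loop. `http_get` is a stand-in for your HTTP client (an assumption, not part of the API); the paths are the status and result endpoints shown above:

```python
import time

def await_async_results(http_get, search_id, poll_interval=1.0, sleep=time.sleep):
    """Poll GET _sql/async/status/<id> until the search stops running,
    then fetch the results from GET _sql/async/<id>."""
    while True:
        status = http_get(f"_sql/async/status/{search_id}")
        if not status.get("is_running", False):
            break
        sleep(poll_interval)   # back off between status checks
    return http_get(f"_sql/async/{search_id}?format=json")
```

Injecting `sleep` keeps the sketch testable and lets callers substitute an exponential backoff.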
[discrete]
[[sql-async-retention]]
==== Change the search retention period
By default, {es} stores async SQL searches for five days. After this period,
{es} deletes the search and its results, even if the search is still running. To
change this retention period, use the `keep_alive` parameter.
[source,console]
----
POST _sql?format=json
{
"keep_alive": "2d",
"wait_for_completion_timeout": "2s",
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
----
// TEST[setup:library]
You can use the get async SQL search API's `keep_alive` parameter to later
change the retention period. The new period starts after the request runs.
[source,console]
----
GET _sql/async/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=?keep_alive=5d&wait_for_completion_timeout=2s&format=json
----
// TEST[skip: no access to search ID]
Use the delete async SQL search API to delete an async search before the
`keep_alive` period ends. If the search is still running, {es} cancels it.
[source,console]
----
DELETE _sql/async/delete/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=
----
// TEST[skip: no access to search ID]
[discrete]
[[sql-store-searches]]
==== Store synchronous SQL searches
By default, {es} only stores async SQL searches. To save a synchronous search,
specify `wait_for_completion_timeout` and set `keep_on_completion` to `true`.
[source,console]
----
POST _sql?format=json
{
"keep_on_completion": true,
"wait_for_completion_timeout": "2s",
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 5
}
----
// TEST[setup:library]
If `is_partial` and `is_running` are `false`, the search was synchronous and
returned complete results.
[source,console-result]
----
{
"id": "Fnc5UllQdUVWU0NxRFNMbWxNYXplaFEaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQTo0NzA=",
"is_partial": false,
"is_running": false,
"rows": ...,
"columns": ...,
"cursor": ...
}
----
// TESTRESPONSE[s/Fnc5UllQdUVWU0NxRFNMbWxNYXplaFEaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQTo0NzA=/$body.id/]
// TESTRESPONSE[s/"rows": \.\.\./"rows": $body.rows/]
// TESTRESPONSE[s/"columns": \.\.\./"columns": $body.columns/]
// TESTRESPONSE[s/"cursor": \.\.\./"cursor": $body.cursor/]
You can get the same results later using the search ID with the get async SQL
search API.
Saved synchronous searches are still subject to the `keep_alive` retention
period. When this period ends, {es} deletes the search results. You can also
delete saved searches using the delete async SQL search API.
[[sql-rest-fields]]
=== Supported REST parameters
In addition to `query` and `fetch_size`, a request accepts a number of user-defined parameters for specifying
the request time-outs or localization information (such as timezone).
The table below lists the supported parameters:
[cols="<m,<m,<5"]
|===
s|name
s|Default value
s|Description
|query
|Mandatory
|SQL query to execute
|fetch_size
|1000
|The maximum number of rows (or entries) to return in one response
|filter
|none
|Optional {es} Query DSL for additional <<sql-rest-filtering, filtering>>.
|request_timeout
|90s
|The timeout before the request fails.
|page_timeout
|45s
|The timeout before a pagination request fails.
|[[sql-rest-fields-timezone]]time_zone
|`Z` (or `UTC`)
|Time-zone in ISO 8601 used for executing the query on the server.
More information available https://docs.oracle.com/javase/8/docs/api/java/time/ZoneId.html[here].
|columnar
|false
|Return the results in a columnar fashion, rather than row-based fashion. Valid for `json`, `yaml`, `cbor` and `smile`.
|field_multi_value_leniency
|false
|Throw an exception when encountering multiple values for a field (default) or be lenient and return the first value from the list (without any guarantees of what that will be - typically the first in natural ascending order).
|index_include_frozen
|false
|Whether to include <<freeze-index-api, frozen indices>> in the query execution or not (default).
|params
|none
|Optional list of parameters to replace question mark (`?`) placeholders inside the query.
|runtime_mappings
|none
|Defines one or more <<runtime-search-request,runtime fields>> in the search
request. These fields take precedence over mapped fields with the same name.
|keep_alive
|`5d`
a|Period {es} stores the search and its results. Must be greater than `1m` (one
minute).
When this period ends, {es} deletes the search and its results, even if the
search is still running.
If `keep_on_completion` is `false`, {es} only stores searches that don't finish
within the `wait_for_completion_timeout`, regardless of this value.
|keep_on_completion
|`false`
|If `true`, {es} stores the search and its results. If `false`, {es} stores the
search and its results only if it doesn't finish before the
`wait_for_completion_timeout`.
|wait_for_completion_timeout
|None
a|Duration to wait for the search to finish. Defaults to no timeout, meaning the
request waits for complete results.
If you specify this parameter and the search finishes during the timeout, the
request returns complete results. If the search doesn't finish during this
period, the search becomes async. See <<sql-async>>.
|===
Do note that most parameters (outside the timeout and `columnar` ones) make sense only during the initial query - any follow-up pagination request only requires the `cursor` parameter, as explained in the <<sql-pagination, pagination>> chapter.
That's because the query has already been executed and the follow-up calls simply return the already found results - thus those parameters are simply ignored.