DOC: Add examples to the SQL docs (#31633)
Significantly improve the example snippets in the documentation. The examples are part of the test suite and are checked nightly. To help readability, the existing dataset was extended (test_emp renamed to emp, plus a new library dataset). Improve the output of the JDBC tests to be consistent with the CLI. Add a lenient flag to the JDBC asserts to allow type widening (a long is equivalent to an integer as long as the value is the same).
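For context, the lenient comparison widens numeric JDBC types to a common, compatible type before comparing columns. Below is a minimal standalone sketch of that idea, mirroring the `typeOf` helper added to `JdbcAssert` further down in this diff; the class and method names here are hypothetical, for illustration only:

[source,java]
----
import java.sql.Types;

public class LenientTypeWidening {

    // Widen integer types to BIGINT and floating-point types to REAL so that,
    // e.g., an INTEGER column compares equal to a BIGINT column when the values match.
    static int widen(int columnType) {
        switch (columnType) {
            case Types.TINYINT:
            case Types.SMALLINT:
            case Types.INTEGER:
            case Types.BIGINT:
                return Types.BIGINT;
            case Types.FLOAT:
            case Types.REAL:
            case Types.DOUBLE:
                return Types.REAL;
            default:
                return columnType;
        }
    }

    public static void main(String[] args) {
        System.out.println(widen(Types.INTEGER) == widen(Types.BIGINT)); // true  (lenient match)
        System.out.println(widen(Types.FLOAT) == widen(Types.DOUBLE));   // true  (lenient match)
        System.out.println(widen(Types.VARCHAR) == widen(Types.BIGINT)); // false (no widening)
    }
}
----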
This commit is contained in:
parent 4108722052
commit de9e56aa01
@@ -20,3 +20,8 @@ DESC table
.Description

`DESC` and `DESCRIBE` are aliases to <<sql-syntax-show-columns>>.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[describeTable]
----
@@ -36,23 +36,26 @@ The general execution of `SELECT` is as follows:

As with a table, every output column of a `SELECT` has a name which can be either specified per column through the `AS` keyword:

[source,sql]
["source","sql",subs="attributes,callouts,macros"]
----
SELECT column AS c
include-tagged::{sql-specs}/docs.csv-spec[selectColumnAlias]
----

Note: `AS` is an optional keyword, however it helps with the readability and in some cases ambiguity of the query,
which is why it is recommended to specify it.

assigned by {es-sql} if no name is given:

[source,sql]
["source","sql",subs="attributes,callouts,macros"]
----
SELECT 1 + 1
include-tagged::{sql-specs}/docs.csv-spec[selectInline]
----

or if it's a simple column reference, use its name as the column name:

[source,sql]
["source","sql",subs="attributes,callouts,macros"]
----
SELECT col FROM table
include-tagged::{sql-specs}/docs.csv-spec[selectColumn]
----

[[sql-syntax-select-wildcard]]
@@ -61,11 +64,11 @@ SELECT col FROM table
To select all the columns in the source, one can use `*`:

["source","sql",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{sql-specs}/select.sql-spec[wildcardWithOrder]
--------------------------------------------------
----
include-tagged::{sql-specs}/docs.csv-spec[wildcardWithOrder]
----

which essentially returns all columsn found.
which essentially returns all columns found (top-level fields; sub-fields, such as multi-fields, are ignored).

[[sql-syntax-from]]
[float]
@@ -83,17 +86,30 @@ where:
`table_name`::

Represents the name (optionally qualified) of an existing table, either a concrete or base one (actual index) or alias.

If the table name contains special SQL characters (such as `.`, `-`, etc.) use double quotes to escape them:
[source, sql]

["source","sql",subs="attributes,callouts,macros"]
----
SELECT ... FROM "some-table"
include-tagged::{sql-specs}/docs.csv-spec[fromTableQuoted]
----

The name can be a <<multi-index, pattern>> pointing to multiple indices (likely requiring quoting as mentioned above) with the restriction that *all* resolved concrete tables have **exact mapping**.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[fromTablePatternQuoted]
----

`alias`::
A substitute name for the `FROM` item containing the alias. An alias is used for brevity or to eliminate ambiguity. When an alias is provided, it completely hides the actual name of the table and must be used in its place.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[fromTableAlias]
----

[[sql-syntax-where]]
[float]
==== WHERE Clause
@@ -111,6 +127,11 @@ where:

Represents an expression that evaluates to a `boolean`. Only the rows that match the condition (to `true`) are returned.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[basicWhere]
----

[[sql-syntax-group-by]]
[float]
==== GROUP BY
@@ -126,10 +147,80 @@ where:

`grouping_element`::

Represents an expression on which rows are being grouped _on_. It can be a column name, name or ordinal number of a column or an arbitrary expression of column values.
Represents an expression on which rows are being grouped _on_. It can be a column name, alias or ordinal number of a column or an arbitrary expression of column values.

A common case, grouping by column name:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByColumn]
----

Grouping by output ordinal:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByOrdinal]
----

Grouping by alias:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByAlias]
----

And grouping by column expression (typically used alongside an alias):

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByExpression]
----

When a `GROUP BY` clause is used in a `SELECT`, _all_ output expressions must be either aggregate functions or expressions used for grouping, or derivatives thereof (otherwise there would be more than one possible value to return for each ungrouped column).

To wit:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByAndAgg]
----

Expressions over aggregates used in output:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByAndAggExpression]
----

Multiple aggregates used:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByAndMultipleAggs]
----

[[sql-syntax-group-by-implicit]]
[float]
===== Implicit Grouping

When an aggregation is used without an associated `GROUP BY`, an __implicit grouping__ is applied, meaning all selected rows are considered to form a single default, or implicit group.
As such, the query emits only a single row (as there is only a single group).

A common example is counting the number of records:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByImplicitCount]
----

Of course, multiple aggregations can be applied:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByImplicitMultipleAggs]
----

[[sql-syntax-having]]
[float]
==== HAVING
@@ -147,13 +238,44 @@ where:

Represents an expression that evaluates to a `boolean`. Only groups that match the condition (to `true`) are returned.

Both `WHERE` and `HAVING` are used for filtering however there are several differences between them:
Both `WHERE` and `HAVING` are used for filtering, however there are several significant differences between them:

. `WHERE` works on individual *rows*, `HAVING` works on the *groups* created by ``GROUP BY``
. `WHERE` is evaluated *before* grouping, `HAVING` is evaluated *after* grouping

Note that it is possible to have a `HAVING` clause without a ``GROUP BY``. In this case, an __implicit grouping__ is applied, meaning all selected rows are considered to form a single group and `HAVING` can be applied on any of the aggregate functions specified on this group.
As such, a query emits only a single row (as there is only a single group) and the `HAVING` condition returns either one row (the group) or zero if the condition fails.
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByHaving]
----

Furthermore, one can use multiple aggregate expressions inside `HAVING`, even ones that are not used in the output (`SELECT`):

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByHavingMultiple]
----

[[sql-syntax-having-group-by-implicit]]
[float]
===== Implicit Grouping

As indicated above, it is possible to have a `HAVING` clause without a ``GROUP BY``. In this case, the so-called <<sql-syntax-group-by-implicit, __implicit grouping__>> is applied, meaning all selected rows are considered to form a single group and `HAVING` can be applied on any of the aggregate functions specified on this group.
As such, the query emits only a single row (as there is only a single group) and the `HAVING` condition returns either one row (the group) or zero if the condition fails.

In this example, `HAVING` matches:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[groupByHavingImplicitMatch]
----

//However `HAVING` can also not match, in which case an empty result is returned:
//
//["source","sql",subs="attributes,callouts,macros"]
//----
//include-tagged::{sql-specs}/docs.csv-spec[groupByHavingImplicitNoMatch]
//----

[[sql-syntax-order-by]]
[float]
@@ -178,30 +300,10 @@ IMPORTANT: When used along-side, `GROUP BY` expression can point _only_ to the c

For example, the following query sorts by an arbitrary input field (`page_count`):

[source,js]
--------------------------------------------------
POST /_xpack/sql?format=txt
{
  "query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]

which results in something like:

[source,text]
--------------------------------------------------
     author      |        name        |  page_count   |      release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star      |768            |2004-03-02T00:00:00.000Z
Vernor Vinge     |A Fire Upon the Deep|613            |1992-06-01T00:00:00.000Z
Frank Herbert    |Dune                |604            |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space    |585            |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[_cat]
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[orderByBasic]
----

[[sql-syntax-order-by-score]]
==== Order By Score
@@ -215,54 +317,18 @@ combined using the same rules as {es}'s

To sort based on the `score`, use the special function `SCORE()`:

[source,js]
--------------------------------------------------
POST /_xpack/sql?format=txt
{
  "query": "SELECT SCORE(), * FROM library WHERE match(name, 'dune') ORDER BY SCORE() DESC"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[orderByScore]
----

Which results in something like:
Note that you can return `SCORE()` by using a full-text search predicate in the `WHERE` clause.
This is possible even if `SCORE()` is not used for sorting:

[source,text]
--------------------------------------------------
    SCORE()    |    author     |       name        |  page_count   |      release_date
---------------+---------------+-------------------+---------------+------------------------
2.288635       |Frank Herbert  |Dune               |604            |1965-06-01T00:00:00.000Z
1.8893257      |Frank Herbert  |Dune Messiah       |331            |1969-10-15T00:00:00.000Z
1.6086555      |Frank Herbert  |Children of Dune   |408            |1976-04-21T00:00:00.000Z
1.4005898      |Frank Herbert  |God Emperor of Dune|454            |1981-05-28T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/ s/\(/\\\(/ s/\)/\\\)/]
// TESTRESPONSE[_cat]

Note that you can return `SCORE()` by adding it to the where clause. This
is possible even if you are not sorting by `SCORE()`:

[source,js]
--------------------------------------------------
POST /_xpack/sql?format=txt
{
  "query": "SELECT SCORE(), * FROM library WHERE match(name, 'dune') ORDER BY page_count DESC"
}
--------------------------------------------------
// CONSOLE
// TEST[setup:library]

[source,text]
--------------------------------------------------
    SCORE()    |    author     |       name        |  page_count   |      release_date
---------------+---------------+-------------------+---------------+------------------------
2.288635       |Frank Herbert  |Dune               |604            |1965-06-01T00:00:00.000Z
1.4005898      |Frank Herbert  |God Emperor of Dune|454            |1981-05-28T00:00:00.000Z
1.6086555      |Frank Herbert  |Children of Dune   |408            |1976-04-21T00:00:00.000Z
1.8893257      |Frank Herbert  |Dune Messiah       |331            |1969-10-15T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/ s/\(/\\\(/ s/\)/\\\)/]
// TESTRESPONSE[_cat]
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[orderByScoreWithMatch]
----

NOTE:
Trying to return `score` from non full-text queries will return the same value for all results, as
@@ -284,3 +350,10 @@ where
count:: is a positive integer or zero indicating the maximum *possible* number of results being returned (as there might be fewer matches than the limit). If `0` is specified, no results are returned.

ALL:: indicates there is no limit and thus all results are being returned.

To return only a subset of the results:

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[limitBasic]
----
@@ -12,3 +12,8 @@ SHOW COLUMNS [ FROM | IN ] ? table
.Description

List the columns in table and their data type (and other attributes).

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showColumns]
----
@@ -14,3 +14,34 @@ SHOW FUNCTIONS [ LIKE? pattern<1>? ]?
.Description

List all the SQL functions and their type. The `LIKE` clause can be used to restrict the list of names to the given pattern.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showFunctions]
----

The list of functions returned can be customized based on the pattern.

It can be an exact match:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showFunctionsLikeExact]
----

A wildcard for exactly one character:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showFunctionsLikeChar]
----

A wildcard matching zero or more characters:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showFunctionsLikeWildcard]
----

Or of course, a variation of the above:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showFunctionsWithPattern]
----
@@ -13,4 +13,36 @@ SHOW TABLES [ LIKE? pattern<1>? ]?

.Description

List the tables available to the current user and their type. The `LIKE` clause can be used to restrict the list of names to the given pattern.
List the tables available to the current user and their type.

["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showTables]
----

The `LIKE` clause can be used to restrict the list of names to the given pattern.

The pattern can be an exact match:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showTablesLikeExact]
----

Multiple chars:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showTablesLikeWildcard]
----

A single char:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showTablesLikeOneChar]
----

Or a mixture of single and multiple chars:
["source","sql",subs="attributes,callouts,macros"]
----
include-tagged::{sql-specs}/docs.csv-spec[showTablesLikeMixed]
----
@@ -428,7 +428,7 @@ final class TypeConverter {
            case SMALLINT:
            case INTEGER:
            case BIGINT:
                return Float.valueOf((float) ((Number) val).longValue());
                return Float.valueOf(((Number) val).longValue());
            case REAL:
            case FLOAT:
            case DOUBLE:

@@ -447,7 +447,7 @@ final class TypeConverter {
            case SMALLINT:
            case INTEGER:
            case BIGINT:
                return Double.valueOf((double) ((Number) val).longValue());
                return Double.valueOf(((Number) val).longValue());
            case REAL:
            case FLOAT:
            case DOUBLE:
@@ -10,6 +10,8 @@ dependencies {

  // JDBC testing dependencies
  compile project(path: xpackModule('sql:jdbc'), configuration: 'nodeps')

  compile project(path: xpackModule('sql:sql-action'))
  compile "net.sourceforge.csvjdbc:csvjdbc:1.0.34"

  // CLI testing dependencies

@@ -76,6 +78,7 @@ thirdPartyAudit.excludes = [
subprojects {
  apply plugin: 'elasticsearch.standalone-rest-test'
  dependencies {

    /* Since we're a standalone rest test we actually get transitive
     * dependencies but we don't really want them because they cause
     * all kinds of trouble with the jar hell checks. So we suppress
@@ -0,0 +1,90 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.qa.sql.nosecurity;

import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;

import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.CsvTestCase;
import org.elasticsearch.xpack.qa.sql.jdbc.DataLoader;
import org.elasticsearch.xpack.qa.sql.jdbc.JdbcAssert;
import org.elasticsearch.xpack.qa.sql.jdbc.SpecBaseIntegrationTestCase;
import org.elasticsearch.xpack.qa.sql.jdbc.SqlSpecTestCase;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.csvConnection;
import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.executeCsvQuery;
import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.specParser;

/**
 * CSV test specification for DOC examples.
 * While we could use the existing tests, their purpose is to test corner-cases,
 * which get reflected in the dataset structure.
 * The doc tests, while redundant, try to be expressive first and foremost, and sometimes
 * the dataset isn't exactly convenient.
 *
 * Also, looking around for the tests across the test files isn't trivial.
 *
 * That's not to say the two cannot be merged; however, that felt like too much of an effort
 * at this stage and, to not keep things stalling, started with this approach.
 */
public class JdbcDocCsvSpectIT extends SpecBaseIntegrationTestCase {

    private final CsvTestCase testCase;

    @Override
    protected String indexName() {
        return "library";
    }

    @Override
    protected void loadDataset(RestClient client) throws Exception {
        DataLoader.loadDocsDatasetIntoEs(client);
    }

    @ParametersFactory(shuffle = false, argumentFormatting = SqlSpecTestCase.PARAM_FORMATTING)
    public static List<Object[]> readScriptSpec() throws Exception {
        Parser parser = specParser();
        return readScriptSpec("/docs.csv-spec", parser);
    }

    public JdbcDocCsvSpectIT(String fileName, String groupName, String testName, Integer lineNumber, CsvTestCase testCase) {
        super(fileName, groupName, testName, lineNumber);
        this.testCase = testCase;
    }

    @Override
    protected void assertResults(ResultSet expected, ResultSet elastic) throws SQLException {
        Logger log = logEsResultSet() ? logger : null;

        //
        // uncomment this to print out the result set and create new CSV tests
        //
        //JdbcTestUtils.logLikeCLI(elastic, log);
        JdbcAssert.assertResultSets(expected, elastic, log, true);
    }

    @Override
    protected boolean logEsResultSet() {
        return true;
    }

    @Override
    protected final void doTest() throws Throwable {
        try (Connection csv = csvConnection(testCase.expectedResults); Connection es = esJdbc()) {

            // pass the testName as table for debugging purposes (in case the underlying reader is missing)
            ResultSet expected = executeCsvQuery(csv, testName);
            ResultSet elasticResults = executeJdbcQuery(es, testCase.query);
            assertResults(expected, elasticResults);
        }
    }
}
@@ -6,14 +6,13 @@
package org.elasticsearch.xpack.qa.sql.jdbc;

import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;

import org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.CsvTestCase;
import org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcConfiguration;

import java.sql.Connection;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.csvConnection;
import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.executeCsvQuery;

@@ -57,13 +56,4 @@ public abstract class CsvSpecTestCase extends SpecBaseIntegrationTestCase {
            assertResults(expected, elasticResults);
        }
    }

    // make sure ES uses UTC (otherwise JDBC driver picks up the JVM timezone per spec/convention)
    @Override
    protected Properties connectionProperties() {
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty(JdbcConfiguration.TIME_ZONE, "UTC");
        return connectionProperties;
    }

}
@@ -190,7 +190,7 @@ public final class CsvTestUtils {
    }

    public static class CsvTestCase {
        String query;
        String expectedResults;
        public String query;
        public String expectedResults;
    }
}
@@ -32,18 +32,28 @@ public class DataLoader {

    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            loadDatasetIntoEs(client);
            loadEmpDatasetIntoEs(client);
            Loggers.getLogger(DataLoader.class).info("Data loaded");
        }
    }

    protected static void loadDatasetIntoEs(RestClient client) throws Exception {
        loadDatasetIntoEs(client, "test_emp");
        loadDatasetIntoEs(client, "test_emp_copy");
        loadEmpDatasetIntoEs(client);
    }

    protected static void loadEmpDatasetIntoEs(RestClient client) throws Exception {
        loadEmpDatasetIntoEs(client, "test_emp");
        loadEmpDatasetIntoEs(client, "test_emp_copy");
        makeAlias(client, "test_alias", "test_emp", "test_emp_copy");
        makeAlias(client, "test_alias_emp", "test_emp", "test_emp_copy");
    }

    public static void loadDocsDatasetIntoEs(RestClient client) throws Exception {
        loadEmpDatasetIntoEs(client, "emp");
        loadLibDatasetIntoEs(client, "library");
        makeAlias(client, "employees", "emp");
    }

    private static void createString(String name, XContentBuilder builder) throws Exception {
        builder.startObject(name).field("type", "text")
            .startObject("fields")

@@ -51,7 +61,8 @@ public class DataLoader {
            .endObject()
            .endObject();
    }
    protected static void loadDatasetIntoEs(RestClient client, String index) throws Exception {

    protected static void loadEmpDatasetIntoEs(RestClient client, String index) throws Exception {
        Request request = new Request("PUT", "/" + index);
        XContentBuilder createIndex = JsonXContent.contentBuilder().startObject();
        createIndex.startObject("settings");

@@ -151,6 +162,52 @@ public class DataLoader {
        client.performRequest(request);
    }

    protected static void loadLibDatasetIntoEs(RestClient client, String index) throws Exception {
        Request request = new Request("PUT", "/" + index);
        XContentBuilder createIndex = JsonXContent.contentBuilder().startObject();
        createIndex.startObject("settings");
        {
            createIndex.field("number_of_shards", 1);
            createIndex.field("number_of_replicas", 1);
        }
        createIndex.endObject();
        createIndex.startObject("mappings");
        {
            createIndex.startObject("book");
            {
                createIndex.startObject("properties");
                {
                    createString("name", createIndex);
                    createString("author", createIndex);
                    createIndex.startObject("release_date").field("type", "date").endObject();
                    createIndex.startObject("page_count").field("type", "short").endObject();
                }
                createIndex.endObject();
            }
            createIndex.endObject();
        }
        createIndex.endObject().endObject();
        request.setJsonEntity(Strings.toString(createIndex));
        client.performRequest(request);

        request = new Request("POST", "/" + index + "/book/_bulk");
        request.addParameter("refresh", "true");
        StringBuilder bulk = new StringBuilder();
        csvToLines("library", (titles, fields) -> {
            bulk.append("{\"index\":{\"_id\":\"" + fields.get(0) + "\"}}\n");
            bulk.append("{");
            for (int f = 0; f < titles.size(); f++) {
                if (f > 0) {
                    bulk.append(",");
                }
                bulk.append('"').append(titles.get(f)).append("\":\"").append(fields.get(f)).append('"');
            }
            bulk.append("}\n");
        });
        request.setJsonEntity(bulk.toString());
        client.performRequest(request);
    }

    protected static void makeAlias(RestClient client, String aliasName, String... indices) throws Exception {
        for (String index : indices) {
            client.performRequest(new Request("POST", "/" + index + "/_alias/" + aliasName));
@@ -10,13 +10,11 @@ import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.test.junit.annotations.TestLogging;
import org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.CsvTestCase;
import org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcConfiguration;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import java.util.Properties;

import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.csvConnection;
import static org.elasticsearch.xpack.qa.sql.jdbc.CsvTestUtils.executeCsvQuery;

@@ -65,12 +63,4 @@ public abstract class DebugCsvSpec extends SpecBaseIntegrationTestCase {
            assertResults(expected, elasticResults);
        }
    }

    // make sure ES uses UTC (otherwise JDBC driver picks up the JVM timezone per spec/convention)
    @Override
    protected Properties connectionProperties() {
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty(JdbcConfiguration.TIME_ZONE, "UTC");
        return connectionProperties;
    }
}
@@ -20,10 +20,20 @@ import java.util.Locale;
import java.util.TimeZone;

import static java.lang.String.format;
import static java.sql.Types.BIGINT;
import static java.sql.Types.DOUBLE;
import static java.sql.Types.FLOAT;
import static java.sql.Types.INTEGER;
import static java.sql.Types.REAL;
import static java.sql.Types.SMALLINT;
import static java.sql.Types.TINYINT;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

/**
 * Utility class for doing JUnit-style asserts over JDBC.
 */
public class JdbcAssert {
    private static final Calendar UTC_CALENDAR = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.ROOT);

@@ -32,14 +42,29 @@ public class JdbcAssert {
    }

    public static void assertResultSets(ResultSet expected, ResultSet actual, Logger logger) throws SQLException {
        assertResultSets(expected, actual, logger, false);
    }

    /**
     * Assert the given result sets, potentially in a lenient way.
     * When lenient is specified, the type comparison of a column is widened to reach a common, compatible ground.
     * This means promoting integer types to long and floating types to double and comparing their values.
     * For example, in a non-lenient, strict case a comparison between an int and a tinyint would fail; with lenient it will succeed as
     * long as the actual value is the same.
     */
    public static void assertResultSets(ResultSet expected, ResultSet actual, Logger logger, boolean lenient) throws SQLException {
        try (ResultSet ex = expected; ResultSet ac = actual) {
            assertResultSetMetadata(ex, ac, logger);
            assertResultSetData(ex, ac, logger);
            assertResultSetMetadata(ex, ac, logger, lenient);
            assertResultSetData(ex, ac, logger, lenient);
        }
    }

    // metadata doesn't consume a ResultSet thus it shouldn't close it
    public static void assertResultSetMetadata(ResultSet expected, ResultSet actual, Logger logger) throws SQLException {
        assertResultSetMetadata(expected, actual, logger, false);
    }

    // metadata doesn't consume a ResultSet thus it shouldn't close it
    public static void assertResultSetMetadata(ResultSet expected, ResultSet actual, Logger logger, boolean lenient) throws SQLException {
        ResultSetMetaData expectedMeta = expected.getMetaData();
        ResultSetMetaData actualMeta = actual.getMetaData();

@@ -81,8 +106,8 @@ public class JdbcAssert {
            }

            // use the type not the name (timestamp with timezone returns spaces for example)
            int expectedType = expectedMeta.getColumnType(column);
            int actualType = actualMeta.getColumnType(column);
            int expectedType = typeOf(expectedMeta.getColumnType(column), lenient);
            int actualType = typeOf(actualMeta.getColumnType(column), lenient);

            // since H2 cannot use a fixed timezone, the data is stored in UTC (and thus with timezone)
            if (expectedType == Types.TIMESTAMP_WITH_TIMEZONE) {

@@ -92,6 +117,7 @@ public class JdbcAssert {
            if (expectedType == Types.FLOAT && expected instanceof CsvResultSet) {
                expectedType = Types.REAL;
            }
            // when lenient is used, an int is equivalent to a short, etc...
            assertEquals("Different column type for column [" + expectedName + "] (" + JDBCType.valueOf(expectedType) + " != "
                    + JDBCType.valueOf(actualType) + ")", expectedType, actualType);
        }

@@ -99,12 +125,16 @@ public class JdbcAssert {

    // The ResultSet is consumed and thus it should be closed
    public static void assertResultSetData(ResultSet expected, ResultSet actual, Logger logger) throws SQLException {
        assertResultSetData(expected, actual, logger, false);
    }

    public static void assertResultSetData(ResultSet expected, ResultSet actual, Logger logger, boolean lenient) throws SQLException {
        try (ResultSet ex = expected; ResultSet ac = actual) {
            doAssertResultSetData(ex, ac, logger);
            doAssertResultSetData(ex, ac, logger, lenient);
        }
    }

    private static void doAssertResultSetData(ResultSet expected, ResultSet actual, Logger logger) throws SQLException {
    private static void doAssertResultSetData(ResultSet expected, ResultSet actual, Logger logger, boolean lenient) throws SQLException {
        ResultSetMetaData metaData = expected.getMetaData();
        int columns = metaData.getColumnCount();

@@ -118,10 +148,33 @@ public class JdbcAssert {
            }

            for (int column = 1; column <= columns; column++) {
                Object expectedObject = expected.getObject(column);
                Object actualObject = actual.getObject(column);

                int type = metaData.getColumnType(column);
                Class<?> expectedColumnClass = null;
                try {
                    String columnClassName = metaData.getColumnClassName(column);

                    // fix for CSV which returns the shortName not fully-qualified name
                    if (!columnClassName.contains(".")) {
                        switch (columnClassName) {
                            case "Timestamp":
                                columnClassName = "java.sql.Timestamp";
                                break;
                            case "Int":
                                columnClassName = "java.lang.Integer";
                                break;
                            default:
                                columnClassName = "java.lang." + columnClassName;
                                break;
                        }
                    }

                    expectedColumnClass = Class.forName(columnClassName);
                } catch (ClassNotFoundException cnfe) {
                    throw new SQLException(cnfe);
                }

                Object expectedObject = expected.getObject(column);
                Object actualObject = lenient ? actual.getObject(column, expectedColumnClass) : actual.getObject(column);

                String msg = format(Locale.ROOT, "Different result for column [" + metaData.getColumnName(column) + "], "
                        + "entry [" + (count + 1) + "]");

@@ -161,4 +214,20 @@ public class JdbcAssert {
        }
    }

    /**
     * Returns the value of the given type either in a lenient fashion (widened) or strict.
     */
    private static int typeOf(int columnType, boolean lenient) {
        if (lenient) {
            // integer upcast to long
            if (columnType == TINYINT || columnType == SMALLINT || columnType == INTEGER || columnType == BIGINT) {
                return BIGINT;
            }
            if (columnType == FLOAT || columnType == REAL || columnType == DOUBLE) {
                return REAL;
            }
        }

        return columnType;
    }
}
@@ -6,10 +6,16 @@
package org.elasticsearch.xpack.qa.sql.jdbc;

import org.apache.logging.log4j.Logger;
import org.elasticsearch.xpack.sql.action.CliFormatter;
import org.elasticsearch.xpack.sql.proto.ColumnInfo;

import java.sql.JDBCType;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

public abstract class JdbcTestUtils {

@@ -96,4 +102,36 @@ public abstract class JdbcTestUtils {
        }
        return buffer;
    }

    public static void logLikeCLI(ResultSet rs, Logger logger) throws SQLException {
        ResultSetMetaData metaData = rs.getMetaData();
        int columns = metaData.getColumnCount();

        List<ColumnInfo> cols = new ArrayList<>(columns);

        for (int i = 1; i <= columns; i++) {
            cols.add(new ColumnInfo(metaData.getTableName(i), metaData.getColumnName(i), metaData.getColumnTypeName(i),
                    JDBCType.valueOf(metaData.getColumnType(i)), metaData.getColumnDisplaySize(i)));
        }

        List<List<Object>> data = new ArrayList<>();

        while (rs.next()) {
            List<Object> entry = new ArrayList<>(columns);
            for (int i = 1; i <= columns; i++) {
                Object value = rs.getObject(i);
                // timestamp to string is similar but not ISO8601 - fix it
                if (value instanceof Timestamp) {
                    Timestamp ts = (Timestamp) value;
                    value = ts.toInstant().toString();
                }
                entry.add(value);
            }
            data.add(entry);
        }

        CliFormatter formatter = new CliFormatter(cols, data);
        logger.info("\n" + formatter.formatWithHeader(cols, data));
    }
}
@@ -8,8 +8,10 @@ package org.elasticsearch.xpack.qa.sql.jdbc;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcConfiguration;
import org.junit.AfterClass;
import org.junit.Before;

@@ -28,6 +30,7 @@ import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;

/**
 * Tests that compare the Elasticsearch JDBC client to some other JDBC client

@@ -50,11 +53,19 @@ public abstract class SpecBaseIntegrationTestCase extends JdbcIntegrationTestCas

    @Before
    public void setupTestDataIfNeeded() throws Exception {
        if (client().performRequest(new Request("HEAD", "/test_emp")).getStatusLine().getStatusCode() == 404) {
            DataLoader.loadDatasetIntoEs(client());
        if (client().performRequest(new Request("HEAD", "/" + indexName())).getStatusLine().getStatusCode() == 404) {
            loadDataset(client());
        }
    }

    protected String indexName() {
        return "test_emp";
    }

    protected void loadDataset(RestClient client) throws Exception {
        DataLoader.loadEmpDatasetIntoEs(client);
    }

    @Override
    protected boolean preserveIndicesUponCompletion() {
        return true;

@@ -95,6 +106,14 @@ public abstract class SpecBaseIntegrationTestCase extends JdbcIntegrationTestCas
        return statement.executeQuery(query);
    }

    // TODO: use UTC for now until deciding on a strategy for handling date extraction
    @Override
    protected Properties connectionProperties() {
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty(JdbcConfiguration.TIME_ZONE, "UTC");
        return connectionProperties;
    }

    protected boolean logEsResultSet() {
        return false;
    }
@@ -7,14 +7,12 @@ package org.elasticsearch.xpack.qa.sql.jdbc;

import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;

import org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcConfiguration;
import org.junit.ClassRule;

import java.sql.Connection;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

/**
 * Tests comparing sql queries executed against our jdbc client

@@ -67,12 +65,4 @@ public abstract class SqlSpecTestCase extends SpecBaseIntegrationTestCase {
            assertResults(expected, elasticResults);
        }
    }

    // TODO: use UTC for now until deciding on a strategy for handling date extraction
    @Override
    protected Properties connectionProperties() {
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty(JdbcConfiguration.TIME_ZONE, "UTC");
        return connectionProperties;
    }
}
@@ -0,0 +1,639 @@
//
// CSV spec used by the docs
//

///////////////////////////////
//
// Describe table
//
///////////////////////////////

describeTable
// tag::describeTable
DESCRIBE emp;

       column       |     type
--------------------+---------------
birth_date          |TIMESTAMP
dep                 |STRUCT
dep.dep_id          |VARCHAR
dep.dep_name        |VARCHAR
dep.dep_name.keyword|VARCHAR
dep.from_date       |TIMESTAMP
dep.to_date         |TIMESTAMP
emp_no              |INTEGER
first_name          |VARCHAR
first_name.keyword  |VARCHAR
gender              |VARCHAR
hire_date           |TIMESTAMP
languages           |TINYINT
last_name           |VARCHAR
last_name.keyword   |VARCHAR
salary              |INTEGER

// end::describeTable
;

//describeTableAlias
// tag::describeTableAlias
//DESCRIBE employee;

//     column    |     type
//---------------+---------------

// end::describeTableAlias
//;

//
// Show columns
//
showColumns
// tag::showColumns
SHOW COLUMNS IN emp;

       column       |     type
--------------------+---------------
birth_date          |TIMESTAMP
dep                 |STRUCT
dep.dep_id          |VARCHAR
dep.dep_name        |VARCHAR
dep.dep_name.keyword|VARCHAR
dep.from_date       |TIMESTAMP
dep.to_date         |TIMESTAMP
emp_no              |INTEGER
first_name          |VARCHAR
first_name.keyword  |VARCHAR
gender              |VARCHAR
hire_date           |TIMESTAMP
languages           |TINYINT
last_name           |VARCHAR
last_name.keyword   |VARCHAR
salary              |INTEGER

// end::showColumns
;

//showColumnsInAlias
// tag::showColumnsInAlias
//SHOW COLUMNS FROM employee;

//     column    |     type
//---------------+---------------

// end::showColumnsInAlias
//;

///////////////////////////////
//
// Show Tables
//
///////////////////////////////

showTables
// tag::showTables
SHOW TABLES;

     name      |     type
---------------+---------------
emp            |BASE TABLE
employees      |ALIAS
library        |BASE TABLE

// end::showTables
;

showTablesLikeExact
// tag::showTablesLikeExact
SHOW TABLES LIKE 'emp';

     name      |     type
---------------+---------------
emp            |BASE TABLE

// end::showTablesLikeExact
;

showTablesLikeWildcard
// tag::showTablesLikeWildcard
SHOW TABLES LIKE 'emp%';

     name      |     type
---------------+---------------
emp            |BASE TABLE
employees      |ALIAS

// end::showTablesLikeWildcard
;

showTablesLikeOneChar
// tag::showTablesLikeOneChar
SHOW TABLES LIKE 'em_';

     name      |     type
---------------+---------------
emp            |BASE TABLE

// end::showTablesLikeOneChar
;

showTablesLikeMixed
// tag::showTablesLikeMixed
SHOW TABLES LIKE '%em_';

     name      |     type
---------------+---------------
emp            |BASE TABLE

// end::showTablesLikeMixed
;

///////////////////////////////
//
// Show Functions
//
///////////////////////////////

showFunctions
// tag::showFunctions
SHOW FUNCTIONS;

      name      |     type
----------------+---------------
AVG             |AGGREGATE
COUNT           |AGGREGATE
MAX             |AGGREGATE
MIN             |AGGREGATE
SUM             |AGGREGATE
STDDEV_POP      |AGGREGATE
VAR_POP         |AGGREGATE
PERCENTILE      |AGGREGATE
PERCENTILE_RANK |AGGREGATE
SUM_OF_SQUARES  |AGGREGATE
SKEWNESS        |AGGREGATE
KURTOSIS        |AGGREGATE
DAY_OF_MONTH    |SCALAR
DAY             |SCALAR
DOM             |SCALAR
DAY_OF_WEEK     |SCALAR
DOW             |SCALAR
DAY_OF_YEAR     |SCALAR
DOY             |SCALAR
HOUR_OF_DAY     |SCALAR
HOUR            |SCALAR
MINUTE_OF_DAY   |SCALAR
MINUTE_OF_HOUR  |SCALAR
MINUTE          |SCALAR
SECOND_OF_MINUTE|SCALAR
SECOND          |SCALAR
MONTH_OF_YEAR   |SCALAR
MONTH           |SCALAR
YEAR            |SCALAR
WEEK_OF_YEAR    |SCALAR
WEEK            |SCALAR
ABS             |SCALAR
ACOS            |SCALAR
ASIN            |SCALAR
ATAN            |SCALAR
ATAN2           |SCALAR
CBRT            |SCALAR
CEIL            |SCALAR
CEILING         |SCALAR
COS             |SCALAR
COSH            |SCALAR
COT             |SCALAR
DEGREES         |SCALAR
E               |SCALAR
EXP             |SCALAR
EXPM1           |SCALAR
FLOOR           |SCALAR
LOG             |SCALAR
LOG10           |SCALAR
MOD             |SCALAR
PI              |SCALAR
POWER           |SCALAR
RADIANS         |SCALAR
RANDOM          |SCALAR
RAND            |SCALAR
ROUND           |SCALAR
SIGN            |SCALAR
SIGNUM          |SCALAR
SIN             |SCALAR
SINH            |SCALAR
SQRT            |SCALAR
TAN             |SCALAR
SCORE           |SCORE

// end::showFunctions
;

showFunctionsLikeExact
// tag::showFunctionsLikeExact
SHOW FUNCTIONS LIKE 'ABS';

     name      |     type
---------------+---------------
ABS            |SCALAR

// end::showFunctionsLikeExact
;

showFunctionsLikeWildcard
// tag::showFunctionsLikeWildcard
SHOW FUNCTIONS LIKE 'A%';

     name      |     type
---------------+---------------
AVG            |AGGREGATE
ABS            |SCALAR
ACOS           |SCALAR
ASIN           |SCALAR
ATAN           |SCALAR
ATAN2          |SCALAR
// end::showFunctionsLikeWildcard
;

showFunctionsLikeChar
// tag::showFunctionsLikeChar
SHOW FUNCTIONS LIKE 'A__';

     name      |     type
---------------+---------------
AVG            |AGGREGATE
ABS            |SCALAR
// end::showFunctionsLikeChar
;

showFunctionsWithPattern
// tag::showFunctionsWithPattern
SHOW FUNCTIONS '%DAY%';

     name      |     type
---------------+---------------
DAY_OF_MONTH   |SCALAR
DAY            |SCALAR
DAY_OF_WEEK    |SCALAR
DAY_OF_YEAR    |SCALAR
HOUR_OF_DAY    |SCALAR
MINUTE_OF_DAY  |SCALAR

// end::showFunctionsWithPattern
;

///////////////////////////////
//
// Select
//
///////////////////////////////

selectColumnAlias
// tag::selectColumnAlias
SELECT 1 + 1 AS result

    result
---------------
2

// end::selectColumnAlias
;

selectInline
// tag::selectInline
SELECT 1 + 1;

    (1 + 1)
---------------
2

// end::selectInline
;

selectColumn
// tag::selectColumn
SELECT emp_no FROM emp LIMIT 1;

    emp_no
---------------
10001

// end::selectColumn
;

selectQualifiedColumn
// tag::selectQualifiedColumn
SELECT emp.emp_no FROM emp LIMIT 1;

    emp_no
---------------
10001

// end::selectQualifiedColumn
;

wildcardWithOrder
// tag::wildcardWithOrder
SELECT * FROM emp LIMIT 1;

     birth_date     |    emp_no     |  first_name   |    gender     |     hire_date      |   languages   |   last_name   |    salary
--------------------+---------------+---------------+---------------+--------------------+---------------+---------------+---------------
1953-09-02T00:00:00Z|10001          |Georgi         |M              |1986-06-26T00:00:00Z|2              |Facello        |57305

// end::wildcardWithOrder
;

fromTable
// tag::fromTable
SELECT * FROM emp LIMIT 1;

     birth_date     |    emp_no     |  first_name   |    gender     |     hire_date      |   languages   |   last_name   |    salary
--------------------+---------------+---------------+---------------+--------------------+---------------+---------------+---------------
1953-09-02T00:00:00Z|10001          |Georgi         |M              |1986-06-26T00:00:00Z|2              |Facello        |57305

// end::fromTable
;

fromTableQuoted
// tag::fromTableQuoted
SELECT * FROM "emp" LIMIT 1;

     birth_date     |    emp_no     |  first_name   |    gender     |     hire_date      |   languages   |   last_name   |    salary
--------------------+---------------+---------------+---------------+--------------------+---------------+---------------+---------------
1953-09-02T00:00:00Z|10001          |Georgi         |M              |1986-06-26T00:00:00Z|2              |Facello        |57305

// end::fromTableQuoted
;

fromTablePatternQuoted
// tag::fromTablePatternQuoted
SELECT emp_no FROM "e*p" LIMIT 1;

    emp_no
---------------
10001

// end::fromTablePatternQuoted
;

fromTableAlias
// tag::fromTableAlias
SELECT e.emp_no FROM emp AS e LIMIT 1;

   emp_no
-------------
10001

// end::fromTableAlias
;

basicWhere
// tag::basicWhere
SELECT last_name FROM emp WHERE emp_no = 10001;

   last_name
---------------
Facello

// end::basicWhere
;

///////////////////////////////
//
// Group By
//
///////////////////////////////

groupByColumn
// tag::groupByColumn
SELECT gender AS g FROM emp GROUP BY gender;

       g
---------------
F
M

// end::groupByColumn
;

groupByOrdinal
// tag::groupByOrdinal
SELECT gender FROM emp GROUP BY 1;

    gender
---------------
F
M

// end::groupByOrdinal
;

groupByAlias
// tag::groupByAlias
SELECT gender AS g FROM emp GROUP BY g;

       g
---------------
F
M

// end::groupByAlias
;

groupByExpression
// tag::groupByExpression
SELECT languages + 1 AS l FROM emp GROUP BY l;

       l
---------------
2
3
4
5
6

// end::groupByExpression
;

groupByAndAgg
// tag::groupByAndAgg
SELECT gender AS g, COUNT(*) AS c FROM emp GROUP BY gender;

       g       |       c
---------------+---------------
F              |37
M              |63

// end::groupByAndAgg
;

groupByAndAggExpression
// tag::groupByAndAggExpression
SELECT gender AS g, ROUND(MIN(salary) / 100) AS salary FROM emp GROUP BY gender;

       g       |    salary
---------------+---------------
F              |260
M              |253

// end::groupByAndAggExpression
;

groupByAndMultipleAggs
// tag::groupByAndMultipleAggs
SELECT gender AS g, KURTOSIS(salary) AS k, SKEWNESS(salary) AS s FROM emp GROUP BY gender;

       g       |        k         |         s
---------------+------------------+-------------------
F              |1.8427808415250482|0.04517149340491813
M              |2.259327644285826 |0.40268950715550333

// end::groupByAndMultipleAggs
;

groupByImplicitCount
// tag::groupByImplicitCount
SELECT COUNT(*) AS count FROM emp;

     count
---------------
100

// end::groupByImplicitCount
;

///////////////////////////////
//
// Having
//
///////////////////////////////

groupByHaving
// tag::groupByHaving
SELECT languages AS l, COUNT(*) AS c FROM emp GROUP BY l HAVING c BETWEEN 15 AND 20;

       l       |       c
---------------+---------------
1              |16
2              |20
4              |18

// end::groupByHaving
;

groupByHavingMultiple
// tag::groupByHavingMultiple
SELECT MIN(salary) AS min, MAX(salary) AS max, MAX(salary) - MIN(salary) AS diff FROM emp GROUP BY languages HAVING diff - max % min > 0 AND AVG(salary) > 30000;

      min      |      max      |     diff
---------------+---------------+---------------
25976          |73717          |47741
29175          |73578          |44403
26436          |74999          |48563
27215          |74572          |47357
25324          |73851          |48527

// end::groupByHavingMultiple
;

groupByImplicitMultipleAggs
// tag::groupByImplicitMultipleAggs
SELECT MIN(salary) AS min, MAX(salary) AS max, AVG(salary) AS avg, COUNT(*) AS count FROM emp;

      min      |      max      |      avg      |     count
---------------+---------------+---------------+---------------
25324          |74999          |48248          |100

// end::groupByImplicitMultipleAggs
;

groupByHavingImplicitMatch
// tag::groupByHavingImplicitMatch
SELECT MIN(salary) AS min, MAX(salary) AS max FROM emp HAVING min > 25000;

      min      |      max
---------------+---------------
25324          |74999

// end::groupByHavingImplicitMatch
;

//groupByHavingImplicitNoMatch
// tag::groupByHavingImplicitNoMatch
//SELECT MIN(salary) AS min, MAX(salary) AS max FROM emp HAVING max > 75000;

//      min      |      max
//---------------+---------------

// end::groupByHavingImplicitNoMatch
//;

///////////////////////////////
//
// Order by
//
///////////////////////////////

orderByBasic
// tag::orderByBasic
SELECT * FROM library ORDER BY page_count DESC LIMIT 5;

     author      |        name        |  page_count   |    release_date
-----------------+--------------------+---------------+--------------------
Peter F. Hamilton|Pandora's Star      |768            |2004-03-02T00:00:00Z
Vernor Vinge     |A Fire Upon the Deep|613            |1992-06-01T00:00:00Z
Frank Herbert    |Dune                |604            |1965-06-01T00:00:00Z
Alastair Reynolds|Revelation Space    |585            |2000-03-15T00:00:00Z
James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00Z

// end::orderByBasic
;

orderByScore
// tag::orderByScore
SELECT SCORE(), * FROM library WHERE match(name, 'dune') ORDER BY SCORE() DESC;

    SCORE()    |    author     |       name        |  page_count   |    release_date
---------------+---------------+-------------------+---------------+--------------------
2.288635       |Frank Herbert  |Dune               |604            |1965-06-01T00:00:00Z
1.8893257      |Frank Herbert  |Dune Messiah       |331            |1969-10-15T00:00:00Z
1.6086555      |Frank Herbert  |Children of Dune   |408            |1976-04-21T00:00:00Z
1.4005898      |Frank Herbert  |God Emperor of Dune|454            |1981-05-28T00:00:00Z

// end::orderByScore
;

orderByScoreWithMatch
// tag::orderByScoreWithMatch
SELECT SCORE(), * FROM library WHERE match(name, 'dune') ORDER BY page_count DESC;

    SCORE()    |    author     |       name        |  page_count   |    release_date
---------------+---------------+-------------------+---------------+--------------------
2.288635       |Frank Herbert  |Dune               |604            |1965-06-01T00:00:00Z
1.4005898      |Frank Herbert  |God Emperor of Dune|454            |1981-05-28T00:00:00Z
1.6086555      |Frank Herbert  |Children of Dune   |408            |1976-04-21T00:00:00Z
1.8893257      |Frank Herbert  |Dune Messiah       |331            |1969-10-15T00:00:00Z

// end::orderByScoreWithMatch
;

///////////////////////////////
//
// Limit
//
///////////////////////////////

limitBasic
// tag::limitBasic
SELECT first_name, last_name, emp_no FROM emp LIMIT 1;

  first_name   |   last_name   |    emp_no
---------------+---------------+---------------
Georgi         |Facello        |10001

// end::limitBasic
;
@@ -0,0 +1,25 @@
name,author,release_date,page_count
Leviathan Wakes,James S.A. Corey,2011-06-02T00:00:00Z,561
Hyperion,Dan Simmons,1989-05-26T00:00:00Z,482
Dune,Frank Herbert,1965-06-01T00:00:00Z,604
Dune Messiah,Frank Herbert,1969-10-15T00:00:00Z,331
Children of Dune,Frank Herbert,1976-04-21T00:00:00Z,408
God Emperor of Dune,Frank Herbert,1981-05-28T00:00:00Z,454
Consider Phlebas,Iain M. Banks,1987-04-23T00:00:00Z,471
Pandora's Star,Peter F. Hamilton,2004-03-02T00:00:00Z,768
Revelation Space,Alastair Reynolds,2000-03-15T00:00:00Z,585
A Fire Upon the Deep,Vernor Vinge,1992-06-01T00:00:00Z,613
Ender's Game,Orson Scott Card,1985-06-01T00:00:00Z,324
1984,George Orwell,1985-06-01T00:00:00Z,328
Fahrenheit 451,Ray Bradbury,1953-10-15T00:00:00Z,227
Brave New World,Aldous Huxley,1932-06-01T00:00:00Z,268
Foundation,Isaac Asimov,1951-06-01T00:00:00Z,224
The Giver,Lois Lowry,1993-04-26T00:00:00Z,208
Slaughterhouse-Five,Kurt Vonnegut,1969-06-01T00:00:00Z,275
The Hitchhiker's Guide to the Galaxy,Douglas Adams,1979-10-12T00:00:00Z,180
Snow Crash,Neal Stephenson,1992-06-01T00:00:00Z,470
Neuromancer,William Gibson,1984-07-01T00:00:00Z,271
The Handmaid's Tale,Margaret Atwood,1985-06-01T00:00:00Z,311
Starship Troopers,Robert A. Heinlein,1959-12-01T00:00:00Z,335
The Left Hand of Darkness,Ursula K. Le Guin,1969-06-01T00:00:00Z,304
The Moon is a Harsh Mistress,Robert A. Heinlein,1966-04-01T00:00:00Z,288
@@ -3,9 +3,7 @@
//

wildcardWithOrder
// tag::wildcardWithOrder
SELECT * FROM test_emp ORDER BY emp_no;
// end::wildcardWithOrder
column
SELECT last_name FROM "test_emp" ORDER BY emp_no;
columnWithAlias