Setting useNativeQueryExplain to true (#12936)

* Setting useNativeQueryExplain to true

* Update docs/querying/sql-query-context.md

Co-authored-by: Santosh Pingale <pingalesantosh@gmail.com>

* Fixing tests

* Fixing broken tests

Co-authored-by: Santosh Pingale <pingalesantosh@gmail.com>
Karan Kumar 2022-08-24 17:39:55 +05:30 committed by GitHub
parent cfed036091
commit f7c6316992
9 changed files with 37 additions and 22 deletions


@@ -1878,7 +1878,7 @@ The Druid SQL server is configured through the following properties on the Broker
 |`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST API will be invoked when broker needs published segments info.|false|
 |`druid.sql.planner.metadataSegmentPollPeriod`|How often to poll coordinator for published segments list if `druid.sql.planner.metadataSegmentCacheEnable` is set to true. Poll period is in milliseconds. |60000|
 |`druid.sql.planner.authorizeSystemTablesDirectly`|If true, Druid authorizes queries against any of the system schema tables (`sys` in SQL) as `SYSTEM_TABLE` resources which require `READ` access, in addition to permissions based content filtering.|false|
-|`druid.sql.planner.useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. It can be overridden per query with `useNativeQueryExplain` context key.|false|
+|`druid.sql.planner.useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. It can be overridden per query with `useNativeQueryExplain` context key.|true|
 |`druid.sql.planner.maxNumericInFilters`|Max limit for the amount of numeric values that can be compared for a string type dimension when the entire SQL WHERE clause of a query translates to an [OR](../querying/filters.md#or) of [Bound filter](../querying/filters.md#bound-filter). By default, Druid does not restrict the amount of numeric Bound Filters on String columns, although this situation may block other queries from running. Set this property to a smaller value to prevent Druid from running queries that have prohibitively long segment processing times. The optimal limit requires some trial and error; we recommend starting with 100. Users who submit a query that exceeds the limit of `maxNumericInFilters` should instead rewrite their queries to use strings in the `WHERE` clause instead of numbers. For example, `WHERE someString IN (123, 456)`. If this value is disabled, `maxNumericInFilters` set through query context is ignored.|`-1` (disabled)|
 |`druid.sql.approxCountDistinct.function`|Implementation to use for the [`APPROX_COUNT_DISTINCT` function](../querying/sql-aggregations.md). Without extensions loaded, the only valid value is `APPROX_COUNT_DISTINCT_BUILTIN` (a HyperLogLog, or HLL, based implementation). If the [DataSketches extension](../development/extensions-core/datasketches-extension.md) is loaded, this can also be `APPROX_COUNT_DISTINCT_DS_HLL` (alternative HLL implementation) or `APPROX_COUNT_DISTINCT_DS_THETA`.<br><br>Theta sketches use significantly more memory than HLL sketches, so you should prefer one of the two HLL implementations.|APPROX_COUNT_DISTINCT_BUILTIN|
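Operators who want to keep the pre-change Calcite-style explain plans cluster-wide can pin the Broker property explicitly. A minimal sketch of a Broker `runtime.properties` entry (the property name comes from the table above; where the file lives depends on your deployment):

```properties
# Keep the legacy Calcite explain plan now that the shipped default is true
druid.sql.planner.useNativeQueryExplain=false
```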


@@ -42,7 +42,7 @@ Configure Druid SQL query planning using the parameters in the table below.
 |`useGroupingSetForExactDistinct`|Whether to use grouping sets to execute queries with multiple exact distinct aggregations.|druid.sql.planner.useGroupingSetForExactDistinct on the Broker (default: false)|
 |`useApproximateTopN`|Whether to use approximate [TopN queries](topnquery.md) when a SQL query could be expressed as such. If false, exact [GroupBy queries](groupbyquery.md) will be used instead.|druid.sql.planner.useApproximateTopN on the Broker (default: true)|
 |`enableTimeBoundaryPlanning`|If true, SQL queries will get converted to TimeBoundary queries wherever possible. TimeBoundary queries are very efficient for min-max calculation on __time column in a datasource |druid.query.default.context.enableTimeBoundaryPlanning on the Broker (default: false)|
-|`useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite.|druid.sql.planner.useNativeQueryExplain on the Broker (default: False)|
+|`useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite.|`druid.sql.planner.useNativeQueryExplain` on the Broker (default: true)|
 
 ## Setting the query context
 The query context parameters can be specified as a "context" object in the [JSON API](sql-api.md) or as a [JDBC connection properties object](sql-jdbc.md).
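Per the table above, a `useNativeQueryExplain` entry in the query context takes precedence over the Broker-level default. A self-contained sketch of that precedence rule, using illustrative names (`ContextOverrideSketch` and its method are not Druid's actual code); note that context values can arrive either as booleans or as strings such as `"false"`, as the `SqlResourceTest` change below exercises:

```java
import java.util.Map;

public class ContextOverrideSketch
{
  // Broker-level default after this change (druid.sql.planner.useNativeQueryExplain)
  static final boolean DEFAULT_USE_NATIVE_QUERY_EXPLAIN = true;

  /**
   * Resolve the effective value: a "useNativeQueryExplain" context entry wins;
   * otherwise fall back to the Broker default. Accepts Boolean or String values.
   */
  static boolean useNativeQueryExplain(Map<String, Object> queryContext)
  {
    Object override = queryContext.get("useNativeQueryExplain");
    if (override == null) {
      return DEFAULT_USE_NATIVE_QUERY_EXPLAIN;
    }
    if (override instanceof Boolean) {
      return (Boolean) override;
    }
    return Boolean.parseBoolean(override.toString());
  }

  public static void main(String[] args)
  {
    System.out.println(useNativeQueryExplain(Map.of()));                                  // true (default)
    System.out.println(useNativeQueryExplain(Map.of("useNativeQueryExplain", "false"))); // false (override)
  }
}
```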


@@ -75,7 +75,7 @@ public class PlannerConfig
   private boolean authorizeSystemTablesDirectly = false;
 
   @JsonProperty
-  private boolean useNativeQueryExplain = false;
+  private boolean useNativeQueryExplain = true;
 
   @JsonProperty
   private boolean forceExpressionVirtualColumns = false;
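The test updates in this commit replace `new PlannerConfig()` with `PlannerConfig.builder().useNativeQueryExplain(false).build()` wherever the legacy Calcite plan is still asserted. A self-contained sketch of that builder-with-defaults pattern (illustrative only; the real `PlannerConfig` has many more fields):

```java
public class PlannerConfigSketch
{
  private final boolean useNativeQueryExplain;

  private PlannerConfigSketch(boolean useNativeQueryExplain)
  {
    this.useNativeQueryExplain = useNativeQueryExplain;
  }

  public boolean isUseNativeQueryExplain()
  {
    return useNativeQueryExplain;
  }

  public static Builder builder()
  {
    return new Builder();
  }

  public static class Builder
  {
    // Mirrors the field default flipped by this commit: true unless overridden.
    private boolean useNativeQueryExplain = true;

    public Builder useNativeQueryExplain(boolean value)
    {
      this.useNativeQueryExplain = value;
      return this;
    }

    public PlannerConfigSketch build()
    {
      return new PlannerConfigSketch(useNativeQueryExplain);
    }
  }
}
```

Tests that want the legacy plan opt out per instance instead of relying on the constructor default, which is what makes the default flip safe to land.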


@@ -397,8 +397,9 @@ public class DruidAvaticaHandlerTest extends CalciteTestBase
         ImmutableList.of(
             ImmutableMap.of(
                 "PLAN",
-                StringUtils.format("DruidQueryRel(query=[{\"queryType\":\"timeseries\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"granularity\":{\"type\":\"all\"},\"aggregations\":[{\"type\":\"count\",\"name\":\"a0\"}],\"context\":{\"sqlQueryId\":\"%s\",\"sqlStringifyArrays\":false,\"sqlTimeZone\":\"America/Los_Angeles\"}}], signature=[{a0:LONG}])\n",
-                    DUMMY_SQL_QUERY_ID
+                StringUtils.format(
+                    "[{\"query\":{\"queryType\":\"timeseries\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"granularity\":{\"type\":\"all\"},\"aggregations\":[{\"type\":\"count\",\"name\":\"a0\"}],\"context\":{\"sqlQueryId\":\"%s\",\"sqlStringifyArrays\":false,\"sqlTimeZone\":\"America/Los_Angeles\"}},\"signature\":[{\"name\":\"a0\",\"type\":\"LONG\"}]}]",
+                    DUMMY_SQL_QUERY_ID
                 ),
                 "RESOURCES",
                 "[{\"name\":\"foo\",\"type\":\"DATASOURCE\"}]"


@@ -53,7 +53,9 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
     final String resources = "[{\"name\":\"aview\",\"type\":\"VIEW\"}]";
     testQuery(
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         query,
+        CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
         ImmutableList.of(
             new Object[]{legacyExplanation, resources}
@@ -127,15 +129,15 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
     testQuery(
         query,
         ImmutableList.of(),
-        ImmutableList.of(new Object[]{legacyExplanation, resources})
+        ImmutableList.of(new Object[]{explanation, resources})
     );
     testQuery(
-        PLANNER_CONFIG_NATIVE_QUERY_EXPLAIN,
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         query,
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
-        ImmutableList.of(new Object[]{explanation, resources})
+        ImmutableList.of(new Object[]{legacyExplanation, resources})
     );
   }
@@ -145,15 +147,14 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
   public void testExplainSelectStarWithOverrides()
   {
     Map<String, Object> useRegularExplainContext = new HashMap<>(QUERY_CONTEXT_DEFAULT);
-    useRegularExplainContext.put(PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN, false);
-    Map<String, Object> useNativeQueryExplain = new HashMap<>(QUERY_CONTEXT_DEFAULT);
-    useNativeQueryExplain.put(PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN, true);
+    useRegularExplainContext.put(PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN, true);
+    Map<String, Object> legacyExplainContext = new HashMap<>(QUERY_CONTEXT_DEFAULT);
+    legacyExplainContext.put(PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN, false);
     // Skip vectorization since otherwise the "context" will change for each subtest.
     skipVectorize();
-    String legacyExplanation = "DruidQueryRel(query=[{\"queryType\":\"scan\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"resultFormat\":\"compactedList\",\"columns\":[\"__time\",\"cnt\",\"dim1\",\"dim2\",\"dim3\",\"m1\",\"m2\",\"unique_dim1\"],\"legacy\":false,\"context\":{\"defaultTimeout\":300000,\"maxScatterGatherBytes\":9223372036854775807,\"sqlCurrentTimestamp\":\"2000-01-01T00:00:00Z\",\"sqlQueryId\":\"dummy\",\"vectorize\":\"false\",\"vectorizeVirtualColumns\":\"false\"},\"granularity\":{\"type\":\"all\"}}], signature=[{__time:LONG, dim1:STRING, dim2:STRING, dim3:STRING, cnt:LONG, m1:FLOAT, m2:DOUBLE, unique_dim1:COMPLEX<hyperUnique>}])\n";
     String legacyExplanationWithContext = "DruidQueryRel(query=[{\"queryType\":\"scan\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"resultFormat\":\"compactedList\",\"columns\":[\"__time\",\"cnt\",\"dim1\",\"dim2\",\"dim3\",\"m1\",\"m2\",\"unique_dim1\"],\"legacy\":false,\"context\":{\"defaultTimeout\":300000,\"maxScatterGatherBytes\":9223372036854775807,\"sqlCurrentTimestamp\":\"2000-01-01T00:00:00Z\",\"sqlQueryId\":\"dummy\",\"useNativeQueryExplain\":false,\"vectorize\":\"false\",\"vectorizeVirtualColumns\":\"false\"},\"granularity\":{\"type\":\"all\"}}], signature=[{__time:LONG, dim1:STRING, dim2:STRING, dim3:STRING, cnt:LONG, m1:FLOAT, m2:DOUBLE, unique_dim1:COMPLEX<hyperUnique>}])\n";
     String explanation = "[{"
         + "\"query\":{\"queryType\":\"scan\","
@@ -182,14 +183,14 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
     String resources = "[{\"name\":\"foo\",\"type\":\"DATASOURCE\"}]";
     // Test when default config and no overrides
-    testQuery(sql, ImmutableList.of(), ImmutableList.of(new Object[]{legacyExplanation, resources}));
+    testQuery(sql, ImmutableList.of(), ImmutableList.of(new Object[]{explanation, resources}));
     // Test when default config and useNativeQueryExplain is overridden in the context
     testQuery(
         sql,
-        useNativeQueryExplain,
+        legacyExplainContext,
         ImmutableList.of(),
-        ImmutableList.of(new Object[]{explanationWithContext, resources})
+        ImmutableList.of(new Object[]{legacyExplanationWithContext, resources})
     );
     // Test when useNativeQueryExplain enabled by default and no overrides
@@ -208,7 +209,7 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
         sql,
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
-        ImmutableList.of(new Object[]{legacyExplanationWithContext, resources})
+        ImmutableList.of(new Object[]{explanationWithContext, resources})
     );
   }
@@ -242,7 +243,9 @@ public class CalciteExplainQueryTest extends BaseCalciteQueryTest
     final String resources = "[{\"name\":\"foo\",\"type\":\"DATASOURCE\"}]";
     testQuery(
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         query,
+        CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
         ImmutableList.of(
             new Object[]{legacyExplanation, resources}


@@ -611,7 +611,7 @@ public class CalciteInsertDmlTest extends CalciteIngestionDmlTest
     // Use testQuery for EXPLAIN (not testIngestionQuery).
     testQuery(
-        new PlannerConfig(),
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         ImmutableMap.of("sqlQueryId", "dummy"),
         Collections.emptyList(),
         StringUtils.format(


@@ -619,7 +619,7 @@ public class CalciteReplaceDmlTest extends CalciteIngestionDmlTest
     // Use testQuery for EXPLAIN (not testIngestionQuery).
     testQuery(
-        new PlannerConfig(),
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         ImmutableMap.of("sqlQueryId", "dummy"),
         Collections.emptyList(),
         StringUtils.format(


@@ -45,6 +45,7 @@ import org.apache.druid.segment.column.RowSignature;
 import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
 import org.apache.druid.sql.SqlPlanningException;
 import org.apache.druid.sql.calcite.filtration.Filtration;
+import org.apache.druid.sql.calcite.planner.PlannerConfig;
 import org.apache.druid.sql.calcite.planner.PlannerContext;
 import org.apache.druid.sql.calcite.util.CalciteTests;
 import org.joda.time.DateTime;
@@ -545,7 +546,9 @@ public class CalciteSelectQueryTest extends BaseCalciteQueryTest
     final String resources = "[]";
     testQuery(
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         query,
+        CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
         ImmutableList.of(
             new Object[]{
@@ -1286,7 +1289,9 @@ public class CalciteSelectQueryTest extends BaseCalciteQueryTest
     final String resources = "[{\"name\":\"foo\",\"type\":\"DATASOURCE\"}]";
     testQuery(
+        PlannerConfig.builder().useNativeQueryExplain(false).build(),
         query,
+        CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(),
         ImmutableList.of(
             new Object[]{


@@ -109,7 +109,6 @@ import javax.ws.rs.core.MultivaluedMap;
 import javax.ws.rs.core.Response;
 import javax.ws.rs.core.Response.Status;
 import javax.ws.rs.core.StreamingOutput;
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
@@ -1185,7 +1184,12 @@ public class SqlResourceTest extends CalciteTestBase
   @Test
   public void testExplainCountStar() throws Exception
   {
-    Map<String, Object> queryContext = ImmutableMap.of(PlannerContext.CTX_SQL_QUERY_ID, DUMMY_SQL_QUERY_ID);
+    Map<String, Object> queryContext = ImmutableMap.of(
+        PlannerContext.CTX_SQL_QUERY_ID,
+        DUMMY_SQL_QUERY_ID,
+        PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN,
+        "false"
+    );
     final List<Map<String, Object>> rows = doPost(
         new SqlQuery(
             "EXPLAIN PLAN FOR SELECT COUNT(*) AS cnt FROM druid.foo",
@@ -1203,8 +1207,10 @@ public class SqlResourceTest extends CalciteTestBase
         ImmutableMap.<String, Object>of(
             "PLAN",
             StringUtils.format(
-                "DruidQueryRel(query=[{\"queryType\":\"timeseries\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"granularity\":{\"type\":\"all\"},\"aggregations\":[{\"type\":\"count\",\"name\":\"a0\"}],\"context\":{\"sqlQueryId\":\"%s\"}}], signature=[{a0:LONG}])\n",
-                DUMMY_SQL_QUERY_ID
+                "DruidQueryRel(query=[{\"queryType\":\"timeseries\",\"dataSource\":{\"type\":\"table\",\"name\":\"foo\"},\"intervals\":{\"type\":\"intervals\",\"intervals\":[\"-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z\"]},\"granularity\":{\"type\":\"all\"},\"aggregations\":[{\"type\":\"count\",\"name\":\"a0\"}],\"context\":{\"sqlQueryId\":\"%s\",\"%s\":\"%s\"}}], signature=[{a0:LONG}])\n",
+                DUMMY_SQL_QUERY_ID,
+                PlannerConfig.CTX_KEY_USE_NATIVE_QUERY_EXPLAIN,
+                "false"
             ),
             "RESOURCES",
             "[{\"name\":\"foo\",\"type\":\"DATASOURCE\"}]"