Deprecate IntervalChunkingQueryRunner (#6591)

* Deprecate IntervalChunkingQueryRunner

* add doc

* deprecate metric

* fix doc
Jihoon Son 2018-11-13 14:33:27 -08:00 committed by Fangjin Yang
parent 80173b5d29
commit cdae2fe7b5
14 changed files with 21 additions and 6 deletions


@@ -1222,7 +1222,7 @@ Druid broker can optionally retry queries internally for transient errors.
#### Processing
-The broker uses processing configs for nested groupBy queries. And, optionally, Long-interval queries (of any type) can be broken into shorter interval queries and processed in parallel inside this thread pool. For more details, see "chunkPeriod" in [Query Context](../querying/query-context.html) doc.
+The broker uses processing configs for nested groupBy queries. And, if you use groupBy v1, long-interval queries (of any type) can be broken into shorter interval queries and processed in parallel inside this thread pool. For more details, see "chunkPeriod" in [Query Context](../querying/query-context.html) doc.
|Property|Description|Default|
|--------|-----------|-------|
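Per-query chunking as described above is controlled through the "chunkPeriod" query context parameter. A minimal sketch of a timeseries query that would chunk a one-year interval into month-long sub-queries; the dataSource and aggregator names here are hypothetical:

```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "granularity": "day",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "intervals": ["2018-01-01/2019-01-01"],
  "context": { "chunkPeriod": "P1M" }
}
```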


@@ -53,7 +53,7 @@ Available Metrics
|`query/node/bytes`|number of bytes returned from querying individual historical/realtime nodes.|id, status, server.| |
|`query/node/ttfb`|Time to first byte. Milliseconds elapsed until broker starts receiving the response from individual historical/realtime nodes.|id, status, server.|< 1s|
|`query/node/backpressure`|Milliseconds that the channel to this node has spent suspended due to backpressure.|id, status, server.| |
-|`query/intervalChunk/time`|Only emitted if interval chunking is enabled. Milliseconds required to query an interval chunk.|id, status, chunkInterval (if interval chunking is enabled).|< 1s|
+|`query/intervalChunk/time`|Only emitted if interval chunking is enabled. Milliseconds required to query an interval chunk. This metric is deprecated and will be removed in the future because interval chunking is deprecated. See [Query Context](../querying/query-context.html).|id, status, chunkInterval (if interval chunking is enabled).|< 1s|
|`query/count`|number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||


@@ -37,7 +37,7 @@ The query context is used for various query configuration parameters. The follow
|populateResultLevelCache | `false` | Flag indicating whether to save the results of the query to the result level cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache to determine whether or not to save the results of this query to the query cache |
|bySegment | `false` | Return "by segment" results. Primarily used for debugging, setting it to `true` returns results associated with the data segment they came from |
|finalize | `true` | Flag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the `hyperUnique` aggregator will return the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to `false` |
-|chunkPeriod | `P0D` (off) | At the broker node level, long interval queries (of any type) may be broken into shorter interval queries to parallelize merging more than normal. Broken up queries will use a larger share of cluster resources, but may be able to complete faster as a result. Use ISO 8601 periods. For example, if this property is set to `P1M` (one month), then a query covering a year would be broken into 12 smaller queries. The broker uses its query processing executor service to initiate processing for query chunks, so make sure "druid.processing.numThreads" is configured appropriately on the broker. [groupBy queries](groupbyquery.html) do not support chunkPeriod by default, although they do if using the legacy "v1" engine. |
+|chunkPeriod | `P0D` (off) | At the broker node level, long interval queries (of any type) may be broken into shorter interval queries to parallelize merging more than normal. Broken-up queries will use a larger share of cluster resources, but, if you use groupBy "v1", they may be able to complete faster as a result. Use ISO 8601 periods. For example, if this property is set to `P1M` (one month), then a query covering a year would be broken into 12 smaller queries. The broker uses its query processing executor service to initiate processing for query chunks, so make sure "druid.processing.numThreads" is configured appropriately on the broker. [groupBy queries](groupbyquery.html) do not support chunkPeriod by default, although they do if using the legacy "v1" engine. This context parameter is deprecated since it is only useful for groupBy "v1", and will be removed in future releases.|
|maxScatterGatherBytes| `druid.server.http.maxScatterGatherBytes` | Maximum number of bytes gathered from data nodes such as historicals and realtime processes to execute a query. This parameter can be used to further reduce `maxScatterGatherBytes` limit at query time. See [broker configuration](../configuration/index.html#broker) for more details.|
|maxQueuedBytes | `druid.broker.http.maxQueuedBytes` | Maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to `maxScatterGatherBytes`, except unlike that configuration, this one will trigger backpressure rather than query failure. Zero means disabled.|
|serializeDateTimeAsLong| `false` | If true, DateTime is serialized as long in the result returned by broker and the data transportation between broker and compute node|


@@ -225,6 +225,7 @@ public class DefaultQueryMetrics<QueryType extends Query<?>> implements QueryMet
return reportMillisTimeMetric("query/segmentAndCache/time", timeNs);
}
+@Deprecated
@Override
public QueryMetrics<QueryType> reportIntervalChunkTime(long timeNs)
{


@@ -40,7 +40,10 @@ import java.util.Map;
import java.util.concurrent.ExecutorService;
/**
+ * This class is deprecated and will be removed in the future.
+ * See https://github.com/apache/incubator-druid/pull/4004#issuecomment-284171911 for details about deprecation.
 */
+@Deprecated
public class IntervalChunkingQueryRunner<T> implements QueryRunner<T>
{
private final QueryRunner<T> baseRunner;


@@ -26,6 +26,11 @@ import org.apache.druid.java.util.emitter.service.ServiceEmitter;
import java.util.concurrent.ExecutorService;
+/**
+ * This class is deprecated and will be removed in the future.
+ * See https://github.com/apache/incubator-druid/pull/4004#issuecomment-284171911 for details about deprecation.
+ */
+@Deprecated
public class IntervalChunkingQueryRunnerDecorator
{
private final ExecutorService executor;


@@ -35,6 +35,7 @@ public class QueryContexts
public static final String MAX_SCATTER_GATHER_BYTES_KEY = "maxScatterGatherBytes";
public static final String MAX_QUEUED_BYTES_KEY = "maxQueuedBytes";
public static final String DEFAULT_TIMEOUT_KEY = "defaultTimeout";
+@Deprecated
public static final String CHUNK_PERIOD_KEY = "chunkPeriod";
public static final boolean DEFAULT_BY_SEGMENT = false;
@@ -132,6 +133,7 @@ public class QueryContexts
return parseInt(query, PRIORITY_KEY, defaultValue);
}
+@Deprecated
public static <T> String getChunkPeriod(Query<T> query)
{
return query.getContextValue(CHUNK_PERIOD_KEY, "P0D");


@@ -34,8 +34,7 @@ public class DefaultGroupByQueryMetricsFactory implements GroupByQueryMetricsFac
/**
* Should be used only in tests, directly or indirectly (via {@link
- * GroupByQueryQueryToolChest#GroupByQueryQueryToolChest(org.apache.druid.query.groupby.strategy.GroupByStrategySelector,
- * org.apache.druid.query.IntervalChunkingQueryRunnerDecorator)}).
+ * GroupByQueryQueryToolChest#GroupByQueryQueryToolChest}).
*/
@VisibleForTesting
public static GroupByQueryMetricsFactory instance()


@@ -86,6 +86,7 @@ public class GroupByQueryQueryToolChest extends QueryToolChest<Row, GroupByQuery
public static final String GROUP_BY_MERGE_KEY = "groupByMerge";
private final GroupByStrategySelector strategySelector;
+@Deprecated
private final IntervalChunkingQueryRunnerDecorator intervalChunkingQueryRunnerDecorator;
private final GroupByQueryMetricsFactory queryMetricsFactory;


@@ -69,6 +69,7 @@ public class SearchQueryQueryToolChest extends QueryToolChest<Result<SearchResul
};
private final SearchQueryConfig config;
+@Deprecated
private final IntervalChunkingQueryRunnerDecorator intervalChunkingQueryRunnerDecorator;
private final SearchQueryMetricsFactory queryMetricsFactory;


@@ -78,6 +78,7 @@ public class SelectQueryQueryToolChest extends QueryToolChest<Result<SelectResul
};
private final ObjectMapper jsonMapper;
+@Deprecated
private final IntervalChunkingQueryRunnerDecorator intervalChunkingQueryRunnerDecorator;
private final SelectQueryMetricsFactory queryMetricsFactory;


@@ -34,7 +34,7 @@ public class DefaultTimeseriesQueryMetricsFactory implements TimeseriesQueryMetr
/**
* Should be used only in tests, directly or indirectly (via {@link
- * TimeseriesQueryQueryToolChest#TimeseriesQueryQueryToolChest(org.apache.druid.query.IntervalChunkingQueryRunnerDecorator)}).
+ * TimeseriesQueryQueryToolChest#TimeseriesQueryQueryToolChest}).
*/
@VisibleForTesting
public static TimeseriesQueryMetricsFactory instance()


@@ -73,6 +73,7 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
{
};
+@Deprecated
private final IntervalChunkingQueryRunnerDecorator intervalChunkingQueryRunnerDecorator;
private final TimeseriesQueryMetricsFactory queryMetricsFactory;


@@ -70,6 +70,7 @@ public class TopNQueryQueryToolChest extends QueryToolChest<Result<TopNResultVal
};
private final TopNQueryConfig config;
+@Deprecated
private final IntervalChunkingQueryRunnerDecorator intervalChunkingQueryRunnerDecorator;
private final TopNQueryMetricsFactory queryMetricsFactory;