---
id: reference
title: SQL-based ingestion reference
sidebar_label: Reference
---
:::info
This page describes SQL-based batch ingestion using the `druid-multi-stage-query`
extension, new in Druid 24.0. Refer to the ingestion methods table to determine which
ingestion method is right for you.
:::
## SQL reference
This topic is a reference guide for the multi-stage query architecture in Apache Druid. For examples of real-world usage, refer to the Examples page.
`INSERT` and `REPLACE` load data into a Druid datasource from either an external input source, or from another
datasource. When loading from an external datasource, you typically must provide the kind of input source,
the data format, and the schema (signature) of the input file. Druid provides table functions to allow you to
specify the external file. There are two kinds. `EXTERN` works with the JSON-serialized specs for the three
items, using the same JSON you would use in native ingestion. A set of other, input-source-specific functions
use SQL syntax to specify the format and the input schema. There is one function for each input source. The
input-source-specific functions allow you to use SQL query parameters to specify the set of files (or URIs),
making it easy to reuse the same SQL statement for each ingest: just specify the set of files to use each time.
### EXTERN Function
Use the `EXTERN` function to read external data. The function has two variations.

Function variation 1, with the input schema expressed as JSON:
```sql
SELECT
 <column>
FROM TABLE(
  EXTERN(
    '<Druid input source>',
    '<Druid input format>',
    '<row signature>'
  )
)
```
`EXTERN` consists of the following parts:

- Any Druid input source as a JSON-encoded string.
- Any Druid input format as a JSON-encoded string.
- A row signature, as a JSON-encoded array of column descriptors. Each column descriptor must have a
  `name` and a `type`. The type can be `string`, `long`, `double`, or `float`. This row signature is used to map the external data into the SQL layer.
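For example, a minimal sketch of variation 1 that reads JSON data over HTTP; the URL and column names here are purely illustrative:

```sql
SELECT *
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
```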
Variation 2, with the input schema expressed in SQL using an `EXTEND` clause. (See the next
section for more detail on `EXTEND`). This format also uses named arguments to make the
SQL a bit easier to read:
```sql
SELECT
 <column>
FROM TABLE(
  EXTERN(
    inputSource => '<Druid input source>',
    inputFormat => '<Druid input format>'
  )) (<columns>)
```
The input source and format are as above. The columns are expressed as in a SQL `CREATE TABLE`.
Example: `(timestamp VARCHAR, metricType VARCHAR, value BIGINT)`. The optional `EXTEND` keyword
can precede the column list: `EXTEND (timestamp VARCHAR...)`.
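As a sketch, the same hypothetical HTTP input expressed with variation 2 and an `EXTEND` column list (again, the URL and columns are illustrative):

```sql
SELECT *
FROM TABLE(
  EXTERN(
    inputSource => '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    inputFormat => '{"type": "json"}'
  )
) EXTEND ("timestamp" VARCHAR, page VARCHAR, added BIGINT)
```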
For more information, see Read external data with EXTERN.
### INSERT
Use the `INSERT` statement to insert data.

Unlike standard SQL, `INSERT` loads data into the target table according to column name, not positionally. If necessary,
use `AS` in your `SELECT` column list to assign the correct names. Do not rely on their positions within the `SELECT`
clause.
Statement format:

```sql
INSERT INTO <table name>
< SELECT query >
PARTITIONED BY <time frame>
[ CLUSTERED BY <column list> ]
```
INSERT consists of the following parts:

- Optional context parameters.
- An `INSERT INTO <dataSource>` clause at the start of your query, such as `INSERT INTO your-table`.
- A clause for the data you want to insert, such as `SELECT ... FROM ...`. You can use `EXTERN` to reference external tables using `FROM TABLE(EXTERN(...))`.
- A PARTITIONED BY clause, such as `PARTITIONED BY DAY`.
- An optional CLUSTERED BY clause.
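Putting these parts together, here is a minimal sketch of an `INSERT` ingestion; the target table name, URL, and columns are illustrative:

```sql
INSERT INTO "wikipedia_new"
SELECT
  TIME_PARSE("timestamp") AS __time,
  page,
  added
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
PARTITIONED BY DAY
```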
For more information, see Load data with INSERT.
### REPLACE
You can use the `REPLACE` function to replace all or some of the data.

Unlike standard SQL, `REPLACE` loads data into the target table according to column name, not positionally. If necessary,
use `AS` in your `SELECT` column list to assign the correct names. Do not rely on their positions within the `SELECT`
clause.
#### REPLACE all data
Function format to replace all data:

```sql
REPLACE INTO <target table>
OVERWRITE ALL
< SELECT query >
PARTITIONED BY <time granularity>
[ CLUSTERED BY <column list> ]
```
#### REPLACE specific time ranges
Function format to replace specific time ranges:

```sql
REPLACE INTO <target table>
OVERWRITE WHERE __time >= TIMESTAMP '<lower bound>' AND __time < TIMESTAMP '<upper bound>'
< SELECT query >
PARTITIONED BY <time granularity>
[ CLUSTERED BY <column list> ]
```
`REPLACE` consists of the following parts:

- Optional context parameters.
- A `REPLACE INTO <dataSource>` clause at the start of your query, such as `REPLACE INTO "your-table"`.
- An OVERWRITE clause after the datasource, either OVERWRITE ALL or OVERWRITE WHERE:
  - OVERWRITE ALL replaces the entire existing datasource with the results of the query.
  - OVERWRITE WHERE drops the time segments that match the condition you set. Conditions are based on the `__time` column and use the format `__time [< > = <= >=] TIMESTAMP`. Use them with AND, OR, and NOT between them, inclusive of the timestamps specified. No other expressions or functions are valid in OVERWRITE.
- A clause for the actual data you want to use for the replacement.
- A PARTITIONED BY clause, such as `PARTITIONED BY DAY`.
- An optional CLUSTERED BY clause.
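As a sketch, the following hypothetical query rewrites a single day of an existing datasource, keeping only the rows that match a filter; the table and column names are illustrative:

```sql
REPLACE INTO "wikipedia_new"
OVERWRITE WHERE __time >= TIMESTAMP '2019-08-25' AND __time < TIMESTAMP '2019-08-26'
SELECT __time, page, added
FROM "wikipedia_new"
WHERE __time >= TIMESTAMP '2019-08-25'
  AND __time < TIMESTAMP '2019-08-26'
  AND page <> 'Main_Page'
PARTITIONED BY DAY
```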
For more information, see Overwrite data with REPLACE.
### PARTITIONED BY
The `PARTITIONED BY <time granularity>` clause is required for INSERT and REPLACE. See
Partitioning for details.
The following granularity arguments are accepted:

- Time unit keywords: `HOUR`, `DAY`, `MONTH`, or `YEAR`. Equivalent to `FLOOR(__time TO TimeUnit)`.
- Time units as ISO 8601 period strings: `'PT1H'`, `'P1D'`, etc. (Druid 26.0 and later.)
- `TIME_FLOOR(__time, 'granularity_string')`, where `granularity_string` is one of the ISO 8601 periods listed below. The first argument must be `__time`.
- `FLOOR(__time TO TimeUnit)`, where `TimeUnit` is any unit supported by the FLOOR function. The first argument must be `__time`.
- `ALL` or `ALL TIME`, which effectively disables time partitioning by placing all data in a single time chunk. To use LIMIT or OFFSET at the outer level of your `INSERT` or `REPLACE` query, you must set `PARTITIONED BY` to `ALL` or `ALL TIME`.
Earlier versions required the `TIME_FLOOR` notation to specify a granularity other than the keywords.
In the current version, the string constant provides a simpler equivalent solution.
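For example, the following clauses all partition by hour and are equivalent; each would appear after the SELECT in an INSERT or REPLACE statement:

```sql
PARTITIONED BY HOUR
PARTITIONED BY 'PT1H'                      -- Druid 26.0 and later
PARTITIONED BY TIME_FLOOR(__time, 'PT1H')
```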
The following ISO 8601 periods are supported for `TIME_FLOOR` and the string constant:
- PT1S
- PT1M
- PT5M
- PT10M
- PT15M
- PT30M
- PT1H
- PT6H
- P1D
- P1W*
- P1M
- P3M
- P1Y
For more information about partitioning, see Partitioning.
*Avoid partitioning by week, `P1W`, because weeks don't align neatly with months and years, making it difficult to partition by coarser granularities later.
### CLUSTERED BY
The `CLUSTERED BY <column list>` clause is optional for INSERT and REPLACE. It accepts a list of
column names or expressions. Druid's segment generation only supports ascending order, so an `INSERT`
or `REPLACE` query with `CLUSTERED BY` columns in `DESC` ordering is not allowed.
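For instance, a hypothetical query that partitions by day and clusters rows within each day by channel and page might look like the following (the datasource and column names are illustrative):

```sql
REPLACE INTO "pageviews_rollup"
OVERWRITE ALL
SELECT __time, channel, page, added
FROM "pageviews"
PARTITIONED BY DAY
CLUSTERED BY channel, page
```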
For more information about clustering, see Clustering.
## Context parameters
In addition to the Druid SQL context parameters, the multi-stage query task engine accepts certain context parameters that are specific to it.
Use context parameters alongside your queries to customize the behavior of the query. If you're using the API, include the context parameters in the query context when you submit a query:
```json
{
  "query": "SELECT 1 + 1",
  "context": {
    "<key>": "<value>",
    "maxNumTasks": 3
  }
}
```
If you're using the web console, you can specify the context parameters through various UI options.
The following table lists the context parameters for the MSQ task engine:
| Parameter | Description | Default value |
|---|---|---|
| `maxNumTasks` | SELECT, INSERT, REPLACE<br/><br/>The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a `TaskStartTimeout` error code after approximately 10 minutes.<br/><br/>May also be provided as `numTasks`. If both are present, `maxNumTasks` takes priority. | 2 |
| `taskAssignment` | SELECT, INSERT, REPLACE<br/><br/>Determines how many tasks to use. Possible values include `max` and `auto`. | `max` |
| `finalizeAggregations` | SELECT, INSERT, REPLACE<br/><br/>Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see SQL aggregation functions. | `true` |
| `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE<br/><br/>Algorithm to use for JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See Joins for more details. | `broadcast` |
| `rowsInMemory` | INSERT or REPLACE<br/><br/>Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the known issues around memory usage. | 100,000 |
| `segmentSortOrder` | INSERT or REPLACE<br/><br/>Normally, Druid sorts rows in individual segments using `__time` first, followed by the CLUSTERED BY clause. When you set `segmentSortOrder`, Druid sorts rows in segments using this column list first, followed by the CLUSTERED BY order.<br/><br/>You provide the column list as comma-separated values or as a JSON array in string form. If your query includes `__time`, then this list must begin with `__time`. For example, consider an INSERT query that uses CLUSTERED BY `country` and has `segmentSortOrder` set to `__time,city`. Within each time chunk, Druid assigns rows to segments based on `country`, and then within each of those segments, Druid sorts those rows by `__time` first, then `city`, then `country`. | empty list |
| `maxParseExceptions` | SELECT, INSERT, REPLACE<br/><br/>Maximum number of parse exceptions that are ignored while executing the query before it stops with `TooManyWarningsFault`. To ignore all the parse exceptions, set the value to -1. | 0 |
| `rowsPerSegment` | INSERT or REPLACE<br/><br/>The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see Segment Size Optimization. | 3,000,000 |
| `indexSpec` | INSERT or REPLACE<br/><br/>An `indexSpec` to use when generating segments. May be a JSON string or object. See Front coding for details on configuring an `indexSpec` with front coding. | See `indexSpec`. |
| `durableShuffleStorage` | SELECT, INSERT, REPLACE<br/><br/>Whether to use durable storage for shuffle mesh. To use this feature, configure the durable storage at the server level using `druid.msq.intermediate.storage.enable=true`. If these properties are not configured, any query with the context variable `durableShuffleStorage=true` fails with a configuration error. | `false` |
| `faultTolerance` | SELECT, INSERT, REPLACE<br/><br/>Whether to turn on fault tolerance mode or not. Failed workers are retried based on Limits. Cannot be used when `durableShuffleStorage` is explicitly set to false. | `false` |
| `selectDestination` | SELECT<br/><br/>Controls where the final result of the select query is written. Use `taskReport` (the default) to write select results to the task report. This is not scalable, since the task report size explodes for large results. Use `durableStorage` to write results to a durable storage location. For large result sets, it's recommended to use `durableStorage`. To configure durable storage, see Durable storage. | `taskReport` |
| `waitTillSegmentsLoad` | INSERT, REPLACE<br/><br/>If set, the ingest query waits for the generated segments to be loaded before exiting; otherwise, the ingest query exits without waiting. The task and live reports contain information about the status of loading segments if this flag is set. This ensures that any queries made after the ingestion exits include results from the ingestion. The drawback is that the controller task stalls until the segments are loaded. | `false` |
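For example, an illustrative query context for an ingestion submitted through the API might combine several of these parameters; the parameter values are arbitrary and the SQL is abbreviated for readability:

```json
{
  "query": "INSERT INTO wikipedia_new SELECT ... PARTITIONED BY DAY",
  "context": {
    "maxNumTasks": 4,
    "finalizeAggregations": false,
    "rowsPerSegment": 3000000
  }
}
```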
## Joins
Joins in multi-stage queries use one of two algorithms based on what you set the context parameter `sqlJoinAlgorithm` to: `broadcast` (the default) or `sortMerge`.

If you omit this context parameter, the MSQ task engine uses broadcast since it's the default join algorithm. The context parameter applies to the entire SQL statement, so you can't mix different join algorithms in the same query.
`sqlJoinAlgorithm` is a hint to the planner to execute the join in the specified manner. The planner can decide to ignore
the hint if it deduces that the specified algorithm would be detrimental to the performance of the join. This intelligence
is currently very limited, so the `sqlJoinAlgorithm` you set is respected in most cases; choose it
appropriately. Review the advantages and drawbacks of the broadcast and sort-merge joins below to
determine which join to use beforehand.
### Broadcast
The default join algorithm for multi-stage queries is a broadcast hash join, which is similar to how joins are executed with native queries.
To use broadcast joins, either omit `sqlJoinAlgorithm` or set it to `broadcast`.

For a broadcast join, any adjacent joins are flattened into a structure with a "base" input (the bottom-leftmost one) and other leaf inputs (the rest). Next, any subqueries that are inputs to the join (either the base or other leaves) are planned into independent stages. Then, the non-base leaf inputs are all connected as broadcast inputs to the "base" stage.

Together, all of these non-base leaf inputs must not exceed the limit on broadcast table footprint. There is no limit on the size of the base (leftmost) input.

Only LEFT JOIN, INNER JOIN, and CROSS JOIN are supported with `broadcast`.

Join conditions, if present, must be equalities. It is not necessary to include a join condition; for example,
`CROSS JOIN` and comma join do not require join conditions.
The following example has a single join chain where `orders` is the base input while `products` and
`customers` are non-base leaf inputs. The broadcast inputs (`products` and `customers`) must fall under the limit on broadcast table footprint, but the base `orders` input
can be unlimited in size.

The query reads `products` and `customers` and then broadcasts both to
the stage that reads `orders`. That stage loads the broadcast inputs (`products` and `customers`) in memory and walks
through `orders` row by row. The results are aggregated and written to the table `orders_enriched`.
```sql
REPLACE INTO orders_enriched
OVERWRITE ALL
SELECT
  orders.__time,
  products.name AS product_name,
  customers.name AS customer_name,
  SUM(orders.amount) AS amount
FROM orders
LEFT JOIN products ON orders.product_id = products.id
LEFT JOIN customers ON orders.customer_id = customers.id
GROUP BY 1, 2, 3
PARTITIONED BY HOUR
CLUSTERED BY product_name
```
### Sort-merge
You can use the sort-merge join algorithm to make queries more scalable at the cost of performance. If your goal is performance, consider broadcast joins. There are various scenarios where broadcast join would return a `BroadcastTablesTooLarge` error, but a sort-merge join would succeed.

To use the sort-merge join algorithm, set the context parameter `sqlJoinAlgorithm` to `sortMerge`.
In a sort-merge join, each pairwise join is planned into its own stage with two inputs. The two inputs are partitioned and sorted using a hash partitioning on the same key.
When using the sort-merge algorithm, keep the following in mind:

- There is no limit on the overall size of either input, so sort-merge is a good choice for performing a join of two large inputs or for performing a self-join of a large input with itself.
- There is a limit on the amount of data associated with each individual key. If both sides of the join exceed this limit, the query returns a `TooManyRowsWithSameKey` error. If only one side exceeds the limit, the query does not return this error.
- Join conditions are optional but must be equalities if they are present. For example, `CROSS JOIN` and comma join do not require join conditions.
- All join types are supported with `sortMerge`: LEFT, RIGHT, INNER, FULL, and CROSS.
The following example runs using a single sort-merge join stage that receives `eventstream`
(partitioned on `user_id`) and `users` (partitioned on `id`) as inputs. There is no limit on the size of either input.
```sql
REPLACE INTO eventstream_enriched
OVERWRITE ALL
SELECT
  eventstream.__time,
  eventstream.user_id,
  eventstream.event_type,
  eventstream.event_details,
  users.signup_date AS user_signup_date
FROM eventstream
LEFT JOIN users ON eventstream.user_id = users.id
PARTITIONED BY HOUR
CLUSTERED BY user_id
```
The context parameter that sets `sqlJoinAlgorithm` to `sortMerge` is not shown in the above example.
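For instance, submitting the query above through the API with the sort-merge hint might look like the following sketch, with the SQL abbreviated for readability:

```json
{
  "query": "REPLACE INTO eventstream_enriched OVERWRITE ALL SELECT ... PARTITIONED BY HOUR CLUSTERED BY user_id",
  "context": {
    "sqlJoinAlgorithm": "sortMerge"
  }
}
```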
## Durable storage
SQL-based ingestion supports using durable storage to store intermediate files temporarily. Enabling it can improve reliability. For more information, see Durable storage.
### Durable storage configurations
Durable storage is supported on Amazon S3 storage and Microsoft's Azure storage. A few common configurations control the behavior for both services, as documented below. Apart from the common configurations, there are a few properties specific to each storage type that must be set.

Common properties to configure the behavior of durable storage:
| Parameter | Default | Description |
|---|---|---|
| `druid.msq.intermediate.storage.enable` | false | Whether to enable durable storage for the cluster. Set it to true to enable durable storage. For more information about enabling durable storage, see Durable storage. |
| `druid.msq.intermediate.storage.type` | n/a | Required. The type of storage to use. Set it to `s3` for S3 and `azure` for Azure. |
| `druid.msq.intermediate.storage.tempDir` | n/a | Required. Directory path on the local disk to store temporary files required while uploading and downloading the data. |
| `druid.msq.intermediate.storage.maxRetry` | 10 | Optional. Defines the maximum number of times to attempt S3 API calls to avoid failures due to transient errors. |
| `druid.msq.intermediate.storage.chunkSize` | 100MiB | Optional. Defines the size of each chunk to temporarily store in `druid.msq.intermediate.storage.tempDir`. The chunk size must be between 5 MiB and 5 GiB. A large chunk size reduces the API calls made to the durable storage, however it requires more disk space to store the temporary chunks. Druid uses a default of 100MiB if the value is not provided. |
The following properties need to be set in addition to the common properties to enable durable storage on S3:
| Parameter | Default | Description |
|---|---|---|
| `druid.msq.intermediate.storage.bucket` | n/a | Required. The S3 bucket where the files are uploaded to and downloaded from. |
| `druid.msq.intermediate.storage.prefix` | n/a | Required. Path prepended to all the paths uploaded to the bucket to namespace the connector's files. Provide a unique value for the prefix and do not share the same prefix between different clusters. If the location includes other files or directories, then they might get cleaned up as well. |
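For example, a minimal sketch of the server-level properties for S3-backed durable storage might look like the following; the bucket, prefix, and directory values are illustrative:

```properties
# Enable durable storage for MSQ intermediate data, backed by S3 (illustrative values)
druid.msq.intermediate.storage.enable=true
druid.msq.intermediate.storage.type=s3
druid.msq.intermediate.storage.bucket=your-bucket
druid.msq.intermediate.storage.prefix=msq-intermediate
druid.msq.intermediate.storage.tempDir=/tmp/msq-intermediate
```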
The following properties must be set in addition to the common properties to enable durable storage on Azure:
| Parameter | Default | Description |
|---|---|---|
| `druid.msq.intermediate.storage.container` | n/a | Required. The Azure container where the files are uploaded to and downloaded from. |
| `druid.msq.intermediate.storage.prefix` | n/a | Required. Path prepended to all the paths uploaded to the container to namespace the connector's files. Provide a unique value for the prefix and do not share the same prefix between different clusters. If the location includes other files or directories, then they might get cleaned up as well. |
Durable storage creates files on the remote storage, and those files are cleaned up once the job no longer requires them. However, if tasks exit abruptly due to failures, these files might not get cleaned up. Therefore, you can configure certain properties on the Overlord specifically to clean up intermediate files for tasks that have completed and no longer require these files:
| Parameter | Default | Description |
|---|---|---|
| `druid.msq.intermediate.storage.cleaner.enabled` | false | Optional. Whether the durable storage cleaner should be enabled for the cluster. |
| `druid.msq.intermediate.storage.cleaner.delaySeconds` | 86400 | Optional. The delay (in seconds) after the last run before the durable storage cleaner cleans up the outputs. |
## Limits
Knowing the limits for the MSQ task engine can help you troubleshoot any errors that you encounter. Many of the errors occur as a result of reaching a limit.
The following table lists query limits:
| Limit | Value | Error if exceeded |
|---|---|---|
| Size of an individual row written to a frame. Row size when written to a frame may differ from the original row size. | 1 MB | `RowTooLarge` |
| Number of segment-granular time chunks encountered during ingestion. | 5,000 | `TooManyBuckets` |
| Number of input files/segments per worker. | 10,000 | `TooManyInputFiles` |
| Number of output partitions for any one stage. Number of segments generated during ingestion. | 25,000 | `TooManyPartitions` |
| Number of output columns for any one stage. | 2,000 | `TooManyColumns` |
| Number of cluster by columns that can appear in a stage. | 1,500 | `TooManyClusteredByColumns` |
| Number of workers for any one stage. | Hard limit is 1,000. Memory-dependent soft limit may be lower. | `TooManyWorkers` |
| Maximum memory occupied by broadcasted tables. | 30% of each processor memory bundle. | `BroadcastTablesTooLarge` |
| Maximum memory occupied by buffered data during sort-merge join. Only relevant when `sqlJoinAlgorithm` is `sortMerge`. | 10 MB | `TooManyRowsWithSameKey` |
| Maximum relaunch attempts per worker. Initial run is not a relaunch. The worker will be spawned `1 + workerRelaunchLimit` times before the job fails. | 2 | `TooManyAttemptsForWorker` |
| Maximum relaunch attempts for a job across all workers. | 100 | `TooManyAttemptsForJob` |
## Error codes
The following table describes error codes you may encounter in the `multiStageQuery.payload.status.errorReport.error.errorCode` field:
| Code | Meaning | Additional fields |
|---|---|---|
| `BroadcastTablesTooLarge` | The size of the broadcast tables used in the right hand side of the join exceeded the memory reserved for them in a worker task.<br/><br/>Try increasing the peon memory or reducing the size of the broadcast tables. | `maxBroadcastTablesSize`: Memory reserved for the broadcast tables, measured in bytes. |
| `Canceled` | The query was canceled. Common reasons for cancellation include an explicit user cancellation request or a shutdown of the controller task. | |
| `CannotParseExternalData` | A worker task could not parse data from an external datasource. | `errorMessage`: More details on why parsing failed. |
| `ColumnNameRestricted` | The query uses a restricted column name. | `columnName`: The restricted column name. |
| `ColumnTypeNotSupported` | The column type is not supported. This can happen when reading from or writing to a particular column type is unsupported, or when the query uses a column type that is not supported by the frame format. | `columnName`: The column name with an unsupported type.<br/><br/>`columnType`: The unknown column type. |
| `InsertCannotAllocateSegment` | The controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts include mixing different segment granularities within the same interval of the same datasource, or prior ingestions that used non-extendable shard specs.<br/><br/>Use REPLACE to overwrite the existing data, or, if the error contains the `allocatedInterval`, rerun the INSERT job with the mentioned granularity to append to existing data. Note that it might not always be possible to append to the existing data using INSERT; this can only be done if `allocatedInterval` is present. | `dataSource`<br/><br/>`interval`: The interval for the attempted new segment allocation.<br/><br/>`allocatedInterval`: The incorrect interval allocated by the overlord. It can be null. |
| `InsertCannotBeEmpty` | An INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with `PARTITIONED BY` set to something other than `ALL` or `ALL TIME`. | `dataSource` |
| `InsertLockPreempted` | An INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task. | |
| `InsertTimeNull` | An INSERT or REPLACE query encountered a null timestamp in the `__time` field.<br/><br/>This can happen due to using an expression like `TIME_PARSE(timestamp) AS __time` with a timestamp that cannot be parsed. (`TIME_PARSE` returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern. Or, if your timestamps may genuinely be null, consider using `COALESCE` to provide a default value. One option is `CURRENT_TIMESTAMP`, which represents the start time of the job. | |
| `InsertTimeOutOfBounds` | A REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause.<br/><br/>To avoid this error, verify that the time range you specified is valid. | `interval`: time chunk interval corresponding to the out-of-bounds timestamp |
| `InvalidNullByte` | A string column included a null byte. Null bytes in strings are not permitted. | `source`: The source that included the null byte.<br/><br/>`rowNumber`: The row number (1-indexed) that included the null byte.<br/><br/>`column`: The column that included the null byte.<br/><br/>`value`: Actual string containing the null byte.<br/><br/>`position`: Position (1-indexed) of occurrence of null byte. |
| `QueryNotSupported` | QueryKit could not translate the provided native query to a multi-stage query.<br/><br/>This can happen if the query uses features that aren't supported, like GROUPING SETS. | |
| `QueryRuntimeError` | MSQ uses the native query engine to run the leaf stages. This error indicates that an error occurred in the native query runtime.<br/><br/>Because this is a generic error, check the logs for the error message and stack trace to determine the next course of action. If you are stuck, consider raising a GitHub issue for assistance. | `baseErrorMessage`: error message from the native query runtime. |
| `RowTooLarge` | The query tried to process a row that was too large to write to a single frame. See the Limits table for specific limits on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing. | `maxFrameSize`: The limit on the frame size. |
| `TaskStartTimeout` | Unable to launch `pendingTasks` worker tasks out of `totalTasks` total worker tasks within `timeout` seconds of the last successful worker launch.<br/><br/>There may be insufficient available slots to start all the worker tasks simultaneously. Try splitting up your query into smaller chunks using a smaller value of `maxNumTasks`. Another option is to increase capacity. | `pendingTasks`: Number of tasks not yet started.<br/><br/>`totalTasks`: The number of tasks attempted to launch.<br/><br/>`timeout`: Timeout, in milliseconds, that was exceeded. |
| `TooManyAttemptsForJob` | Total relaunch attempt count across all workers exceeded max relaunch attempt limit. See the Limits table for the specific limit. | `maxRelaunchCount`: Max number of relaunches across all the workers defined in the Limits section.<br/><br/>`currentRelaunchCount`: current relaunch counter for the job across all workers.<br/><br/>`taskId`: Latest task id which failed.<br/><br/>`rootErrorMessage`: Error message of the latest failed task. |
| `TooManyAttemptsForWorker` | Worker exceeded maximum relaunch attempt count as defined in the Limits section. | `maxPerWorkerRelaunchCount`: Max number of relaunches allowed per worker as defined in the Limits section.<br/><br/>`workerNumber`: The worker number for which the task failed.<br/><br/>`taskId`: Latest task id which failed.<br/><br/>`rootErrorMessage`: Error message of the latest failed task. |
| `TooManyBuckets` | Exceeded the maximum number of partition buckets for a stage (5,000 partition buckets).<br/><br/>Partition buckets are created for each PARTITIONED BY time chunk for INSERT and REPLACE queries. The most common reason for this error is that your PARTITIONED BY is too narrow relative to your data. | `maxBuckets`: The limit on partition buckets. |
| `TooManyInputFiles` | Exceeded the maximum number of input files or segments per worker (10,000 files or segments).<br/><br/>If you encounter this limit, consider adding more workers, or breaking up your query into smaller queries that process fewer files or segments per query. | `numInputFiles`: The total number of input files/segments for the stage.<br/><br/>`maxInputFiles`: The maximum number of input files/segments per worker per stage.<br/><br/>`minNumWorker`: The minimum number of workers required for a successful run. |
| `TooManyPartitions` | Exceeded the maximum number of partitions for a stage (25,000 partitions).<br/><br/>This can occur with INSERT or REPLACE statements that generate large numbers of segments, since each segment is associated with a partition. If you encounter this limit, consider breaking up your INSERT or REPLACE statement into smaller statements that process less data per statement. | `maxPartitions`: The limit on partitions which was exceeded. |
| `TooManyClusteredByColumns` | Exceeded the maximum number of clustering columns for a stage (1,500 columns).<br/><br/>This can occur with `CLUSTERED BY`, `ORDER BY`, or `GROUP BY` with a large number of columns. | `numColumns`: The number of columns requested.<br/><br/>`maxColumns`: The limit on columns which was exceeded.<br/><br/>`stage`: The stage number exceeding the limit. |
| `TooManyRowsWithSameKey` | The number of rows for a given key exceeded the maximum number of buffered bytes on both sides of a join. See the Limits table for the specific limit. Only occurs when join is executed via the sort-merge join algorithm. | `key`: The key that had a large number of rows.<br/><br/>`numBytes`: Number of bytes buffered, which may include other keys.<br/><br/>`maxBytes`: Maximum number of bytes buffered. |
| `TooManyColumns` | Exceeded the maximum number of columns for a stage (2,000 columns). | `numColumns`: The number of columns requested.<br/><br/>`maxColumns`: The limit on columns which was exceeded. |
| `TooManyWarnings` | Exceeded the maximum allowed number of warnings of a particular type. | `rootErrorCode`: The error code corresponding to the exception that exceeded the required limit.<br/><br/>`maxWarnings`: Maximum number of warnings that are allowed for the corresponding `rootErrorCode`. |
| `TooManyWorkers` | Exceeded the maximum number of simultaneously-running workers. See the Limits table for more details. | `workers`: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously.<br/><br/>`maxWorkers`: The hard or soft limit on workers that was exceeded. If this is lower than the hard limit (1,000 workers), then you can increase the limit by adding more memory to each task. |
| `NotEnoughMemory` | Insufficient memory to launch a stage. | `suggestedServerMemory`: Suggested number of bytes of memory to allocate to a given process.<br/><br/>`serverMemory`: The number of bytes of memory available to a single process.<br/><br/>`usableMemory`: The number of usable bytes of memory for a single process.<br/><br/>`serverWorkers`: The number of workers running in a single process.<br/><br/>`serverThreads`: The number of threads in a single process. |
| `NotEnoughTemporaryStorage` | Insufficient temporary storage configured to launch a stage. This limit is set by the property `druid.indexer.task.tmpStorageBytesPerTask`. Increase this property to at least the suggested minimum to resolve the error. | `suggestedMinimumStorage`: Suggested number of bytes of temporary storage space to allocate to a given process.<br/><br/>`configuredTemporaryStorage`: The number of bytes of storage currently configured. |
| `WorkerFailed` | A worker task failed unexpectedly. | `errorMsg`<br/><br/>`workerTaskId`: The ID of the worker task. |
| `WorkerRpcFailed` | A remote procedure call to a worker task failed and could not recover. | `workerTaskId`: The ID of the worker task. |
| `UnknownError` | All other errors. | `message` |