---
id: reference
title: SQL-based ingestion reference
sidebar_label: Reference
---

> This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
> extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
> ingestion method is right for you.

## SQL reference

This topic is a reference guide for the multi-stage query architecture in Apache Druid. For examples of real-world
usage, refer to the [Examples](examples.md) page.

`INSERT` and `REPLACE` load data into a Druid datasource from either an external input source, or from another
datasource. When loading from an external input source, you typically must provide the kind of input source,
the data format, and the schema (signature) of the input file. Druid provides *table functions* to allow you to
specify the external file. There are two kinds. `EXTERN` works with the JSON-serialized specs for the three
items, using the same JSON you would use in native ingestion. A set of other, input-source-specific functions use
SQL syntax to specify the format and the input schema. There is one function for each input source. The
input-source-specific functions allow you to use SQL query parameters to specify the set of files (or URIs),
making it easy to reuse the same SQL statement for each ingest: just specify the set of files to use each time.

### `EXTERN` Function

Use the `EXTERN` function to read external data. The function has two variations.

Function variation 1, with the input schema expressed as JSON:

```sql
SELECT
 <column>
FROM TABLE(
  EXTERN(
    '<Druid input source>',
    '<input format>',
    '<row signature>'
  )
)
```

`EXTERN` consists of the following parts:

1. Any [Druid input source](../ingestion/input-sources.md) as a JSON-encoded string.
2. Any [Druid input format](../ingestion/data-formats.md) as a JSON-encoded string.
3. A row signature, as a JSON-encoded array of column descriptors. Each column descriptor must have a `name` and a `type`. The type can be `string`, `long`, `double`, or `float`. This row signature is used to map the external data into the SQL layer.

Variation 2, with the input schema expressed in SQL using an `EXTEND` clause. (See the next section for more detail on `EXTEND`.) This format also uses named arguments to make the SQL a bit easier to read:

```sql
SELECT
 <column>
FROM TABLE(
  EXTERN(
    inputSource => '<Druid input source>',
    inputFormat => '<input format>'
  )
) (<columns>)
```

The input source and format are as above. The columns are expressed as in a SQL `CREATE TABLE`. Example:
`(timestamp VARCHAR, metricType VARCHAR, value BIGINT)`. The optional `EXTEND` keyword can precede the column list:
`EXTEND (timestamp VARCHAR...)`.

For more information, see [Read external data with EXTERN](concepts.md#read-external-data-with-extern).

### `INSERT`

Use the `INSERT` statement to insert data.

Unlike standard SQL, `INSERT` loads data into the target table according to column name, not positionally. If necessary,
use `AS` in your `SELECT` column list to assign the correct names. Do not rely on their positions within the SELECT
clause.

Statement format:

```sql
INSERT INTO <table name>
< SELECT query >
PARTITIONED BY <time frame>
```
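
To tie the two statements together, here is a minimal sketch of an `INSERT` that reads external JSON data with the first `EXTERN` variation and partitions the target datasource by day. The datasource name, URI, and column names are hypothetical placeholders, not part of any shipped example:

```sql
-- Hypothetical sketch: ingest a gzipped JSON file over HTTP into a datasource
-- named "pageviews". The URI and column names are placeholders.
INSERT INTO pageviews
SELECT
  TIME_PARSE("timestamp") AS __time,  -- map the input timestamp to Druid's __time column
  page,
  added
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/pageviews.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
PARTITIONED BY DAY
```

Because `INSERT` matches columns by name, the `AS __time` alias is what routes the parsed timestamp into the primary time column, regardless of its position in the `SELECT` list.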