---
id: update
title: Data updates
---

## Overwrite

Apache Druid stores data partitioned by time chunk and supports overwriting existing data using time ranges. Data outside the replacement time range is not touched. Overwriting of existing data is done using the same mechanisms as batch ingestion.

For example:

- Native batch ingestion with `appendToExisting: false` overwrites data in the time chunks it writes to.
- SQL-based ingestion with `REPLACE <table> OVERWRITE ...` overwrites the entire table (`OVERWRITE ALL`) or a specific time range (`OVERWRITE WHERE ...`).

In both cases, Druid's atomic update mechanism ensures that queries will flip seamlessly from the old data to the new data on a time-chunk-by-time-chunk basis.
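As an illustration, here is a minimal sketch of a time-range overwrite using SQL-based ingestion. The datasource name `wikipedia`, the input file path, and the column list are hypothetical:

```sql
-- Replace one day of the hypothetical "wikipedia" datasource.
-- Only time chunks matched by OVERWRITE WHERE are rewritten; everything
-- outside that range is untouched. The SELECT must produce rows that fall
-- within the replaced range, so the input file is assumed to contain only
-- data for 2016-06-27.
REPLACE INTO wikipedia
OVERWRITE WHERE __time >= TIMESTAMP '2016-06-27 00:00:00'
            AND __time <  TIMESTAMP '2016-06-28 00:00:00'
SELECT
  TIME_PARSE("timestamp") AS __time,
  page,
  added
FROM TABLE(
  EXTERN(
    '{"type": "local", "files": ["/tmp/wikipedia-2016-06-27.json"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
PARTITIONED BY DAY
```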

Ingestion and overwriting cannot run concurrently for the same time range of the same datasource. While an overwrite job is ongoing for a particular time range of a datasource, new ingestions for that time range are queued up. Ingestions for other time ranges proceed as normal. Read-only queries also proceed as normal, using the pre-existing version of the data.

Druid does not support single-record updates by primary key.

## Reindex

Reindexing is an overwrite of existing data where the source of new data is the existing data itself. It is used to perform schema changes, repartition data, filter out unwanted data, enrich existing data, and so on. This behaves just like any other overwrite with regard to atomic updates and locking.

With native batch, use the `druid` input source. If needed, a `transformSpec` can be used to filter or modify data during the reindexing job.

With SQL, use `REPLACE <table> OVERWRITE` with `SELECT ... FROM <table>`. (Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL `SELECT` query can be used to filter, modify, or enrich the data during the reindexing job.
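For example, a sketch of a SQL-based reindex that filters out unwanted rows while rewriting the whole table; the datasource name `wikipedia` and the `isRobot` column are hypothetical:

```sql
-- Reindex: the source of the new data is the datasource itself.
-- OVERWRITE ALL rewrites every time chunk; the WHERE clause drops
-- unwanted rows as part of the rewrite.
REPLACE INTO wikipedia
OVERWRITE ALL
SELECT *
FROM wikipedia
WHERE isRobot = 'false'
PARTITIONED BY DAY
```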

## Rolled-up datasources

Rolled-up datasources can be effectively updated using appends, without rewrites. When you append a row that has an identical set of dimensions to an existing row, queries that use aggregation operators automatically combine those two rows together at query time.
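For example, assuming a hypothetical rolled-up datasource `pageview_rollup` with a `page` dimension and a `views` metric, an aggregating query returns the combined value even when appends have produced multiple physical rows with the same dimension values:

```sql
-- Two physical rows with identical (__time, page) values behave as one
-- logical row: SUM combines them at query time.
SELECT
  FLOOR(__time TO DAY) AS "day",
  page,
  SUM(views) AS total_views
FROM pageview_rollup
GROUP BY 1, 2
```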

Compaction or automatic compaction can physically combine these matching rows later on by rewriting segments in the background.

## Lookups

If you have a dimension whose values need frequent updates, first try using lookups. The classic lookup use case is an ID dimension stored in a Druid segment that you want to map to a human-readable string, where the mapping may need to be updated periodically.
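For example, a sketch using Druid SQL's `LOOKUP` function; the lookup name `country_name`, the `countryId` column, and the `wikipedia` datasource are hypothetical:

```sql
-- Resolve the stored countryId dimension to a human-readable name at
-- query time. Updating the "country_name" lookup changes future query
-- results without rewriting any segments.
SELECT
  LOOKUP(countryId, 'country_name') AS country,
  COUNT(*) AS "count"
FROM wikipedia
GROUP BY 1
```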