mirror of https://github.com/apache/druid.git
add links to release notes, light refactor of landing page (#11051)
* add links to release notes, light refactor of landing page
* Update docs/design/index.md
This commit is contained in:
parent
3b9dad4c9e
commit
cf2cde1d2d
@@ -22,79 +22,74 @@ title: "Introduction to Apache Druid"

  ~ under the License.
  -->
## What is Druid?

Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Most often, Druid powers use cases where real-time ingestion, fast query performance, and high uptime are important.

Druid is commonly used as the database backend for GUIs of analytical applications, or for highly concurrent APIs that need fast aggregations. Druid works best with event-oriented data.
Common application areas for Druid include:

- Clickstream analytics including web and mobile analytics
- Network telemetry analytics including network performance monitoring
- Server metrics storage
- Supply chain analytics including manufacturing metrics
- Application performance metrics
- Digital marketing/advertising analytics
- Business intelligence/OLAP

## Key features of Druid

Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of Druid's key features are:

1. **Columnar storage format.** Druid uses column-oriented storage. This means it only loads the exact columns needed for a particular query, which greatly improves speed for queries that retrieve only a few columns. Additionally, to support fast scans and aggregations, Druid optimizes storage for each column according to its data type.
2. **Scalable distributed system.** Typical Druid deployments span clusters ranging from tens to hundreds of servers. Druid can ingest data at the rate of millions of records per second while retaining trillions of records and maintaining query latencies ranging from sub-second to a few seconds.
3. **Massively parallel processing.** Druid can process each query in parallel across the entire cluster.
4. **Realtime or batch ingestion.** Druid can ingest data either in real time or in batches. Ingested data is immediately available for querying.
5. **Self-healing, self-balancing, easy to operate.** As an operator, you add servers to scale out or remove servers to scale down. The Druid cluster rebalances itself automatically in the background without any downtime. If a Druid server fails, the system automatically routes data around the damage until the server can be replaced. Druid is designed to run continuously without planned downtime for any reason. This is true for configuration changes and software updates.
6. **Cloud-native, fault-tolerant architecture that won't lose data.** After ingestion, Druid safely stores a copy of your data in [deep storage](architecture.md#deep-storage). Deep storage is typically cloud storage, HDFS, or a shared filesystem. You can recover your data from deep storage even in the unlikely case that all Druid servers fail. For a limited failure that affects only a few Druid servers, replication ensures that queries are still possible during system recovery.
7. **Indexes for quick filtering.** Druid uses [Roaring](https://roaringbitmap.org/) or [CONCISE](https://arxiv.org/pdf/1004.0403) compressed bitmap indexes to enable fast filtering and searching across multiple columns.
8. **Time-based partitioning.** Druid first partitions data by time. You can optionally implement additional partitioning based on other fields. Time-based queries only access the partitions that match the time range of the query, which leads to significant performance improvements.
9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also offers exact count-distinct and exact ranking.
10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This summarization partially pre-aggregates your data, potentially leading to significant cost savings and performance boosts.
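The time-based partitioning described above can be sketched conceptually. The following is a hypothetical in-memory toy (the `time` field and hour-granularity buckets are made up for illustration), not how Druid's actual segments work:

```python
from collections import defaultdict
from datetime import datetime

def hour_bucket(ts: datetime) -> datetime:
    # Truncate a timestamp to its hour: the partition key.
    return ts.replace(minute=0, second=0, microsecond=0)

# Each partition holds only the events whose timestamps fall in its hour.
partitions = defaultdict(list)

def ingest(event: dict) -> None:
    partitions[hour_bucket(event["time"])].append(event)

def query(start: datetime, end: datetime) -> list:
    # Partition pruning: scan only buckets that overlap [start, end),
    # then filter individual events within those buckets.
    return [
        e
        for bucket, events in partitions.items()
        if hour_bucket(start) <= bucket < end
        for e in events
        if start <= e["time"] < end
    ]
```

A query covering two hours of data touches only two buckets, no matter how many total hours are stored; that is the performance effect the feature list describes.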
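Ingest-time rollup (feature 10) amounts to collapsing raw rows that share a truncated timestamp and the same dimension values into one row of pre-aggregated metrics. A minimal sketch, with made-up fields `page` (a dimension) and `bytes` (a metric):

```python
from collections import defaultdict
from datetime import datetime

def rollup(events: list) -> dict:
    # Summary rows keyed by (truncated timestamp, dimension values),
    # each holding pre-aggregated metrics.
    summary = defaultdict(lambda: {"count": 0, "bytes": 0})
    for e in events:
        hour = e["time"].replace(minute=0, second=0, microsecond=0)
        key = (hour, e["page"])            # time bucket + dimensions
        summary[key]["count"] += 1         # events rolled into this row
        summary[key]["bytes"] += e["bytes"]
    return dict(summary)
```

If many raw events share the same hour and page, they become a single stored row, which is where the cost and performance savings come from.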
## When to use Druid

Druid is used by many companies of various sizes for many different use cases. For more information, see [Powered by Apache Druid](/druid-powered).

Druid is likely a good choice if your use case matches a few of the following:

- Insert rates are very high, but updates are less common.
- Most of your queries are aggregation and reporting queries (for example, "group by" queries). You may also have searching and scanning queries.
- You are targeting query latencies of 100ms to a few seconds.
- Your data has a time component. Druid includes optimizations and design choices specifically related to time.
- You may have more than one table, but each query hits just one big distributed table. Queries may potentially hit more than one smaller "lookup" table.
- You have high-cardinality data columns, such as URLs or user IDs, and need fast counting and ranking over them.
- You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3.

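Fast counting over high-cardinality columns is where approximate count-distinct (key feature 9) pays off. As a toy illustration of the idea, here is a hypothetical K-minimum-values sketch; Druid's real approximate algorithms (for example, via Apache DataSketches) are far more sophisticated:

```python
import hashlib

def kmv_estimate(items, k=256):
    # Map each item to a pseudo-random point in [0, 1); the set
    # de-duplicates repeated items, since equal items hash equally.
    hashes = sorted(
        {int.from_bytes(
            hashlib.blake2b(str(x).encode(), digest_size=8).digest(), "big"
        ) / 2**64
         for x in items}
    )
    if len(hashes) <= k:
        return float(len(hashes))   # fewer than k distinct values: exact
    # With n uniform points in [0, 1), the k-th smallest is about k / n,
    # so (k - 1) / (k-th smallest) estimates n using only k stored values.
    return (k - 1) / hashes[k - 1]
```

The sketch needs memory proportional to `k`, not to the number of distinct URLs or user IDs, which is the bounded-memory property the feature list mentions.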
Situations where you would likely _not_ want to use Druid include:

- You need low-latency updates of _existing_ records using a primary key. Druid supports streaming inserts, but not streaming updates. You can perform updates using background batch jobs.
- You are building an offline reporting system where query latency is not very important.
- You want to do "big" joins, meaning joining one big fact table to another big fact table, and you are okay with these queries taking a long time to complete.

## Learn more

- Try the Druid [Quickstart](../design/index.md).
- Learn more about Druid components in [Design](../design/architecture.md).
- Read about new features and other details of [Druid Releases](https://github.com/apache/druid/releases).

@@ -33,6 +33,8 @@ following order:

5. Broker
6. Coordinator (or merged Coordinator+Overlord)

For information about the latest release, see [Druid releases](https://github.com/apache/druid/releases).

\* In 0.12.0, there are protocol changes between the Kafka supervisor and the Kafka indexing task, as well as changes to the metadata formats persisted on disk. To support a rolling upgrade, all Middle Managers must therefore be upgraded before the Overlord. Note that this ordering differs from the standard upgrade order, and that it is only necessary when using the Kafka Indexing Service. If you are not using the Kafka Indexing Service, or can tolerate downtime for the Kafka supervisor, you can upgrade in any order.

## Historical