Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Most often, Druid powers use cases where real-time ingestion, fast query performance, and high uptime are important.
Druid is commonly used as the database backend for GUIs of analytical applications, or for highly-concurrent APIs that need fast aggregations. Druid works best with event-oriented data.
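For example, a dashboard tile that counts events by dimension typically maps to a single Druid SQL aggregation. The sketch below is illustrative only: the `wikipedia` datasource and its `channel` and `added` columns come from Druid's tutorial data and stand in for whatever event data you ingest.

```sql
-- Count edits per channel over the last hour: a typical low-latency
-- "slice and dice" aggregation over event-oriented data.
SELECT
  "channel",
  COUNT(*) AS "edits",
  SUM("added") AS "lines_added"
FROM "wikipedia"
WHERE "__time" >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY "channel"
ORDER BY "edits" DESC
LIMIT 10
```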
If you are experimenting with a new use case for Druid or have questions about Druid's capabilities and features, join the [Apache Druid Slack](http://apachedruidworkspace.slack.com/) channel. There, you can connect with Druid experts, ask questions, and get help in real time.
1. **Columnar storage format.** Druid uses column-oriented storage. This means it only loads the exact columns needed for a particular query. This greatly improves speed for queries that retrieve only a few columns. Additionally, to support fast scans and aggregations, Druid optimizes column storage for each column according to its data type.
2. **Scalable distributed system.** Typical Druid deployments span clusters ranging from tens to hundreds of servers. Druid can ingest data at the rate of millions of records per second while retaining trillions of records and maintaining query latencies ranging from sub-second to a few seconds.
3. **Massively parallel processing.** Druid can process each query in parallel across the entire cluster.
4. **Real-time or batch ingestion.** Druid can ingest data either in real time or in batches. Ingested data is immediately available for querying. See the ingestion sketch after this list for an example of a batch load.
5. **Self-healing, self-balancing, easy to operate.** As an operator, you add servers to scale out or remove servers to scale down. The Druid cluster re-balances itself automatically in the background without any downtime. If a Druid server fails, the system automatically routes data around the damage until the server can be replaced. Druid is designed to run continuously without planned downtime for any reason. This is true for configuration changes and software updates.
6. **Cloud-native, fault-tolerant architecture that won't lose data.** After ingestion, Druid safely stores a copy of your data in [deep storage](architecture.md#deep-storage). Deep storage is typically cloud storage, HDFS, or a shared filesystem. You can recover your data from deep storage even in the unlikely case that all Druid servers fail. For a limited failure that affects only a few Druid servers, replication ensures that queries are still possible during system recoveries.
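To make the batch side of item 4 concrete, here is a hedged sketch of SQL-based batch ingestion using the multi-stage query task engine. The input URI, column list, and datasource name follow Druid's tutorial data and are only illustrative; your own ingestion will use different sources and schemas.

```sql
-- Batch-load the tutorial Wikipedia file into a "wikipedia" datasource.
-- Requires the multi-stage query task engine; names and URI are examples only.
REPLACE INTO "wikipedia" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "channel",
  "page",
  "added"
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://druid.apache.org/data/wikipedia.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "channel", "type": "string"}, {"name": "page", "type": "string"}, {"name": "added", "type": "long"}]'
  )
)
PARTITIONED BY DAY
```

Once the ingestion task completes, the new rows are queryable with ordinary Druid SQL, as in the aggregation example above.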
- You need low-latency updates of _existing_ records using a primary key. Druid supports streaming inserts, but not streaming updates. You can perform updates using background batch jobs.