mirror of https://github.com/apache/druid.git
added titles, since the URL is otherwise the only indication of which page has been selected from the left-side nav menu
parent f36d72642d
commit 0b7664f639
@@ -1,6 +1,8 @@
---
layout: doc_page
---

# Tutorial: A First Look at Druid
Greetings! This tutorial will help clarify some core Druid concepts. We will use a realtime dataset and issue some basic Druid queries. If you are ready to explore Druid, and learn a thing or two, read on!

About the data
@@ -1,6 +1,8 @@
---
layout: doc_page
---

# Tutorial: All About Queries
Hello! This tutorial is meant to provide a more in-depth look into Druid queries. The tutorial is somewhat incomplete right now but we hope to add more content to it in the near future.

Setup
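The queries tutorial above revolves around issuing Druid queries over HTTP. As a minimal sketch (not taken from the diff itself), the following assumes a broker listening at localhost:8082 and a hypothetical `wikipedia` datasource; Druid brokers accept JSON queries POSTed to the `/druid/v2/` endpoint.

```python
# Minimal sketch: POST a JSON timeseries query to a Druid broker.
# The broker address, datasource name, and interval are illustrative assumptions.
import json
import urllib.request

BROKER = "http://localhost:8082"  # assumed broker host:port; adjust for your cluster

query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",                # hypothetical datasource
    "granularity": "all",
    "intervals": ["2013-01-01/2014-01-01"],   # hypothetical time range
    "aggregations": [{"type": "count", "name": "rows"}],
}

request = urllib.request.Request(
    BROKER + "/druid/v2/",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    # The broker returns a JSON array of result rows, one per granularity bucket.
    print(json.dumps(json.loads(response.read()), indent=2))
```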
@@ -1,6 +1,8 @@
---
layout: doc_page
---

# Tutorial: Loading Your Data (Part 1)
In our last [tutorial](Tutorial%3A-The-Druid-Cluster.html), we set up a complete Druid cluster. We created all the Druid dependencies and loaded some batched data. Druid shards data into self-contained chunks known as [segments](Segments.html). Segments are the fundamental unit of storage in Druid and all Druid nodes only understand segments.

In this tutorial, we will learn about batch ingestion (as opposed to real-time ingestion) and how to create segments using the final piece of the Druid Cluster, the [indexing service](Indexing-Service.html). The indexing service is a standalone service that accepts [tasks](Tasks.html) in the form of POST requests. The output of most tasks is segments.
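The paragraph above describes the indexing service as a standalone service that accepts tasks as POST requests. As a hedged sketch of that interaction (the service address and the task spec file name are assumptions, not part of the diff), a task can be submitted to the `/druid/indexer/v1/task` endpoint:

```python
# Minimal sketch: submit an ingestion task to the Druid indexing service.
# The service address and the task spec file name are illustrative assumptions.
import json
import urllib.request

INDEXING_SERVICE = "http://localhost:8090"  # assumed overlord host:port

# Hypothetical task spec; in practice this JSON describes the datasource,
# the input data location, and how to parse it.
with open("wikipedia_index_task.json", "rb") as f:
    task_spec = f.read()

request = urllib.request.Request(
    INDEXING_SERVICE + "/druid/indexer/v1/task",
    data=task_spec,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    # On success the service responds with the id of the newly created task,
    # which can then be used to poll the task's status.
    print(json.loads(response.read()))
```

When such a task completes, its output segments are handed off to the rest of the cluster, which is what the "output of most tasks is segments" statement above refers to.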
@@ -1,6 +1,8 @@
---
layout: doc_page
---

# Tutorial: Loading Your Data (Part 2)
In this tutorial we will cover more advanced/real-world ingestion topics.

Druid can ingest streaming or batch data. Streaming data is ingested via the real-time node, and batch data is ingested via the Hadoop batch indexer. Druid also has a standalone ingestion service called the [indexing service](Indexing-Service.html).
@@ -1,6 +1,8 @@
---
layout: doc_page
---

# Tutorial: The Druid Cluster
Welcome back! In our first [tutorial](Tutorial%3A-A-First-Look-at-Druid.html), we introduced you to the most basic Druid setup: a single realtime node. We streamed in some data and queried it. Realtime nodes collect very recent data and periodically hand that data off to the rest of the Druid cluster. Some questions about the architecture naturally come to mind. What does the rest of the Druid cluster look like? How does Druid load available static data?

This tutorial will hopefully answer these questions!
@@ -2,6 +2,8 @@
layout: doc_page
---

# About Druid

Druid is an open-source analytics data store designed for real-time exploratory queries on large-scale data sets (hundreds of billions of entries, hundreds of TB of data). Druid provides cost-effective, always-on realtime data ingestion and arbitrary data exploration.

- Try out Druid with our Getting Started [Tutorial](./Tutorial%3A-A-First-Look-at-Druid.html)