Update screenshots for Druid console doc (#12593)

* druid console doc updates

* remove extra image

* Apply suggestions from code review

Co-authored-by: Katya Macedo  <38017980+ektravel@users.noreply.github.com>

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Charles Smith <techdocsmith@gmail.com>

* updated screenshot labels

Co-authored-by: Katya Macedo  <38017980+ektravel@users.noreply.github.com>
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
Victoria Lim 2022-06-15 16:42:20 -07:00 committed by GitHub
parent 70f3b13621
commit 94564b6ce6
GPG Key ID: 4AEE18F83AFDEB23
22 changed files with 62 additions and 45 deletions

13 binary screenshot files changed (contents not shown). Sizes, before → after: 62 → 72 KiB, 79 → 97 KiB, 274 → 270 KiB, 111 → 110 KiB, 97 → 81 KiB, 133 → 77 KiB, 76 → 134 KiB, 81 → 91 KiB, 246 → 191 KiB, 90 → 92 KiB, 134 → 129 KiB, 92 → 104 KiB, 55 → 70 KiB.

View File

@@ -78,7 +78,7 @@ caller. End users typically query Brokers rather than querying Historicals or Mi
 Overlords, and Coordinators. They are optional since you can also simply contact the Druid Brokers, Overlords, and
 Coordinators directly.
-The Router also runs the [Druid Console](../operations/druid-console.md), a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
+The Router also runs the [Druid console](../operations/druid-console.md), a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
 ### Data server

View File

@@ -24,13 +24,13 @@ title: "Router Process"
 > The Router is an optional and [experimental](../development/experimental.md) feature due to the fact that its recommended place in the Druid cluster architecture is still evolving.
-> However, it has been battle-tested in production, and it hosts the powerful [Druid Console](../operations/druid-console.md), so you should feel safe deploying it.
+> However, it has been battle-tested in production, and it hosts the powerful [Druid console](../operations/druid-console.md), so you should feel safe deploying it.
 The Apache Druid Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.md) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries for more important data are not impacted by queries for less important data.
 For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range.
-In addition to query routing, the Router also runs the [Druid Console](../operations/druid-console.md), a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
+In addition to query routing, the Router also runs the [Druid console](../operations/druid-console.md), a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
 ### Configuration

View File

@@ -25,7 +25,7 @@ title: "Druid pac4j based Security extension"
 Apache Druid Extension to enable [OpenID Connect](https://openid.net/connect/) based Authentication for Druid Processes using [pac4j](https://github.com/pac4j/pac4j) as the underlying client library.
 This can be used with any authentication server that supports same e.g. [Okta](https://developer.okta.com/).
-This extension should only be used at the router node to enable a group of users in existing authentication server to interact with Druid cluster, using the [Web Console](../../operations/druid-console.md). This extension does not support JDBC client authentication.
+This extension should only be used at the router node to enable a group of users in existing authentication server to interact with Druid cluster, using the [Druid console](../../operations/druid-console.md). This extension does not support JDBC client authentication.
 ## Configuration

View File

@@ -1,6 +1,6 @@
 ---
 id: druid-console
-title: "Web console"
+title: "Druid console"
 ---
 <!--
@@ -22,17 +22,15 @@ title: "Web console"
 ~ under the License.
 -->
-Druid includes a console for managing datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. You can also run SQL and native Druid queries in the console.
-The Druid Console is hosted by the [Router](../design/router.md) process.
-The following cluster settings must be enabled, as they are by default:
-- the Router's [management proxy](../design/router.md#enabling-the-management-proxy) must be enabled.
-- the Broker processes in the cluster must have [Druid SQL](../querying/sql.md) enabled.
-You can access the Druid console at:
+Druid includes a web console for loading data, managing datasources and tasks, and viewing server status and segment information.
+You can also run SQL and native Druid queries in the console.
+Enable the following cluster settings to use the Druid console. Note that these settings are enabled by default.
+- Enable the Router's [management proxy](../design/router.md#enabling-the-management-proxy).
+- Enable [Druid SQL](../configuration/index.md#sql) for the Broker processes in the cluster.
+The [Router](../design/router.md) service hosts the Druid console.
+Access the Druid console at the following address:
 ```
 http://<ROUTER_IP>:<ROUTER_PORT>
 ```
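As a quick sanity check before opening the console in a browser, you can confirm the Router is reachable. A minimal sketch, assuming the quickstart defaults (`localhost`, port `8888`) in place of `<ROUTER_IP>` and `<ROUTER_PORT>`; the `curl` call is commented out because it requires a running cluster:

```shell
# Hypothetical quickstart values standing in for <ROUTER_IP> and <ROUTER_PORT>.
ROUTER_IP=localhost
ROUTER_PORT=8888
CONSOLE_URL="http://${ROUTER_IP}:${ROUTER_PORT}"
# Against a live cluster, every Druid process (including the Router) serves
# a /status endpoint you could probe:
#   curl -s "${CONSOLE_URL}/status"
echo "Druid console URL: ${CONSOLE_URL}"
```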
@@ -41,36 +39,50 @@ http://<ROUTER_IP>:<ROUTER_PORT>
 will show console users the files that the underlying user has permissions to. In general, avoid running Druid as
 root user. Consider creating a dedicated user account for running Druid.
-Below is a description of the high-level features and functionality of the Druid Console
+This topic presents the high-level features and functionality of the Druid console.
 ## Home
-The home view provides a high level overview of the cluster.
+The **Home** view provides a high-level overview of the cluster.
 Each card is clickable and links to the appropriate view.
-The legacy menu allows you to go to the [legacy coordinator and overlord consoles](./management-uis.md#legacy-consoles) should you need them.
+The **Home** view displays the following cards:
+* __Status__. Click this card for information on the Druid version and any extensions loaded on the cluster.
+* [Datasources](#datasources)
+* [Segments](#segments)
+* [Supervisors](#supervisors-and-tasks)
+* [Tasks](#supervisors-and-tasks)
+* [Services](#services)
+* [Lookups](#lookups)
+You can access the [data loader](#data-loader) and [lookups view](#lookups) from the top-level navigation of the **Home** view.
 ![home-view](../assets/web-console-01-home-view.png "home view")
 ## Data loader
-The data loader view allows you to load data by building an ingestion spec with a step-by-step wizard.
+You can use the data loader to build an ingestion spec with a step-by-step wizard.
 ![data-loader-1](../assets/web-console-02-data-loader-1.png)
-After selecting the location of your data just follow the series for steps that will show you incremental previews of the data as it will be ingested.
-After filling in the required details on every step you can navigate to the next step by clicking the `Next` button.
+After selecting the location of your data, follow the series of steps displaying incremental previews of the data as it is ingested.
+After filling in the required details on every step you can navigate to the next step by clicking **Next**.
 You can also freely navigate between the steps from the top navigation.
-Navigating with the top navigation will leave the underlying spec unmodified while clicking the `Next` button will attempt to fill in the subsequent steps with appropriate defaults.
+Navigating with the top navigation leaves the underlying spec unmodified while clicking **Next** attempts to fill in the subsequent steps with appropriate defaults.
 ![data-loader-2](../assets/web-console-03-data-loader-2.png)
 ## Datasources
-The datasources view shows all the currently enabled datasources.
-From this view, you can see the sizes and availability of the different datasources.
-You can edit the retention rules, configure automatic compaction, and drop data.
-Like any view that is powered by a DruidSQL query you can click `View SQL query for table` from the `...` menu to run the underlying SQL query directly.
+The **Datasources** view shows all the datasources currently loaded on the cluster, as well as their sizes and availability.
+From the **Datasources** view, you can edit the retention rules, configure automatic compaction, and drop data in a datasource.
+A datasource is partitioned into one or more segments organized by time chunks.
+To display a timeline of segments, toggle the option for **Show segment timeline**.
+Like any view that is powered by a Druid SQL query, you can click **View SQL query for table** from the ellipsis menu to run the underlying SQL query directly.
 ![datasources](../assets/web-console-04-datasources.png)
@@ -80,17 +92,20 @@ You can view and edit retention rules to determine the general availability of a
 ## Segments
-The segment view shows all the segments in the cluster.
+The **Segments** view shows all the [segments](../design/segments.md) in the cluster.
 Each segment has a detail view that provides more information.
 The Segment ID is also conveniently broken down into Datasource, Start, End, Version, and Partition columns for ease of filtering and sorting.
 ![segments](../assets/web-console-06-segments.png)
-## Tasks and supervisors
+## Supervisors and tasks
 From this view, you can check the status of existing supervisors as well as suspend, resume, and reset them.
-The tasks table allows you see the currently running and recently completed tasks.
-To make managing a lot of tasks more accessible, you can group the tasks by their `Type`, `Datasource`, or `Status` to make navigation easier.
+The supervisor oversees the state of the indexing tasks to coordinate handoffs, manage failures, and ensure that the scalability and replication requirements are maintained.
+The tasks table allows you to see the currently running and recently completed tasks.
+To navigate your tasks more easily, you can group them by their **Type**, **Datasource**, or **Status**.
+Submit a task manually by clicking the ellipsis icon and selecting **Submit JSON task**.
 ![supervisors](../assets/web-console-07-supervisors.png)
@@ -102,27 +117,29 @@ Click on the magnifying glass for any task to see more detail about it.
 ![tasks-status](../assets/web-console-09-task-status.png)
-## Servers
-The servers tab lets you see the current status of the nodes making up your cluster.
+## Services
+The **Services** view lets you see the current status of the nodes making up your cluster.
 You can group the nodes by type or by tier to get meaningful summary statistics.
 ![servers](../assets/web-console-10-servers.png)
 ## Query
-The query view lets you issue [DruidSQL](../querying/sql.md) queries and display the results as a table.
-The view will attempt to infer your query and let you modify via contextual actions such as adding filters and changing the sort order when possible.
+The **Query** view lets you issue [Druid SQL](../querying/sql.md) queries and display the results as a table.
+The view will attempt to infer your query and let you modify the query via contextual actions such as adding filters and changing the sort order when possible.
+From the ellipsis menu beside **Run**, you can view your query history, see the native query translation for a given Druid SQL query, and set the [query context](../querying/query-context.md).
 ![query-sql](../assets/web-console-11-query-sql.png)
-The query view can also issue queries in Druid's [native query format](../querying/querying.md), which is JSON over HTTP.
+You can also use the query editor to issue queries in Druid's [native query format](../querying/querying.md), which is JSON over HTTP.
+To send a native Druid query, you must start your query with `{` and format it as JSON.
 ![query-rune](../assets/web-console-12-query-rune.png)
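The same Druid SQL that the **Query** view runs can also be submitted over HTTP to the Router's `/druid/v2/sql` endpoint. A sketch, assuming a local quickstart cluster on port `8888` and a hypothetical `wikipedia` datasource; the `curl` call is commented out because it needs a running cluster:

```shell
# Build a Druid SQL payload; "wikipedia" is a hypothetical example datasource.
SQL_PAYLOAD='{"query": "SELECT COUNT(*) AS num_rows FROM wikipedia"}'
# Against a running cluster, POST the payload to the Router's SQL endpoint:
#   curl -X POST -H 'Content-Type: application/json' \
#     -d "$SQL_PAYLOAD" http://localhost:8888/druid/v2/sql
echo "$SQL_PAYLOAD"
```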
 ## Lookups
-You can create and edit query time lookups via the lookup view.
+Access the **Lookups** view from the **Lookups** card in the home view or by clicking on the gear icon in the upper right corner.
+Here you can create and edit query time [lookups](../querying/lookups.md).
 ![lookups](../assets/web-console-13-lookups.png)

View File

@@ -37,7 +37,7 @@ The following recommendations apply to the Druid cluster setup:
 > **WARNING!** \
 Druid administrators have the same OS permissions as the Unix user account running Druid. See [Authentication and authorization model](security-user-auth.md#authentication-and-authorization-model). If the Druid process is running under the OS root user account, then Druid administrators can read or write all files that the root account has access to, including sensitive files such as `/etc/passwd`.
 * Enable authentication to the Druid cluster for production environments and other environments that can be accessed by untrusted networks.
-* Enable authorization and do not expose the Druid Console without authorization enabled. If authorization is not enabled, any user that has access to the web console has the same privileges as the operating system user that runs the Druid Console process.
+* Enable authorization and do not expose the Druid console without authorization enabled. If authorization is not enabled, any user that has access to the web console has the same privileges as the operating system user that runs the Druid console process.
 * Grant users the minimum permissions necessary to perform their functions. For instance, do not allow users who only need to query data to write to data sources or view state.
 * Do not provide plain-text passwords for production systems in configuration specs. For example, sensitive properties should not be in the `consumerProperties` field of `KafkaSupervisorIngestionSpec`. See [Environment variable dynamic config provider](./dynamic-config-provider.md#environment-variable-dynamic-config-provider) for more information.
 * Disable JavaScript, as noted in the [Security section](https://druid.apache.org/docs/latest/development/javascript.html#security) of the JavaScript guide.
@@ -51,7 +51,7 @@ The following recommendations apply to the network where Druid runs:
 * When possible, use firewall and other network layer filtering to only expose Druid services and ports specifically required for your use case. For example, only expose Broker ports to downstream applications that execute queries. You can limit access to a specific IP address or IP range to further tighten and enhance security.
 The following recommendation applies to Druid's authorization and authentication model:
-* Only grant `WRITE` permissions to any `DATASOURCE` to trusted users. Druid's trust model assumes those users have the same privileges as the operating system user that runs the Druid Console process. Additionally, users with `WRITE` permissions can make changes to datasources and they have access to both task and supervisor update (POST) APIs which may affect ingestion.
+* Only grant `WRITE` permissions to any `DATASOURCE` to trusted users. Druid's trust model assumes those users have the same privileges as the operating system user that runs the Druid console process. Additionally, users with `WRITE` permissions can make changes to datasources and they have access to both task and supervisor update (POST) APIs which may affect ingestion.
 * Only grant `STATE READ`, `STATE WRITE`, `CONFIG WRITE`, and `DATASOURCE WRITE` permissions to highly-trusted users. These permissions allow users to access resources on behalf of the Druid server process regardless of the datasource.
 * If your Druid client application allows less-trusted users to control the input source or firehose of an ingestion task, validate the URLs from the users. It is possible to point unchecked URLs to other locations and resources within your network or local file system.

View File

@@ -47,7 +47,7 @@ bin/post-index-task --file quickstart/tutorial/compaction-init-index.json --url
 > Please note that `maxRowsPerSegment` in the ingestion spec is set to 1000. This is to generate multiple segments per hour and _NOT_ recommended in production.
 > It's 5000000 by default and may need to be adjusted to make your segments optimized.
-After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to see the new datasource in the Druid Console.
+After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to see the new datasource in the Druid console.
 ![compaction-tutorial datasource](../assets/tutorial-compaction-01.png "compaction-tutorial datasource")
@@ -104,13 +104,13 @@ After the task finishes, refresh the [segments view](http://localhost:8888/unifi
 The original 51 segments will eventually be marked as "unused" by the Coordinator and removed, with the new compacted segments remaining.
-By default, the Druid Coordinator will not mark segments as unused until the Coordinator process has been up for at least 15 minutes, so you may see the old segment set and the new compacted set at the same time in the Druid Console, with 75 total segments:
+By default, the Druid Coordinator will not mark segments as unused until the Coordinator process has been up for at least 15 minutes, so you may see the old segment set and the new compacted set at the same time in the Druid console, with 75 total segments:
 ![Compacted segments intermediate state 1](../assets/tutorial-compaction-03.png "Compacted segments intermediate state 1")
 ![Compacted segments intermediate state 2](../assets/tutorial-compaction-04.png "Compacted segments intermediate state 2")
-The new compacted segments have a more recent version than the original segments, so even when both sets of segments are shown in the Druid Console, queries will only read from the new compacted segments.
+The new compacted segments have a more recent version than the original segments, so even when both sets of segments are shown in the Druid console, queries will only read from the new compacted segments.
 Let's try running a COUNT(*) on `compaction-tutorial` again, where the row count should still be 39,244:

View File

@@ -254,7 +254,7 @@ If the supervisor was successfully created, you will get a response containing t
 For more details about what's going on here, check out the
 [Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.md).
-You can view the current supervisors and tasks in the Druid Console: [http://localhost:8888/unified-console.html#tasks](http://localhost:8888/unified-console.html#tasks).
+You can view the current supervisors and tasks in the Druid console: [http://localhost:8888/unified-console.html#tasks](http://localhost:8888/unified-console.html#tasks).
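Besides the console view, supervisors and tasks can be listed through the Overlord APIs, which the Router proxies when its management proxy is enabled. A sketch under the same local quickstart assumptions; the `curl` lines are commented out because they require a running cluster:

```shell
# Router address for a local quickstart cluster (assumption).
ROUTER=http://localhost:8888
# List supervisor IDs (proxied through the Router to the Overlord):
#   curl -s "${ROUTER}/druid/indexer/v1/supervisor"
# List recently completed tasks:
#   curl -s "${ROUTER}/druid/indexer/v1/completeTasks"
echo "Supervisor API: ${ROUTER}/druid/indexer/v1/supervisor"
```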
 ## Querying your data

View File

@@ -147,7 +147,7 @@ performance issues. For more information, see [Native queries](../querying/query
 9. Finally, click `...` and **Edit context** to see how you can add additional parameters controlling the execution of the query execution. In the field, enter query context options as JSON key-value pairs, as described in [Context flags](../querying/query-context.md).
-That's it! We've built a simple query using some of the query builder features built into the Druid Console. The following
+That's it! We've built a simple query using some of the query builder features built into the Druid console. The following
 sections provide a few more example queries you can try. Also, see [Other ways to invoke SQL queries](#other-ways-to-invoke-sql-queries) to learn how
 to run Druid SQL from the command line or over HTTP.

View File

@@ -41,7 +41,7 @@ The ingestion spec can be found at `quickstart/tutorial/retention-index.json`. L
 bin/post-index-task --file quickstart/tutorial/retention-index.json --url http://localhost:8081
 ```
-After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to access the Druid Console's datasource view.
+After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to access the Druid console's datasource view.
 This view shows the available datasources and a summary of the retention rules for each datasource:
@@ -85,7 +85,7 @@ Now click `Save`. You can see the new rules in the datasources view:
 ![New rules](../assets/tutorial-retention-05.png "New rules")
-Give the cluster a few minutes to apply the rule change, and go to the [segments view](http://localhost:8888/unified-console.html#segments) in the Druid Console.
+Give the cluster a few minutes to apply the rule change, and go to the [segments view](http://localhost:8888/unified-console.html#segments) in the Druid console.
 The segments for the first 12 hours of 2015-09-12 are now gone:
 ![New segments](../assets/tutorial-retention-06.png "New segments")