mirror of https://github.com/apache/druid.git
Docs for request logging (#12363)
* add docs for request logging
* remove stray character
* Update docs/operations/request-logging.md
* Apply suggestions from code review

Co-authored-by: TSFenwick <tsfenwick@gmail.com>
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
This commit is contained in:
parent
f2495a67d2
commit
9ed7aa33ec
@ -251,16 +251,17 @@ Note that some sensitive information may be logged if these settings are enabled
### Request Logging

All processes that can serve queries can also log the query requests they see. Broker processes can additionally log the SQL requests (both from HTTP and JDBC) they see.
For an example of setting up request logging, see [Request logging](../operations/request-logging.md).

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.type`|Choices: noop, file, emitter, slf4j, filtered, composing, switching. How to log every query request.|[required to configure request logging]|
|`druid.request.logging.type`|How to log every query request. Choices: `noop`, [`file`](#file-request-logging), [`emitter`](#emitter-request-logging), [`slf4j`](#slf4j-request-logging), [`filtered`](#filtered-request-logging), [`composing`](#composing-request-logging), [`switching`](#switching-request-logging)|`noop` (request logging disabled by default)|

Note that, you can enable sending all the HTTP requests to log by setting "org.apache.druid.jetty.RequestLog" to DEBUG level. See [Logging](../configuration/logging.md)
Note that you can enable sending all the HTTP requests to log by setting `org.apache.druid.jetty.RequestLog` to the `DEBUG` level. See [Logging](../configuration/logging.md) for more information.
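As a sketch, enabling these HTTP request logs could be done with a logger entry like the following in your `log4j2.xml`. The `Console` appender name is an assumption; substitute an appender from your own logging configuration.

```
<Logger name="org.apache.druid.jetty.RequestLog" level="debug" additivity="false">
  <AppenderRef ref="Console"/>
</Logger>
```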

#### File Request Logging
#### File request logging

Daily request logs are stored on disk.
The `file` request logger stores daily request logs on disk.

|Property|Description|Default|
|--------|-----------|-------|
@ -279,24 +280,24 @@ For SQL query request, the `native_query` field is empty. Example
2019-01-14T10:00:00.000Z 127.0.0.1 {"sqlQuery/time":100,"sqlQuery/bytes":600,"success":true,"identity":"user1"} {"query":"SELECT page, COUNT(*) AS Edits FROM wikiticker WHERE __time BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10","context":{"sqlQueryId":"c9d035a0-5ffd-4a79-a865-3ffdadbb5fdd","nativeQueryIds":"[490978e4-f5c7-4cf6-b174-346e63cf8863]"}}
```

#### Emitter Request Logging
#### Emitter request logging

Every request is emitted to some external location.
The `emitter` request logger emits every request to the external location specified in the [emitter](#enabling-metrics) configuration.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.feed`|Feed name for requests.|none|
#### SLF4J Request Logging
#### SLF4J request logging

Every request is logged via SLF4J. Native queries are serialized into JSON in the log message regardless of the SLF4J format specification. They will be logged under the class `org.apache.druid.server.log.LoggingRequestLogger`.
The `slf4j` request logger logs every request using SLF4J. It serializes native queries into JSON in the log message regardless of the SLF4J format specification. Requests are logged under the class `org.apache.druid.server.log.LoggingRequestLogger`.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.setMDC`|If MDC entries should be set in the log entry. Your logging setup still has to be configured to handle MDC to format this data|false|
|`druid.request.logging.setContextMDC`|If the druid query `context` should be added to the MDC entries. Has no effect unless `setMDC` is `true`|false|
|`druid.request.logging.setMDC`|If you want to set MDC entries within the log entry, set this value to `true`. Your logging system must be configured to support MDC in order to format this data.|false|
|`druid.request.logging.setContextMDC`|Set to `true` to add the Druid query `context` to the MDC entries. Only applies when `setMDC` is `true`.|false|

For native query, the following MDC fields are populated with `setMDC`:
For a native query, the following MDC fields are populated when `setMDC` is `true`:

|MDC field|Description|
|---------|-----------|
@ -310,31 +311,36 @@ For native query, the following MDC fields are populated with `setMDC`:
|`resultOrdering`|The ordering of results|
|`descending`|If the query is a descending query|
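Putting the properties above together, a minimal `common.runtime.properties` sketch that enables the `slf4j` logger with MDC entries might look like:

```
druid.request.logging.type=slf4j
druid.request.logging.setMDC=true
druid.request.logging.setContextMDC=true
```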

#### Filtered Request Logging
Filtered Request Logger filters requests based on a configurable query/time threshold (for native query) and sqlQuery/time threshold (for SQL query).
For native query, only request logs where query/time is above the threshold are emitted. For SQL query, only request logs where sqlQuery/time is above the threshold are emitted.
#### Filtered request logging

The `filtered` request logger filters requests based on the query type or how long a query takes to complete.
For native queries, the logger only logs requests when the `query/time` metric exceeds the threshold provided in `queryTimeThresholdMs`.
For SQL queries, it only logs requests when the `sqlQuery/time` metric exceeds the threshold provided in `sqlQueryTimeThresholdMs`.
See [Metrics](../operations/metrics.md) for more details on query metrics.

Requests that meet the threshold are logged using the request logger type set in `druid.request.logging.delegate.type`.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.queryTimeThresholdMs`|Threshold value for query/time in milliseconds.|0, i.e., no filtering|
|`druid.request.logging.sqlQueryTimeThresholdMs`|Threshold value for sqlQuery/time in milliseconds.|0, i.e., no filtering|
|`druid.request.logging.queryTimeThresholdMs`|Threshold value for the `query/time` metric in milliseconds.|0, i.e., no filtering|
|`druid.request.logging.sqlQueryTimeThresholdMs`|Threshold value for the `sqlQuery/time` metric in milliseconds.|0, i.e., no filtering|
|`druid.request.logging.mutedQueryTypes`|Query requests of these types are not logged. Query types are defined as string objects corresponding to the "queryType" value for the specified query in Druid's [native JSON query API](http://druid.apache.org/docs/latest/querying/querying). Misspelled query types will be ignored. For example, to ignore `scan` and `timeBoundary` queries, set this to `["scan", "timeBoundary"]`.|[]|
|`druid.request.logging.delegate.type`|Type of delegate request logger to log requests.|none|
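As an illustration, the following sketch logs only queries slower than one second, mutes `timeBoundary` queries, and hands matching requests to an `slf4j` delegate. The threshold values are examples only.

```
druid.request.logging.type=filtered
druid.request.logging.queryTimeThresholdMs=1000
druid.request.logging.sqlQueryTimeThresholdMs=1000
druid.request.logging.mutedQueryTypes=["timeBoundary"]
druid.request.logging.delegate.type=slf4j
```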

#### Composite Request Logging
Composite Request Logger emits request logs to multiple request loggers.
#### Composing request logging
The `composing` request logger emits request logs to multiple request loggers.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.loggerProviders`|List of request loggers for emitting request logs.|none|
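For example, assuming list-valued properties may be supplied as JSON (as Druid's property binding generally allows), a composing setup that fans out to both the `slf4j` and `file` loggers might look like the following sketch; the directory is a hypothetical value.

```
druid.request.logging.type=composing
druid.request.logging.loggerProviders=[{"type": "slf4j"}, {"type": "file", "dir": "/var/log/druid/requests"}]
```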

#### Switching Request Logging
Switching Request Logger routes native query's request logs to one request logger and SQL query's request logs to another request logger.
#### Switching request logging
The `switching` request logger routes native query request logs to one request logger and SQL query request logs to another request logger.

|Property|Description|Default|
|--------|-----------|-------|
|`druid.request.logging.nativeQueryLogger`|request logger for emitting native query's request logs.|none|
|`druid.request.logging.sqlQueryLogger`|request logger for emitting SQL query's request logs.|none|
|`druid.request.logging.nativeQueryLogger`|Request logger for emitting native query request logs.|none|
|`druid.request.logging.sqlQueryLogger`|Request logger for emitting SQL query request logs.|none|

### Audit Logging

@ -355,7 +361,7 @@ You can configure Druid processes to emit [metrics](../operations/metrics.md) re
|--------|-----------|-------|
|`druid.monitoring.emissionPeriod`|Frequency that Druid emits metrics.|`PT1M`|
|[`druid.monitoring.monitors`](#metrics-monitors)|Sets list of Druid monitors used by a process.|none (no monitors)|
|[`druid.emitter`](#metrics-emitters)|Setting this value initializes one of the emitter modules.|`noop`|
|[`druid.emitter`](#metrics-emitters)|Setting this value initializes one of the emitter modules.|`noop` (metric emission disabled by default)|

#### Metrics monitors

@ -380,7 +386,9 @@ Metric monitoring is an essential part of Druid operations. The following monit

For example, you might configure monitors on all processes for system and JVM information within `common.runtime.properties` as follows:

`druid.monitoring.monitors=["org.apache.druid.java.util.metrics.SysMonitor","org.apache.druid.java.util.metrics.JvmMonitor"]`
```
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.SysMonitor","org.apache.druid.java.util.metrics.JvmMonitor"]
```

You can override cluster-wide configuration by amending the `runtime.properties` of individual processes.

@ -388,12 +396,12 @@ You can override cluster-wide configuration by amending the `runtime.properties`

There are several emitters available:

- `noop` (default)
- log4j ([`logging`](#logging-emitter-module))
- POSTs of JSON events ([`http`](#http-emitter-module))
- [`parametrized`](#parametrized-http-emitter-module)
- [`composing`](#composing-emitter-module) to initialize multiple emitter modules
- [`graphite`](#graphite-emitter)
- `noop` (default) disables metric emission.
- [`logging`](#logging-emitter-module) emits logs using Log4j2.
- [`http`](#http-emitter-module) sends `POST` requests of JSON events.
- [`parametrized`](#parametrized-http-emitter-module) operates like the `http` emitter but fine-tunes the recipient URL based on the event feed.
- [`composing`](#composing-emitter-module) initializes multiple emitter modules.
- [`graphite`](#graphite-emitter) emits metrics to a [Graphite](https://graphiteapp.org/) Carbon service.
##### Logging Emitter Module

@ -436,11 +444,14 @@ The following properties allow the HTTP Emitter to use its own truststore config

##### Parametrized HTTP Emitter Module

`druid.emitter.parametrized.httpEmitting.*` configs correspond to the configs of HTTP Emitter Modules, see above.
Except `recipientBaseUrl`. E.g., `druid.emitter.parametrized.httpEmitting.flushMillis`,
`druid.emitter.parametrized.httpEmitting.flushCount`, `druid.emitter.parametrized.httpEmitting.ssl.trustStorePath`, etc.
The parametrized emitter takes the same configs as the [`http` emitter](#http-emitter-module) using the prefix `druid.emitter.parametrized.httpEmitting.`.
For example:
* `druid.emitter.parametrized.httpEmitting.flushMillis`
* `druid.emitter.parametrized.httpEmitting.flushCount`
* `druid.emitter.parametrized.httpEmitting.ssl.trustStorePath`
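For example, a sketch that emits to a per-feed endpoint; the URL, the `{feed}` placeholder pattern, and the flush values below are illustrative assumptions rather than recommended settings:

```
druid.emitter=parametrized
druid.emitter.parametrized.recipientBaseUrlPattern=http://example.com:8080/metrics/{feed}
druid.emitter.parametrized.httpEmitting.flushMillis=60000
druid.emitter.parametrized.httpEmitting.flushCount=500
```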

The additional configs are:
Do not specify `recipientBaseUrl` with the parametrized emitter.
Instead use `recipientBaseUrlPattern` described in the table below.

|Property|Description|Default|
|--------|-----------|-------|
@ -454,7 +465,7 @@ The additional configs are:

##### Graphite Emitter

To use graphite as emitter set `druid.emitter=graphite`. For configuration details please follow this [link](../development/extensions-contrib/graphite.md).
To use Graphite as the emitter, set `druid.emitter=graphite`. For configuration details, see the [Graphite emitter](../development/extensions-contrib/graphite.md) extension documentation.

### Metadata storage
@ -23,7 +23,7 @@ title: "Metrics"
-->

You can configure Druid to [emit metrics](../configuration/index.html#enabling-metrics) that are essential for monitoring query execution, ingestion, coordination, and so on.
You can configure Druid to [emit metrics](../configuration/index.md#enabling-metrics) that are essential for monitoring query execution, ingestion, coordination, and so on.

All Druid metrics share a common set of fields:

@ -37,9 +37,6 @@ Metrics may have additional dimensions beyond those listed above.

> Most metric values reset each emission period, as specified in `druid.monitoring.emissionPeriod`.

Available Metrics
-----------------

## Query metrics

### Broker
@ -49,18 +46,18 @@ Available Metrics
|`query/time`|Milliseconds taken to complete a query.|Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension.|< 1s|
|`query/bytes`|The total number of bytes returned to the requesting client in the query response from the broker. Other services report the total bytes for their portion of the query. |Common: `dataSource`, `type`, `interval`, `hasFilters`, `duration`, `context`, `remoteAddress`, `id`. Aggregation Queries: `numMetrics`, `numComplexMetrics`. GroupBy: `numDimensions`. TopN: `threshold`, `dimension`.| |
|`query/node/time`|Milliseconds taken to query individual historical/realtime processes.|id, status, server.|< 1s|
|`query/node/bytes`|number of bytes returned from querying individual historical/realtime processes.|id, status, server.| |
|`query/node/bytes`|Number of bytes returned from querying individual historical/realtime processes.|id, status, server.| |
|`query/node/ttfb`|Time to first byte. Milliseconds elapsed until Broker starts receiving the response from individual historical/realtime processes.|id, status, server.|< 1s|
|`query/node/backpressure`|Milliseconds that the channel to this process has spent suspended due to backpressure.|id, status, server.| |
|`query/count`|number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/segments/count`|This metric is not enabled by default. See the `QueryMetrics` Interface for reference regarding enabling this metric. Number of segments that will be touched by the query. In the broker, it makes a plan to distribute the query to realtime tasks and historicals based on a snapshot of segment distribution state. If there are some segments moved after this snapshot is created, certain historicals and realtime tasks can report those segments as missing to the broker. The broker will re-send the query to the new servers that serve those segments after move. In this case, those segments can be counted more than once in this metric.|Varies.|
|`query/count`|Number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|Number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|Number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|Number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|Number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/segments/count`|This metric is not enabled by default. See the `QueryMetrics` Interface for reference regarding enabling this metric. Number of segments that will be touched by the query. In the broker, it makes a plan to distribute the query to realtime tasks and historicals based on a snapshot of segment distribution state. If there are some segments moved after this snapshot is created, certain historicals and realtime tasks can report those segments as missing to the broker. The broker will re-send the query to the new servers that serve those segments after move. In this case, those segments can be counted more than once in this metric.|Varies.||
|`query/priority`|Assigned lane and priority, only if Laning strategy is enabled. Refer to [Laning strategies](../configuration/index.md#laning-strategies)|lane, dataSource, type|0|
|`sqlQuery/time`|Milliseconds taken to complete a SQL query.|id, nativeQueryIds, dataSource, remoteAddress, success.|< 1s|
|`sqlQuery/bytes`|number of bytes returned in SQL query response.|id, nativeQueryIds, dataSource, remoteAddress, success.| |
|`sqlQuery/bytes`|Number of bytes returned in the SQL query response.|id, nativeQueryIds, dataSource, remoteAddress, success.| |
### Historical

@ -72,11 +69,11 @@ Available Metrics
|`segment/scan/pending`|Number of segments in queue waiting to be scanned.||Close to 0|
|`query/segmentAndCache/time`|Milliseconds taken to query individual segment or hit the cache (if it is enabled on the Historical process).|id, segment.|several hundred milliseconds|
|`query/cpu/time`|Microseconds of CPU time taken to complete a query|Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension.|Varies|
|`query/count`|number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/count`|Total number of queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|Number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|Number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|Number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|Number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||
### Real-time

@ -85,11 +82,11 @@ Available Metrics
|`query/time`|Milliseconds taken to complete a query.|Common: dataSource, type, interval, hasFilters, duration, context, remoteAddress, id. Aggregation Queries: numMetrics, numComplexMetrics. GroupBy: numDimensions. TopN: threshold, dimension.|< 1s|
|`query/wait/time`|Milliseconds spent waiting for a segment to be scanned.|id, segment.|several hundred milliseconds|
|`segment/scan/pending`|Number of segments in queue waiting to be scanned.||Close to 0|
|`query/count`|number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/count`|Number of total queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/success/count`|Number of queries successfully processed|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/failed/count`|Number of failed queries|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/interrupted/count`|Number of queries interrupted due to cancellation.|This metric is only available if the QueryCountStatsMonitor module is included.||
|`query/timeout/count`|Number of timed out queries.|This metric is only available if the QueryCountStatsMonitor module is included.||

### Jetty
@ -145,9 +142,11 @@ If SQL is enabled, the Broker will emit the following metrics for SQL.
|`sqlQuery/time`|Milliseconds taken to complete a SQL.|id, nativeQueryIds, dataSource, remoteAddress, success.|< 1s|
|`sqlQuery/bytes`|number of bytes returned in SQL response.|id, nativeQueryIds, dataSource, remoteAddress, success.| |

## Ingestion Metrics (Kafka Indexing Service)
## Ingestion metrics

These metrics are applicable for the Kafka Indexing Service.
### Ingestion metrics for Kafka

These metrics apply to the [Kafka indexing service](../development/extensions-core/kafka-ingestion.md).

|Metric|Description|Dimensions|Normal Value|
|------|-----------|----------|------------|

@ -155,9 +154,9 @@ These metrics are applicable for the Kafka Indexing Service.
|`ingest/kafka/maxLag`|Max lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.|dataSource.|Greater than 0, should not be a very high number |
|`ingest/kafka/avgLag`|Average lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.|dataSource.|Greater than 0, should not be a very high number |

## Ingestion Metrics (Kinesis Indexing Service)
### Ingestion metrics for Kinesis

These metrics are applicable for the Kinesis Indexing Service.
These metrics apply to the [Kinesis indexing service](../development/extensions-core/kinesis-ingestion.md).

|Metric|Description|Dimensions|Normal Value|
|------|-----------|----------|------------|

@ -165,7 +164,7 @@ These metrics are applicable for the Kinesis Indexing Service.
|`ingest/kinesis/maxLag/time`|Max lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis across all shards. Minimum emission period for this metric is a minute.|dataSource.|Greater than 0, up to max Kinesis retention period in milliseconds |
|`ingest/kinesis/avgLag/time`|Average lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and latest sequence number in Kinesis across all shards. Minimum emission period for this metric is a minute.|dataSource.|Greater than 0, up to max Kinesis retention period in milliseconds |

## Ingestion metrics
### Other ingestion metrics

Streaming ingestion tasks and certain types of
batch ingestion emit the following metrics. These metrics are deltas for each emission period.

@ -0,0 +1,249 @@
---
id: request-logging
title: Request logging
sidebar_label: Request logging
---

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing,
~ software distributed under the License is distributed on an
~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
~ KIND, either express or implied. See the License for the
~ specific language governing permissions and limitations
~ under the License.
-->
All Apache Druid services that can serve queries can also log the query requests they process.
Request logs contain information on query metrics, including execution time and memory usage.
You can use information in the request logs to monitor query performance, determine bottlenecks, and analyze and improve slow queries.

Request logging is disabled by default.
This topic describes how to configure Druid to generate request logs to track query metrics.

## Configure request logging

To enable request logging, determine the type of request logger to use, then set the configurations specific to the request logger type.

The following request logger types are available:

- `noop`: Disables request logging, the default behavior.
- [`file`](../configuration/index.md#file-request-logging): Stores logs to disk.
- [`emitter`](../configuration/index.md#emitter-request-logging): Logs requests to an external location, which is configured through an emitter.
- [`slf4j`](../configuration/index.md#slf4j-request-logging): Logs queries via the SLF4J Java logging API.
- [`filtered`](../configuration/index.md#filtered-request-logging): Filters requests by query type or execution time before passing them to a delegate request logger.
- [`composing`](../configuration/index.md#composing-request-logging): Logs all requests to multiple request loggers.
- [`switching`](../configuration/index.md#switching-request-logging): Logs native queries and SQL queries to separate request loggers.

Define the type of request logger in `druid.request.logging.type`.
See the [Request logging configuration](../configuration/index.md#request-logging) for properties to set for each type of request logger.
Specify these properties in the `common.runtime.properties` file.
You must restart Druid for the changes to take effect.
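For example, a minimal `common.runtime.properties` sketch that enables the `file` request logger; the log directory is an illustrative assumption, so check the configuration reference for the exact property set:

```
druid.request.logging.type=file
druid.request.logging.dir=/var/log/druid/requests
```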

Druid stores the results in the Broker logs, unless the request logging type is `emitter`.
If you use emitter request logging, you must also configure metrics emission.

## Configure metrics emission

Druid includes various emitters to send metrics and alerts.
To emit query metrics, set `druid.request.logging.type` to `emitter`, and define the emitter type in the `druid.emitter` property.

You can use any of the following emitters in Druid:

- `noop`: Disables metric emission, the default behavior.
- [`logging`](../configuration/index.md#logging-emitter-module): Emits metrics to Log4j 2. See [Logging](../configuration/logging.md) to configure Log4j 2 for use with Druid.
- [`http`](../configuration/index.md#http-emitter-module): Sends HTTP `POST` requests containing the metrics in JSON format to a user-defined endpoint.
- [`parametrized`](../configuration/index.md#parametrized-http-emitter-module): Operates like the `http` emitter but fine-tunes the recipient URL based on the event feed.
- [`composing`](../configuration/index.md#composing-emitter-module): Emits metrics to multiple emitter types.
- [`graphite`](../configuration/index.md#graphite-emitter): Emits metrics to a [Graphite](https://graphiteapp.org/) Carbon service.

Specify these properties in the `common.runtime.properties` file.
See the [Metrics emitters configuration](../configuration/index.md#metrics-emitters) for properties to set for each type of metrics emitter.
You must restart Druid for the changes to take effect.

## Example

The following configuration shows how to enable request logging and post query metrics to the endpoint `http://example.com:8080/path`.
```
# Enable request logging and configure the emitter request logger
druid.request.logging.type=emitter
druid.request.logging.feed=myRequestLogFeed

# Enable metrics emission and tell Druid where to emit messages
druid.emitter=http
druid.emitter.http.recipientBaseUrl=http://example.com:8080/path

# Authenticate to the base URL, if needed
druid.emitter.http.basicAuthentication=username:password
```

The following shows an example log emitter output:
```
[
  {
    "feed": "metrics",
    "timestamp": "2022-01-06T20:32:06.628Z",
    "service": "druid/broker",
    "host": "localhost:8082",
    "version": "2022.01.0-iap-SNAPSHOT",
    "metric": "sqlQuery/bytes",
    "value": 9351,
    "dataSource": "[wikipedia]",
    "id": "56e8317b-31cc-443d-b109-47f51b21d4c3",
    "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]",
    "remoteAddress": "127.0.0.1",
    "success": "true"
  },
  {
    "feed": "myRequestLogFeed",
    "timestamp": "2022-01-06T20:32:06.585Z",
    "remoteAddr": "127.0.0.1",
    "service": "druid/broker",
    "sqlQueryContext":
    {
      "useApproximateCountDistinct": false,
      "sqlQueryId": "56e8317b-31cc-443d-b109-47f51b21d4c3",
      "useApproximateTopN": false,
      "useCache": false,
      "sqlOuterLimit": 101,
      "populateCache": false,
      "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]"
    },
    "queryStats":
    {
      "sqlQuery/time": 43,
      "sqlQuery/bytes": 9351,
      "success": true,
      "context":
      {
        "useApproximateCountDistinct": false,
        "sqlQueryId": "56e8317b-31cc-443d-b109-47f51b21d4c3",
        "useApproximateTopN": false,
        "useCache": false,
        "sqlOuterLimit": 101,
        "populateCache": false,
        "nativeQueryIds": "[2b9cbced-11fc-4d78-a58c-c42863dff3c8]"
      },
      "identity": "allowAll"
    },
    "query": null,
    "host": "localhost:8082",
    "sql": "SELECT * FROM wikipedia WHERE cityName = 'Buenos Aires'"
  },
  {
    "feed": "myRequestLogFeed",
    "timestamp": "2022-01-06T20:32:07.652Z",
    "remoteAddr": "",
    "service": "druid/broker",
    "sqlQueryContext": {},
    "queryStats":
    {
      "query/time": 16,
      "query/bytes": -1,
      "success": true,
      "identity": "allowAll"
    },
    "query":
    {
      "queryType": "scan",
      "dataSource":
      {
        "type": "table",
        "name": "wikipedia"
      },
      "intervals":
      {
        "type": "intervals",
        "intervals":
        [
          "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
        ]
      },
      "virtualColumns":
      [
        {
          "type": "expression",
          "name": "v0",
          "expression": "'Buenos Aires'",
          "outputType": "STRING"
        }
      ],
      "resultFormat": "compactedList",
      "batchSize": 20480,
      "limit": 101,
      "filter":
      {
        "type": "selector",
        "dimension": "cityName",
        "value": "Buenos Aires",
        "extractionFn": null
      },
      "columns":
      [
        "__time",
        "added",
        "channel",
        "comment",
        "commentLength",
        "countryIsoCode",
        "countryName",
        "deleted",
        "delta",
        "deltaBucket",
        "diffUrl",
        "flags",
        "isAnonymous",
        "isMinor",
        "isNew",
        "isRobot",
        "isUnpatrolled",
        "metroCode",
        "namespace",
        "page",
        "regionIsoCode",
        "regionName",
        "user",
        "v0"
      ],
      "legacy": false,
      "context":
      {
        "populateCache": false,
        "queryId": "62e3d373-6e50-41b4-873b-1e56347c2950",
        "sqlOuterLimit": 101,
        "sqlQueryId": "cbb3d519-aee9-4566-8920-dbbeab6269f5",
        "useApproximateCountDistinct": false,
        "useApproximateTopN": false,
        "useCache": false
      },
      "descending": false,
      "granularity":
      {
        "type": "all"
      }
    },
    "host": "localhost:8082",
    "sql": null
  },
  ...
]
```
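Since the `http` emitter delivers a JSON array like the one above, a downstream collector can route metric events and request-log events by their `feed` field. The following is a minimal Python sketch; the payload is a trimmed, hypothetical sample shaped like the output above.

```python
import json

# Trimmed, hypothetical sample shaped like the emitter output above.
payload = '''
[
  {"feed": "metrics", "metric": "sqlQuery/bytes", "value": 9351},
  {"feed": "myRequestLogFeed", "queryStats": {"sqlQuery/time": 43, "success": true}}
]
'''

events = json.loads(payload)

# Route events by feed name: "metrics" carries metric points,
# the request-log feed carries per-query stats.
metric_events = [e for e in events if e.get("feed") == "metrics"]
request_logs = [e for e in events if e.get("feed") == "myRequestLogFeed"]

# Flag SQL queries slower than a chosen threshold (10 ms here, arbitrary).
slow = [e for e in request_logs if e["queryStats"].get("sqlQuery/time", 0) > 10]
print(len(metric_events), len(slow))  # prints: 1 1
```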

## Learn more

See the following topics for more information.
* [Query metrics](metrics.md#query-metrics)
* [Request logging configuration](../configuration/index.md#request-logging)
* [Metrics emitters configuration](../configuration/index.md#metrics-emitters)
@ -178,12 +178,19 @@
        "operations/clean-metadata-store"
      ]
    },
    {
      "type": "subcategory",
      "label": "Monitoring",
      "ids": [
        "operations/request-logging",
        "operations/metrics",
        "operations/alerts"
      ]
    },
    "operations/api-reference",
    "operations/high-availability",
    "operations/rolling-updates",
    "operations/rule-configuration",
    "operations/metrics",
    "operations/alerts",
    "operations/other-hadoop",
    {
      "type": "subcategory",