mirror of https://github.com/apache/druid.git
Docs: Fix some typos. (#14663)
---------
Co-authored-by: slfan1989 <louj1988@@>
parent e99bab2fd3
commit d69edb7723
@@ -29,7 +29,7 @@ sidebar_label: "DatasourceMetadata"

 Data Source Metadata queries return metadata information for a dataSource. These queries return information about:

-* The timestamp of latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
+* The timestamp of the latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.

 The grammar for these queries is:
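For context, the grammar referenced in the trailing context line is the native `dataSourceMetadata` query. A minimal sketch, with `sample_datasource` standing in for a real datasource name:

```json
{
  "queryType": "dataSourceMetadata",
  "dataSource": "sample_datasource"
}
```

The result carries a `maxIngestedEventTime` field, which is the timestamp described by the corrected bullet.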
@@ -75,7 +75,7 @@ stored on this tier.

 ## Supporting high query concurrency

-Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.
+Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.

 Druid internally stores requests to scan segments in a priority queue. If a given query requires scanning
 more segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently
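As a side note on the priority queue mentioned in the context lines, a query's position in that queue can be influenced through the `priority` parameter of the native query context. A minimal sketch of a query that lowers its own priority; the query type, datasource, and interval are illustrative only:

```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "intervals": ["2023-01-01/2023-02-01"],
  "granularity": "all",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": { "priority": -1 }
}
```

Values below the default priority of 0 deprioritize the query, which can help keep a burst of expensive queries from starving cheaper concurrent ones.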
@@ -57,7 +57,7 @@ are designed to be lightweight and complete very quickly. This means that for mo
 more complex visualizations, multiple Druid queries may be required.

 Even though queries are typically made to Brokers or Routers, they can also be accepted by
-[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md)) that are running
+[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md) that are running
 stream ingestion tasks. This may be valuable if you want to query results for specific segments that are served by
 specific processes.
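To make the direct-to-process path concrete: the same native JSON queries accepted by a Broker can be POSTed to the `/druid/v2/` endpoint of a Historical or of a Peon running a streaming ingestion task. A minimal sketch of a `segmentMetadata` query that could be sent that way; the datasource and interval are placeholders:

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2023-01-01/2023-02-01"]
}
```

Such a process answers only for the segments it is actually serving, which is what makes per-process querying useful for inspecting specific segments.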
@@ -159,7 +159,7 @@ If any part of a dimension value contains the value specified in this search query

 ### `fragment`

-If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
+If any part of a dimension value contains all the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:

 ```json
 {
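The hunk's trailing context cuts off at the start of the JSON example. For reference, the `fragment` search query spec in the surrounding documentation looks roughly like this, with placeholder values:

```json
{
  "type": "fragment",
  "case_sensitive": false,
  "values": ["fragment1", "fragment2"]
}
```

By default the match ignores case, as the corrected sentence notes; setting `case_sensitive` to true requires exact-case fragments.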