Docs: Fix some typos. (#14663)

---------

Co-authored-by: slfan1989 <louj1988@@>
slfan1989 2023-07-26 23:54:18 +08:00 committed by GitHub
parent e99bab2fd3
commit d69edb7723
4 changed files with 4 additions and 4 deletions


@@ -29,7 +29,7 @@ sidebar_label: "DatasourceMetadata"
 Data Source Metadata queries return metadata information for a dataSource. These queries return information about:
-* The timestamp of latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
+* The timestamp of the latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
 The grammar for these queries is:
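For context on the hunk above: the Data Source Metadata query grammar it refers to is a small JSON object. The sketch below is a hedged illustration assembled from the query type named in the doc; the `dataSource` name is a hypothetical placeholder, not taken from this patch.

```json
{
    "queryType": "dataSourceMetadata",
    "dataSource": "sample_datasource"
}
```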


@@ -75,7 +75,7 @@ stored on this tier.
 ## Supporting high query concurrency
-Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.
+Druid uses a [segment](../design/segments.md) as its fundamental unit of computation. Processes scan segments in parallel and a given process can scan `druid.processing.numThreads` concurrently. You can add more cores to a cluster to process more data in parallel and increase performance. Size your Druid segments such that any computation over any given segment should complete in at most 500ms. Use the [`query/segment/time`](../operations/metrics.md#historical) metric to monitor computation times.
 Druid internally stores requests to scan segments in a priority queue. If a given query requires scanning
 more segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently
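The concurrency model described in the hunk above can be sketched as a configuration fragment. `druid.processing.numThreads` is the real property named in the doc; the host size and value below are illustrative assumptions, not recommendations from this patch.

```properties
# Hypothetical Historical runtime.properties sketch (assumed 16-core host).
# Each processing thread scans one segment at a time; Druid's documented
# default for this property is (number of cores - 1).
druid.processing.numThreads=15
```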


@@ -57,7 +57,7 @@ are designed to be lightweight and complete very quickly. This means that for mo
 more complex visualizations, multiple Druid queries may be required.
 Even though queries are typically made to Brokers or Routers, they can also be accepted by
-[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md)) that are running
+[Historical](../design/historical.md) processes and by [Peons (task JVMs)](../design/peons.md) that are running
 stream ingestion tasks. This may be valuable if you want to query results for specific segments that are served by
 specific processes.


@@ -159,7 +159,7 @@ If any part of a dimension value contains the value specified in this search que
 ### `fragment`
-If any part of a dimension value contains all of the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
+If any part of a dimension value contains all the values specified in this search query spec, regardless of case by default, a "match" occurs. The grammar is:
 ```json
 {