diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 45a89862955..b8f99445cbe 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -951,7 +951,7 @@ http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?interval=<interval>

 default value of interval can be specified by setting `druid.audit.manager.auditHistoryMillis` (1 week if not configured) in Coordinator runtime.properties.

-To view last <n> entries of the audit history of Coordinator dynamic config issue a GET request to the URL -
+To view last `n` entries of the audit history of Coordinator dynamic config issue a GET request to the URL -

 ```
 http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?count=<n>
@@ -1211,7 +1211,7 @@ http://<OVERLORD_IP>:<PORT>/druid/indexer/v1/worker/history?interval=<interval>

 default value of interval can be specified by setting `druid.audit.manager.auditHistoryMillis` (1 week if not configured) in Overlord runtime.properties.

-To view last <n> entries of the audit history of worker config issue a GET request to the URL -
+To view last `n` entries of the audit history of worker config issue a GET request to the URL -

 ```
 http://<OVERLORD_IP>:<PORT>/druid/indexer/v1/worker/history?count=<n>
@@ -2189,7 +2189,7 @@ Supported query contexts:
 |Property|Description|Default|
 |--------|-----------|-------|
 |`druid.router.defaultBrokerServiceName`|The default Broker to connect to in case service discovery fails.|druid/broker|
-|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|{"_default_tier": "<defaultBrokerServiceName>"}|
+|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|`{"_default_tier": "<defaultBrokerServiceName>"}`|
 |`druid.router.defaultRule`|The default rule for all datasources.|"_default"|
 |`druid.router.pollPeriod`|How often to poll for new rules.|PT1M|
 |`druid.router.sql.enable`|Enable routing of SQL queries using strategies. When `true`, the Router uses the strategies defined in `druid.router.strategies` to determine the broker service for a given SQL query. When `false`, the Router uses the `defaultBrokerServiceName`.|`false`|
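For reference, the `tierToBrokerMap` row above takes an ordered JSON map from tier name to Broker service name, with earlier entries taking priority. A minimal sketch (the `hot` tier and the service names here are hypothetical):

```json
{
  "hot": "druid/broker-hot",
  "_default_tier": "druid/broker"
}
```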
diff --git a/docs/data-management/update.md b/docs/data-management/update.md
index 4eb31f8242c..3aa11a7411b 100644
--- a/docs/data-management/update.md
+++ b/docs/data-management/update.md
@@ -56,8 +56,8 @@ source](../ingestion/native-batch-input-source.md#druid-input-source). If needed
 [`transformSpec`](../ingestion/ingestion-spec.md#transformspec) can be used to filter or modify data during the
 reindexing job.

-With SQL, use [`REPLACE OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM
-<table>`. (Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
+With SQL, use [`REPLACE <table> OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM <table>`.
+(Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
 modify, or enrich the data during the reindexing job.

 ## Rolled-up datasources
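As a sketch of the reindexing pattern described above, assuming a hypothetical `wikipedia` datasource; any SELECT projection, filter, or expression can replace the passthrough columns:

```sql
REPLACE INTO "wikipedia" OVERWRITE ALL
SELECT
  __time,
  channel,
  page
FROM "wikipedia"
WHERE channel = '#en.wikipedia'
PARTITIONED BY DAY
```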
diff --git a/docs/development/extensions-core/druid-kerberos.md b/docs/development/extensions-core/druid-kerberos.md
index 4828f3b56f1..bb0fbb11588 100644
--- a/docs/development/extensions-core/druid-kerberos.md
+++ b/docs/development/extensions-core/druid-kerberos.md
@@ -53,7 +53,7 @@ The configuration examples in the rest of this document will use "kerberos" as the authenticator name.
 |`druid.auth.authenticator.kerberos.serverPrincipal`|`HTTP/_HOST@EXAMPLE.COM`| SPNEGO service principal used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.serverKeytab`|`/etc/security/keytabs/spnego.service.keytab`|SPNego service keytab used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.authToLocal`|`RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT`|It allows you to set a general rule for mapping principal names to local user names. It will be used if there is not an explicit mapping for the principal name that is being translated.|DEFAULT|No|
-|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
+|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|Random value|No|
 |`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|

 As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (this is specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
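For orientation, the properties in the table above land in runtime.properties along the lines of the following sketch; the realm, keytab path, and authorizer name are hypothetical:

```properties
# Enable the Kerberos authenticator and point it at the SPNEGO service credentials.
druid.auth.authenticatorChain=["kerberos"]
druid.auth.authenticator.kerberos.type=kerberos
druid.auth.authenticator.kerberos.serverPrincipal=HTTP/_HOST@EXAMPLE.COM
druid.auth.authenticator.kerberos.serverKeytab=/etc/security/keytabs/spnego.service.keytab
druid.auth.authenticator.kerberos.cookieSignatureSecret=secretString
druid.auth.authenticator.kerberos.authorizerName=MyBasicAuthorizer
```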
diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index 780f6bbf2ff..e1b59faa58d 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -439,7 +439,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/current/schema-registry/docs/intro.html).
 | type | String | Set value to `schema_registry`. | no |
 | url | String | Specifies the URL endpoint of the Schema Registry. | yes |
 | capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
 | config | Json | To send additional configurations, configured for Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |
 | headers | Json | To send headers to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |
@@ -640,7 +640,7 @@ Each entry in the `fields` list can have the following components:
 | sum() | Provides the sum value of an array of numbers | Double | ✓ | ✓ | ✓ | ✓ |
 | concat(X) | Provides a concatenated version of the path output with a new item | like input | ✓ | ✗ | ✗ | ✗ |
 | append(X) | add an item to the json path output array | like input | ✓ | ✗ | ✗ | ✗ |
-| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set<E> | ✗ | ✗ | ✗ | ✗ |
+| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set<E> | ✗ | ✗ | ✗ | ✗ |

 ## Parser

@@ -1311,7 +1311,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/current/schema-registry/docs/intro.html).
 | type | String | Set value to `schema_registry`. | yes |
 | url | String | Specifies the URL endpoint of the Schema Registry. | yes |
 | capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
 | config | Json | To send additional configurations, configured for Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
 | headers | Json | To send headers to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |
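As a sketch of the `schema_registry` decoder the tables above describe (the registry URLs are hypothetical; `url` and `urls` are alternatives, so only one of them is set):

```json
{
  "type": "schema_registry",
  "urls": ["http://registry-1:8081", "http://registry-2:8081"],
  "capacity": 100
}
```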
diff --git a/docs/ingestion/native-batch-input-source.md b/docs/ingestion/native-batch-input-source.md
index 910f68efa55..0db9b9eee44 100644
--- a/docs/ingestion/native-batch-input-source.md
+++ b/docs/ingestion/native-batch-input-source.md
@@ -346,8 +346,8 @@ Sample specs:
 |Property|Description|Default|Required|
 |--------|-----------|-------|---------|
 |type|Set the value to `azure`.|None|yes|
-|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form "azure://\<container\>/\<path-to-file\>"|None|`uris` or `prefixes` or `objects` must be set|
-|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form "azure://\<container\>/\<prefix\>". Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
+|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form `azure://<container>/<path-to-file>`|None|`uris` or `prefixes` or `objects` must be set|
+|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form `azure://<container>/<prefix>`. Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
 |objects|JSON array of Azure objects to ingest.|None|`uris` or `prefixes` or `objects` must be set|

 Note that the Azure input source skips all empty objects only when `prefixes` is specified.
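A `uris`-based Azure input source matching the corrected forms above might look like this sketch (container and object paths are hypothetical):

```json
{
  "type": "azure",
  "uris": [
    "azure://container/prefix1/file1.json",
    "azure://container/prefix2/file2.json"
  ]
}
```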
diff --git a/docs/misc/math-expr.md b/docs/misc/math-expr.md
index d58261e61ec..27bddb37d0c 100644
--- a/docs/misc/math-expr.md
+++ b/docs/misc/math-expr.md
@@ -63,7 +63,7 @@ The following built-in functions are available.

 |name|description|
 |----|-----------|
-|cast|cast(expr,'LONG' or 'DOUBLE' or 'STRING' or 'ARRAY<LONG>', or 'ARRAY<DOUBLE>' or 'ARRAY<STRING>') returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
+|cast|cast(expr,LONG or DOUBLE or STRING or ARRAY<LONG>, or ARRAY<DOUBLE> or ARRAY<STRING>) returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
 |if|if(predicate,then,else) returns 'then' if 'predicate' evaluates to a positive number, otherwise it returns 'else' |
 |nvl|nvl(expr,expr-for-null) returns 'expr-for-null' if 'expr' is null (or empty string for string type) |
 |like|like(expr, pattern[, escape]) is equivalent to SQL `expr LIKE pattern`|
diff --git a/docs/operations/api-reference.md b/docs/operations/api-reference.md
index d3dec8f0e64..f24fab839f0 100644
--- a/docs/operations/api-reference.md
+++ b/docs/operations/api-reference.md
@@ -388,7 +388,7 @@ Returns all rules for a specified datasource and includes default datasource.

 * `/druid/coordinator/v1/rules/history?count=<n>`

-  Returns last <n> entries of audit history of rules for all datasources.
+  Returns last `n` entries of audit history of rules for all datasources.

 * `/druid/coordinator/v1/rules/{dataSourceName}/history?interval=<interval>`

@@ -396,7 +396,7 @@ Returns all rules for a specified datasource and includes default datasource.

 * `/druid/coordinator/v1/rules/{dataSourceName}/history?count=<n>`

-  Returns last <n> entries of audit history of rules for a specified datasource.
+  Returns last `n` entries of audit history of rules for a specified datasource.

 ##### POST

diff --git a/docs/operations/clean-metadata-store.md b/docs/operations/clean-metadata-store.md
index 0338d9aa702..c5fa5b68108 100644
--- a/docs/operations/clean-metadata-store.md
+++ b/docs/operations/clean-metadata-store.md
@@ -68,7 +68,7 @@ The cleanup of one entity may depend on the cleanup of another entity as follows

 For details on configuration properties, see [Metadata management](../configuration/index.md#metadata-management).
 If you want to skip the details, check out the [example](#example) for configuring automated metadata cleanup.
-<a name="kill-task"></a>
+
 ### Segment records and segments in deep storage (kill task)

 > The kill task is the only configuration in this topic that affects actual data in deep storage and not simply metadata or logs.
diff --git a/docs/querying/nested-columns.md b/docs/querying/nested-columns.md
index 1d9503eff56..9c131ee27de 100644
--- a/docs/querying/nested-columns.md
+++ b/docs/querying/nested-columns.md
@@ -246,7 +246,7 @@ FROM (
 PARTITIONED BY ALL
 ```

-## Ingest a JSON string as COMPLEX\<json\>
+## Ingest a JSON string as COMPLEX<json>

 If your source data uses a string representation of your JSON column, you can still ingest the data as `COMPLEX<json>` as follows:
 - During native batch ingestion, call the `parse_json` function in a `transform` object in the `transformSpec`.
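To illustrate the `parse_json` bullet at the end of the nested-columns change, a `transformSpec` sketch for native batch ingestion (the field names are hypothetical):

```json
{
  "transformSpec": {
    "transforms": [
      {
        "type": "expression",
        "name": "nested_data",
        "expression": "parse_json(raw_json_column)"
      }
    ]
  }
}
```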