fix html tags in docs (#13117)

* fix html tags in docs

* revert not null
Vadim Ogievetsky 2022-09-18 19:40:33 -07:00
parent 4117b93dff
commit b0f03c8c22
9 changed files with 16 additions and 16 deletions


@ -951,7 +951,7 @@ http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?interval=<int
default value of interval can be specified by setting `druid.audit.manager.auditHistoryMillis` (1 week if not configured) in Coordinator runtime.properties
-To view last <n> entries of the audit history of Coordinator dynamic config issue a GET request to the URL -
+To view last `n` entries of the audit history of Coordinator dynamic config issue a GET request to the URL -
```
http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?count=<n>
```
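For example, a minimal sketch of this call, assuming a Coordinator on its default port 8081 (the same pattern applies to the Overlord worker-history endpoint below):
```
curl "http://localhost:8081/druid/coordinator/v1/config/history?count=10"
```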
@ -1211,7 +1211,7 @@ http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?interval=<interval>
default value of interval can be specified by setting `druid.audit.manager.auditHistoryMillis` (1 week if not configured) in Overlord runtime.properties.
-To view last <n> entries of the audit history of worker config issue a GET request to the URL -
+To view last `n` entries of the audit history of worker config issue a GET request to the URL -
```
http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?count=<n>
```
@ -2189,7 +2189,7 @@ Supported query contexts:
|Property|Description|Default|
|--------|-----------|-------|
|`druid.router.defaultBrokerServiceName`|The default Broker to connect to in case service discovery fails.|druid/broker|
-|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|{"_default_tier": "<defaultBrokerServiceName>"}|
+|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|`{"_default_tier": "<defaultBrokerServiceName>"}`|
|`druid.router.defaultRule`|The default rule for all datasources.|"_default"|
|`druid.router.pollPeriod`|How often to poll for new rules.|PT1M|
|`druid.router.sql.enable`|Enable routing of SQL queries using strategies. When `true`, the Router uses the strategies defined in `druid.router.strategies` to determine the broker service for a given SQL query. When `false`, the Router uses the `defaultBrokerServiceName`.|`false`|
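As an illustrative sketch, a Router runtime.properties fragment routing a hypothetical `hot` tier to a dedicated Broker (the tier and service names are placeholders):
```
druid.router.defaultBrokerServiceName=druid/broker
druid.router.tierToBrokerMap={"hot":"druid/broker-hot","_default_tier":"druid/broker"}
druid.router.pollPeriod=PT1M
```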


@ -56,8 +56,8 @@ source](../ingestion/native-batch-input-source.md#druid-input-source). If needed
[`transformSpec`](../ingestion/ingestion-spec.md#transformspec) can be used to filter or modify data during the
reindexing job.
-With SQL, use [`REPLACE <table> OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM
-<table>`. (Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
+With SQL, use [`REPLACE <table> OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM <table>`.
+(Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
modify, or enrich the data during the reindexing job.
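A hedged sketch of such a reindexing job (the table name and filter are hypothetical):
```
REPLACE INTO "my_table" OVERWRITE ALL
SELECT *
FROM "my_table"
WHERE country = 'US'
PARTITIONED BY DAY
```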
## Rolled-up datasources


@ -53,7 +53,7 @@ The configuration examples in the rest of this document will use "kerberos" as t
|`druid.auth.authenticator.kerberos.serverPrincipal`|`HTTP/_HOST@EXAMPLE.COM`| SPNEGO service principal used by druid processes|empty|Yes|
|`druid.auth.authenticator.kerberos.serverKeytab`|`/etc/security/keytabs/spnego.service.keytab`|SPNego service keytab used by druid processes|empty|Yes|
|`druid.auth.authenticator.kerberos.authToLocal`|`RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT`|It allows you to set a general rule for mapping principal names to local user names. It will be used if there is not an explicit mapping for the principal name that is being translated.|DEFAULT|No|
-|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
+|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|Random value|No|
|`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|
As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (this is specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
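A hedged runtime.properties sketch wiring up this authenticator (the principal, realm, keytab path, and secret are placeholders):
```
druid.auth.authenticatorChain=["kerberos"]
druid.auth.authenticator.kerberos.type=kerberos
druid.auth.authenticator.kerberos.serverPrincipal=HTTP/_HOST@EXAMPLE.COM
druid.auth.authenticator.kerberos.serverKeytab=/etc/security/keytabs/spnego.service.keytab
druid.auth.authenticator.kerberos.cookieSignatureSecret=secretString
```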


@ -439,7 +439,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
| type | String | Set value to `schema_registry`. | no |
| url | String | Specifies the URL endpoint of the Schema Registry. | yes |
| capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| config | Json | To send additional configurations, configured for Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |
| headers | Json | To send headers to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |
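A minimal sketch of this decoder config inside an Avro parser (the registry URL is a placeholder):
```
"avroBytesDecoder": {
  "type": "schema_registry",
  "url": "http://localhost:8081"
}
```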
@ -640,7 +640,7 @@ Each entry in the `fields` list can have the following components:
| sum() | Provides the sum value of an array of numbers | Double | &#10003; | &#10003; | &#10003; | &#10003; |
| concat(X) | Provides a concatenated version of the path output with a new item | like input | &#10003; | &#10007; | &#10007; | &#10007; |
| append(X) | add an item to the json path output array | like input | &#10003; | &#10007; | &#10007; | &#10007; |
-| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set<E> | &#10007; | &#10007; | &#10007; | &#10007; |
+| keys() | Provides the property keys (An alternative for terminal tilde ~) | Set<E\> | &#10007; | &#10007; | &#10007; | &#10007; |
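As a hedged illustration, a `flattenSpec` field entry (the field name and path are hypothetical) applying `sum()` to a numeric array:
```
"flattenSpec": {
  "fields": [
    { "type": "path", "name": "scoreTotal", "expr": "$.scores.sum()" }
  ]
}
```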
## Parser
@ -1311,7 +1311,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
| type | String | Set value to `schema_registry`. | yes |
| url | String | Specifies the URL endpoint of the Schema Registry. | yes |
| capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
| config | Json | To send additional configurations, configured for Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
| headers | Json | To send headers to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md) | no |


@ -346,8 +346,8 @@ Sample specs:
|Property|Description|Default|Required|
|--------|-----------|-------|---------|
|type|Set the value to `azure`.|None|yes|
-|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form "azure://\<container>/\<path-to-file\>"|None|`uris` or `prefixes` or `objects` must be set|
-|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form `azure://\<container>/\<prefix\>`. Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
+|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form `azure://<container>/<path-to-file>`|None|`uris` or `prefixes` or `objects` must be set|
+|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form `azure://<container>/<prefix>`. Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
|objects|JSON array of Azure objects to ingest.|None|`uris` or `prefixes` or `objects` must be set|
Note that the Azure input source skips all empty objects only when `prefixes` is specified.
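A minimal `inputSource` sketch using `uris` (the container and object path are placeholders):
```
"inputSource": {
  "type": "azure",
  "uris": ["azure://container/path/to/file.json"]
}
```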


@ -63,7 +63,7 @@ The following built-in functions are available.
|name|description|
|----|-----------|
-|cast|cast(expr,'LONG' or 'DOUBLE' or 'STRING' or 'ARRAY<LONG>', or 'ARRAY<DOUBLE>' or 'ARRAY<STRING>') returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
+|cast|cast(expr,LONG or DOUBLE or STRING or ARRAY<LONG\>, or ARRAY<DOUBLE\> or ARRAY<STRING\>) returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
|if|if(predicate,then,else) returns 'then' if 'predicate' evaluates to a positive number, otherwise it returns 'else' |
|nvl|nvl(expr,expr-for-null) returns 'expr-for-null' if 'expr' is null (or empty string for string type) |
|like|like(expr, pattern[, escape]) is equivalent to SQL `expr LIKE pattern`|
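A hedged sketch combining `cast` and `nvl` in an ingestion transform (the column names are hypothetical):
```
{
  "type": "expression",
  "name": "price_long",
  "expression": "cast(nvl(price, '0'), 'LONG')"
}
```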


@ -388,7 +388,7 @@ Returns all rules for a specified datasource and includes default datasource.
* `/druid/coordinator/v1/rules/history?count=<n>`
-Returns last <n> entries of audit history of rules for all datasources.
+Returns last `n` entries of audit history of rules for all datasources.
* `/druid/coordinator/v1/rules/{dataSourceName}/history?interval=<interval>`
@ -396,7 +396,7 @@ Returns all rules for a specified datasource and includes default datasource.
* `/druid/coordinator/v1/rules/{dataSourceName}/history?count=<n>`
-Returns last <n> entries of audit history of rules for a specified datasource.
+Returns last `n` entries of audit history of rules for a specified datasource.
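For example, a sketch fetching the last 5 audit entries for a hypothetical datasource `wikipedia`, assuming a Coordinator on its default port 8081:
```
curl "http://localhost:8081/druid/coordinator/v1/rules/wikipedia/history?count=5"
```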
##### POST


@ -68,7 +68,7 @@ The cleanup of one entity may depend on the cleanup of another entity as follows
For details on configuration properties, see [Metadata management](../configuration/index.md#metadata-management).
If you want to skip the details, check out the [example](#example) for configuring automated metadata cleanup.
<a name="kill-task">
<a name="kill-task"></a>
### Segment records and segments in deep storage (kill task)
> The kill task is the only configuration in this topic that affects actual data in deep storage and not simply metadata or logs.
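A hedged sketch of the Coordinator properties that enable automated kill of unused segments (the retention values are illustrative only):
```
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
druid.coordinator.kill.durationToRetain=P90D
druid.coordinator.kill.maxSegments=100
```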


@ -246,7 +246,7 @@ FROM (
PARTITIONED BY ALL
```
-## Ingest a JSON string as COMPLEX\<json>
+## Ingest a JSON string as COMPLEX<json\>
If your source data uses a string representation of your JSON column, you can still ingest the data as `COMPLEX<JSON>` as follows:
- During native batch ingestion, call the `parse_json` function in a `transform` object in the `transformSpec`.
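For the native batch path, a minimal `transformSpec` sketch (the input field `shipTo` is hypothetical):
```
"transformSpec": {
  "transforms": [
    { "type": "expression", "name": "shipTo", "expression": "parse_json(shipTo)" }
  ]
}
```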