switch links from druid.io to druid.apache.org (#7914)

* switch links from druid.io to druid.apache.org

* fix it
Clint Wylie 2019-06-18 09:06:28 -07:00 committed by Fangjin Yang
parent e80297efef
commit 71997c16a2
44 changed files with 79 additions and 86 deletions

View File

@@ -31,19 +31,19 @@ Apache Druid (incubating) is a high performance analytics data store for event-d
### More Information
More information about Druid can be found on <http://www.druid.io>.
More information about Druid can be found on <https://druid.apache.org>.
### Documentation
You can find the [documentation for the latest Druid release](http://druid.io/docs/latest/) on
the [project website](http://druid.io/docs/latest/).
You can find the [documentation for the latest Druid release](https://druid.apache.org/docs/latest/) on
the [project website](https://druid.apache.org/docs/latest/).
If you would like to contribute documentation, please do so under
`/docs/content` in this repository and submit a pull request.
### Getting Started
You can get started with Druid with our [quickstart](http://druid.io/docs/latest/tutorials/quickstart.html).
You can get started with Druid with our [quickstart](https://druid.apache.org/docs/latest/tutorials/quickstart.html).
### Reporting Issues
@@ -51,9 +51,6 @@ If you find any bugs, please file a [GitHub issue](https://github.com/apache/inc
### Community
The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed
along this path, our site will move from http://druid.io/ to https://druid.apache.org/.
Community support is available on the
[druid-user mailing list](https://groups.google.com/forum/#!forum/druid-user)(druid-user@googlegroups.com), which
is hosted at Google Groups.
@@ -72,5 +69,5 @@ For instructions on building Druid from source, see [docs/content/development/bu
### Contributing
Please follow the guidelines listed [here](http://druid.io/community/).
Please follow the guidelines listed [here](https://druid.apache.org/community/).

View File

@@ -18,17 +18,13 @@ under the License.
Apache Druid (incubating) is a high performance analytics data store for event-driven data. More information about Druid
can be found on http://www.druid.io.
The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed
along this path, our site will move from http://druid.io/ to https://druid.apache.org/.
can be found on https://druid.apache.org.
Documentation
-------------
You can find the documentation for {THIS_OR_THE_LATEST} Druid release on the project website http://druid.io/docs/{DRUIDVERSION}/.
You can find the documentation for {THIS_OR_THE_LATEST} Druid release on the project website https://druid.apache.org/docs/{DRUIDVERSION}/.
You can get started with Druid with our quickstart at http://druid.io/docs/{DRUIDVERSION}/tutorials/quickstart.html.
You can get started with Druid with our quickstart at https://druid.apache.org/docs/{DRUIDVERSION}/tutorials/quickstart.html.
Build from Source
@@ -77,7 +73,7 @@ Contributing
------------
If you find any bugs, please file a GitHub issue at https://github.com/apache/incubator-druid/issues.
If you wish to contribute, please follow the guidelines listed at http://druid.io/community/.
If you wish to contribute, please follow the guidelines listed at https://druid.apache.org/community/.
Disclaimer: Apache Druid is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the

View File

@@ -50,7 +50,7 @@ committer who visits an issue or a PR authored by a non-committer.
API elements (`@PublicApi` or `@ExtensionPoint`), runtime configuration options, emitted metric names, HTTP endpoint
behavior, or server behavior in some way that affects one of the following:
- Ability to do a rolling update [as documented](http://druid.io/docs/latest/operations/rolling-updates.html)
- Ability to do a rolling update [as documented](https://druid.apache.org/docs/latest/operations/rolling-updates.html)
without needing any modifications to server configurations or query workload.
- Ability to roll back a Druid cluster to a prior version.
- Ability to continue using old Druid extensions without recompiling them.

View File

@@ -35,8 +35,8 @@ ingestion method.
| Parallel indexing | Always parallel | Parallel if firehose is splittable | Always sequential |
| Supported indexing modes | Replacing mode | Both appending and replacing modes | Both appending and replacing modes |
| External dependency | Hadoop (it internally submits Hadoop jobs) | No dependency | No dependency |
| Supported [rollup modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
| Supported partitioning methods | [Both Hash-based and range partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
| Supported [rollup modes](./index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
| Supported partitioning methods | [Both Hash-based and range partitioning](./hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
| Supported input locations | All locations accessible via HDFS client or Druid dataSource | All implemented [firehoses](./firehose.html) | All implemented [firehoses](./firehose.html) |
| Supported file formats | All implemented Hadoop InputFormats | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added though a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added though a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) |
| Saving parse exceptions in ingestion report | Currently not supported | Currently not supported | Supported |

View File

@@ -18,7 +18,7 @@
# under the License.
############################
# This script downloads the appropriate log4j2 jars and runs jconsole with them as plugins.
# This script can be used as an example for how to connect to a druid.io instance to
# This script can be used as an example for how to connect to a Druid instance to
# change the logging parameters at runtime
############################

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -23,7 +23,7 @@
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory

View File

@@ -26,4 +26,4 @@ Overview
Documentation
=============
See the druid.io website or under [Druid Github Repo](https://github.com/apache/incubator-druid/tree/master/docs/content/development/extensions-contrib/moving-average-query.md).
See the druid.apache.org website or under [Druid Github Repo](https://github.com/apache/incubator-druid/tree/master/docs/content/development/extensions-contrib/moving-average-query.md).

View File

@@ -19,4 +19,4 @@
This module contains a simple implementation of [SslContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
that will be injected to be used with HttpClient that Druid nodes use internally to communicate with each other.
More details [here](http://druid.io/docs/latest/development/extensions-core/simple-client-sslcontext.html).
More details [here](https://druid.apache.org/docs/latest/development/extensions-core/simple-client-sslcontext.html).

View File

@@ -103,7 +103,7 @@ public class HyperLogLogCollectorTest
*
* When reaching very large cardinalities (>> 50,000,000), offsets are mismatched between the main HLL and the ones
* with 100 values, requiring a floating max as described in
* http://druid.io/blog/2014/02/18/hyperloglog-optimizations-for-real-world-systems.html
* https://druid.apache.org/blog/2014/02/18/hyperloglog-optimizations-for-real-world-systems.html
*/
@Ignore
@Test

View File

@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]
@@ -62,7 +62,7 @@ default_md = default
preserve = no
policy = policy_match
serial = certs.seq
email_in_dn=integration-test@druid.io
email_in_dn=integration-test@druid.apache.org
[req]
default_bits = 4096
@@ -77,7 +77,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = itroot
[ v3_ca ]

View File

@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]

View File

@@ -32,7 +32,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = thisisprobablynottherighthostname
[ req_ext ]

View File

@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=bad-intermediate@druid.io
emailAddress=bad-intermediate@druid.apache.org
CN = badintermediate
[ req_ext ]
@@ -62,7 +62,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=basic-constraint-fail@druid.io
emailAddress=basic-constraint-fail@druid.apache.org
CN = localhost
[ req_ext ]

View File

@@ -38,7 +38,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = ${MY_IP}
[ req_ext ]

View File

@@ -32,7 +32,7 @@ ST=DR
L=Druid City
O=Druid
OU=RevokedIntegrationTests
emailAddress=revoked-it-cert@druid.io
emailAddress=revoked-it-cert@druid.apache.org
CN = localhost
[ req_ext ]

View File

@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]

View File

@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=intermediate@druid.io
emailAddress=intermediate@druid.apache.org
CN = intermediate
[ req_ext ]
@@ -62,7 +62,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=intermediate-client@druid.io
emailAddress=intermediate-client@druid.apache.org
CN = localhost
[ req_ext ]

View File

@@ -40,7 +40,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
emailAddress=integration-test@druid.io
emailAddress=integration-test@druid.apache.org
CN = itroot
[ v3_ca ]

View File

@@ -65,7 +65,7 @@ import java.util.List;
*/
@Command(
name = "broker",
description = "Runs a broker node, see http://druid.io/docs/latest/Broker.html for a description"
description = "Runs a broker node, see https://druid.apache.org/docs/latest/Broker.html for a description"
)
public class CliBroker extends ServerRunnable
{

View File

@@ -98,7 +98,7 @@ import java.util.concurrent.ExecutorService;
*/
@Command(
name = "coordinator",
description = "Runs the Coordinator, see http://druid.io/docs/latest/Coordinator.html for a description."
description = "Runs the Coordinator, see https://druid.apache.org/docs/latest/Coordinator.html for a description."
)
public class CliCoordinator extends ServerRunnable
{
@@ -217,8 +217,8 @@ public class CliCoordinator extends ServerRunnable
throw new UnsupportedOperationException(
"'druid.coordinator.merge.on' is not supported anymore. "
+ "Please consider using Coordinator's automatic compaction instead. "
+ "See http://druid.io/docs/latest/operations/segment-optimization.html and "
+ "http://druid.io/docs/latest/operations/api-reference.html#compaction-configuration for more "
+ "See https://druid.apache.org/docs/latest/operations/segment-optimization.html and "
+ "https://druid.apache.org/docs/latest/operations/api-reference.html#compaction-configuration for more "
+ "details about compaction."
);
}

View File

@@ -41,7 +41,7 @@ import java.util.List;
*/
@Command(
name = "hadoop",
description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/latest/Batch-ingestion.html for a description."
description = "Runs the batch Hadoop Druid Indexer, see https://druid.apache.org/docs/latest/Batch-ingestion.html for a description."
)
public class CliHadoopIndexer implements Runnable
{

View File

@@ -59,7 +59,7 @@ import java.util.List;
*/
@Command(
name = "historical",
description = "Runs a Historical node, see http://druid.io/docs/latest/Historical.html for a description"
description = "Runs a Historical node, see https://druid.apache.org/docs/latest/Historical.html for a description"
)
public class CliHistorical extends ServerRunnable
{

View File

@@ -55,7 +55,7 @@ import java.util.Properties;
*/
@Command(
name = "hadoop-indexer",
description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/latest/Batch-ingestion.html for a description."
description = "Runs the batch Hadoop Druid Indexer, see https://druid.apache.org/docs/latest/Batch-ingestion.html for a description."
)
public class CliInternalHadoopIndexer extends GuiceRunnable
{

View File

@@ -69,7 +69,7 @@ import java.util.List;
*/
@Command(
name = "middleManager",
description = "Runs a Middle Manager, this is a \"task\" node used as part of the remote indexing service, see http://druid.io/docs/latest/design/middlemanager.html for a description"
description = "Runs a Middle Manager, this is a \"task\" node used as part of the remote indexing service, see https://druid.apache.org/docs/latest/design/middlemanager.html for a description"
)
public class CliMiddleManager extends ServerRunnable
{

View File

@@ -126,7 +126,7 @@ import java.util.List;
*/
@Command(
name = "overlord",
description = "Runs an Overlord node, see http://druid.io/docs/latest/Indexing-Service.html for a description"
description = "Runs an Overlord node, see https://druid.apache.org/docs/latest/Indexing-Service.html for a description"
)
public class CliOverlord extends ServerRunnable
{

View File

@@ -117,7 +117,7 @@ import java.util.Set;
@Command(
name = "peon",
description = "Runs a Peon, this is an individual forked \"task\" used as part of the indexing service. "
+ "This should rarely, if ever, be used directly. See http://druid.io/docs/latest/design/peons.html for a description"
+ "This should rarely, if ever, be used directly. See https://druid.apache.org/docs/latest/design/peons.html for a description"
)
public class CliPeon extends GuiceRunnable
{

View File

@@ -39,7 +39,7 @@ import java.util.Properties;
*/
@Command(
name = "realtime",
description = "Runs a realtime node, see http://druid.io/docs/latest/Realtime.html for a description"
description = "Runs a realtime node, see https://druid.apache.org/docs/latest/Realtime.html for a description"
)
public class CliRealtime extends ServerRunnable
{

View File

@@ -50,7 +50,7 @@ import java.util.concurrent.Executor;
*/
@Command(
name = "realtime",
description = "Runs a standalone realtime node for examples, see http://druid.io/docs/latest/Realtime.html for a description"
description = "Runs a standalone realtime node for examples, see https://druid.apache.org/docs/latest/Realtime.html for a description"
)
public class CliRealtimeExample extends ServerRunnable
{

View File

@@ -59,7 +59,7 @@ import java.util.List;
*/
@Command(
name = "router",
description = "Experimental! Understands tiers and routes things to different brokers, see http://druid.io/docs/latest/development/router.html for a description"
description = "Experimental! Understands tiers and routes things to different brokers, see https://druid.apache.org/docs/latest/development/router.html for a description"
)
public class CliRouter extends ServerRunnable
{

View File

@@ -234,7 +234,7 @@ exports[`header bar matches snapshot 1`] = `
/>
<Blueprint3.MenuItem
disabled={false}
href="http://druid.io/docs/latest"
href="https://druid.apache.org/docs/latest"
icon="th"
multiline={false}
popoverProps={Object {}}

View File

@@ -82,7 +82,7 @@ exports[`about dialog matches snapshot 1`] = `
<p>
For help and support with Druid, please refer to the
<a
href="http://druid.io/community/"
href="https://druid.apache.org/community/"
target="_blank"
>
community page
@@ -125,7 +125,7 @@ exports[`about dialog matches snapshot 1`] = `
</button>
<a
class="bp3-button bp3-intent-primary"
href="http://druid.io"
href="https://druid.apache.org"
role="button"
tabindex="0"
target="_blank"

View File

@@ -58,7 +58,7 @@ exports[`coordinator dynamic config matches snapshot 1`] = `
<p>
Edit the coordinator dynamic configuration on the fly. For more information please refer to the
<a
href="http://druid.io/docs/latest/configuration/index.html#dynamic-configuration"
href="https://druid.apache.org/docs/latest/configuration/index.html#dynamic-configuration"
target="_blank"
>
documentation

View File

@@ -123,7 +123,7 @@ export class CoordinatorDynamicConfigDialog extends React.PureComponent<Coordina
>
<p>
Edit the coordinator dynamic configuration on the fly.
For more information please refer to the <ExternalLink href="http://druid.io/docs/latest/configuration/index.html#dynamic-configuration">documentation</ExternalLink>.
For more information please refer to the <ExternalLink href="https://druid.apache.org/docs/latest/configuration/index.html#dynamic-configuration">documentation</ExternalLink>.
</p>
<AutoForm
fields={[

View File

@@ -126,7 +126,7 @@ export class OverlordDynamicConfigDialog extends React.PureComponent<OverlordDyn
>
<p>
Edit the overlord dynamic configuration on the fly.
For more information please refer to the <ExternalLink href="http://druid.io/docs/latest/configuration/index.html#overlord-dynamic-configuration">documentation</ExternalLink>.
For more information please refer to the <ExternalLink href="https://druid.apache.org/docs/latest/configuration/index.html#overlord-dynamic-configuration">documentation</ExternalLink>.
</p>
<AutoForm
fields={[

View File

@@ -57,7 +57,7 @@ exports[`retention dialog matches snapshot 1`] = `
<p>
Druid uses rules to determine what data should be retained in the cluster. The rules are evaluated in order from top to bottom. For more information please refer to the
<a
href="http://druid.io/docs/latest/operations/rule-configuration.html"
href="https://druid.apache.org/docs/latest/operations/rule-configuration.html"
target="_blank"
>
documentation

View File

@@ -176,7 +176,7 @@ export class RetentionDialog extends React.PureComponent<RetentionDialogProps, R
<p>
Druid uses rules to determine what data should be retained in the cluster.
The rules are evaluated in order from top to bottom.
For more information please refer to the <a href="http://druid.io/docs/latest/operations/rule-configuration.html" target="_blank">documentation</a>.
For more information please refer to the <a href="https://druid.apache.org/docs/latest/operations/rule-configuration.html" target="_blank">documentation</a>.
</p>
<FormGroup>
{(currentRules || []).map(this.renderRule)}

View File

@@ -172,7 +172,7 @@ const PARSE_SPEC_FORM_FIELDS: Field<ParseSpec>[] = [
suggestions: ['json', 'csv', 'tsv', 'regex'],
info: <>
<p>The parser used to parse the data.</p>
<p>For more information see <ExternalLink href="http://druid.io/docs/latest/ingestion/data-formats.html">the documentation</ExternalLink>.</p>
<p>For more information see <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/data-formats.html">the documentation</ExternalLink>.</p>
</>
},
{
@@ -404,7 +404,7 @@ const FLATTEN_FIELD_FORM_FIELDS: Field<FlattenField>[] = [
placeholder: '$.thing',
isDefined: (flattenField: FlattenField) => flattenField.type === 'path' || flattenField.type === 'jq',
info: <>
Specify a flatten <ExternalLink href="http://druid.io/docs/latest/ingestion/flatten-json">expression</ExternalLink>.
Specify a flatten <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/flatten-json">expression</ExternalLink>.
</>
}
];
@@ -440,7 +440,7 @@ const TRANSFORM_FORM_FIELDS: Field<Transform>[] = [
type: 'string',
placeholder: '"foo" + "bar"',
info: <>
A valid Druid <ExternalLink href="http://druid.io/docs/latest/misc/math-expr.html">expression</ExternalLink>.
A valid Druid <ExternalLink href="https://druid.apache.org/docs/latest/misc/math-expr.html">expression</ExternalLink>.
</>
}
];
@@ -635,7 +635,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
suggestions: ['local', 'http', 'static-s3', 'static-google-blobstore'],
info: <>
<p>
Druid connects to raw data through <ExternalLink href="http://druid.io/docs/latest/ingestion/firehose.html">firehoses</ExternalLink>.
Druid connects to raw data through <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/firehose.html">firehoses</ExternalLink>.
You can change your selected firehose here.
</p>
</>
@@ -665,7 +665,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
type: 'string',
placeholder: '/path/to/files/',
info: <>
<ExternalLink href="http://druid.io/docs/latest/ingestion/firehose.html#localfirehose">firehose.baseDir</ExternalLink>
<ExternalLink href="https://druid.apache.org/docs/latest/ingestion/firehose.html#localfirehose">firehose.baseDir</ExternalLink>
<p>Specifies the directory to search recursively for files to be ingested.</p>
</>
},
@@ -675,7 +675,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
type: 'string',
defaultValue: '*.*',
info: <>
<ExternalLink href="http://druid.io/docs/latest/ingestion/firehose.html#localfirehose">firehose.filter</ExternalLink>
<ExternalLink href="https://druid.apache.org/docs/latest/ingestion/firehose.html#localfirehose">firehose.filter</ExternalLink>
<p>A wildcard filter for files. See <ExternalLink href="https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/filefilter/WildcardFileFilter.html">here</ExternalLink> for format information.</p>
</>
}
@@ -716,7 +716,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
label: 'Google blobs',
type: 'json',
info: <>
<p>JSON array of <ExternalLink href="http://druid.io/docs/latest/development/extensions-contrib/google.html">Google Blobs</ExternalLink>.</p>
<p>JSON array of <ExternalLink href="https://druid.apache.org/docs/latest/development/extensions-contrib/google.html">Google Blobs</ExternalLink>.</p>
</>
}
];
@@ -728,7 +728,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
label: 'Bootstrap servers',
type: 'string',
info: <>
<ExternalLink href="http://druid.io/docs/latest/development/extensions-core/kafka-ingestion#kafkasupervisorioconfig">consumerProperties</ExternalLink>
<ExternalLink href="https://druid.apache.org/docs/latest/development/extensions-core/kafka-ingestion#kafkasupervisorioconfig">consumerProperties</ExternalLink>
<p>A list of Kafka brokers in the form: <Code>{`<BROKER_1>:<PORT_1>,<BROKER_2>:<PORT_2>,...`}</Code></p>
</>
},
@@ -742,7 +742,7 @@ export function getIoConfigFormFields(ingestionComboType: IngestionComboType): F
type: 'json',
defaultValue: {},
info: <>
<ExternalLink href="http://druid.io/docs/latest/development/extensions-core/kafka-ingestion#kafkasupervisorioconfig">consumerProperties</ExternalLink>
<ExternalLink href="https://druid.apache.org/docs/latest/development/extensions-core/kafka-ingestion#kafkasupervisorioconfig">consumerProperties</ExternalLink>
<p>A map of properties to be passed to the Kafka consumer.</p>
</>
}

View File

@@ -19,12 +19,12 @@
export const LEGACY_COORDINATOR_CONSOLE = '/index.html';
export const LEGACY_OVERLORD_CONSOLE = '/console.html';
export const DRUID_WEBSITE = 'http://druid.io';
export const DRUID_WEBSITE = 'https://druid.apache.org';
export const DRUID_GITHUB = 'https://github.com/apache/druid';
export const DRUID_DOCS = 'http://druid.io/docs/latest';
export const DRUID_DOCS_SQL = 'http://druid.io/docs/latest/querying/sql.html';
export const DRUID_DOCS_RUNE = 'http://druid.io/docs/latest/querying/querying.html';
export const DRUID_COMMUNITY = 'http://druid.io/community/';
export const DRUID_DOCS = 'https://druid.apache.org/docs/latest';
export const DRUID_DOCS_SQL = 'https://druid.apache.org/docs/latest/querying/sql.html';
export const DRUID_DOCS_RUNE = 'https://druid.apache.org/docs/latest/querying/querying.html';
export const DRUID_COMMUNITY = 'https://druid.apache.org/community/';
export const DRUID_USER_GROUP = 'https://groups.google.com/forum/#!forum/druid-user';
export const DRUID_DEVELOPER_GROUP = 'https://lists.apache.org/list.html?dev@druid.apache.org';
export const DRUID_DOCS_API = 'http://druid.io/docs/latest/operations/api-reference.html';
export const DRUID_DOCS_API = 'https://druid.apache.org/docs/latest/operations/api-reference.html';

View File

@@ -593,7 +593,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
<div className="control">
<Callout className="intro">
<p>
Druid ingests raw data and converts it into a custom, <ExternalLink href="http://druid.io/docs/latest/design/segments.html">indexed</ExternalLink> format that is optimized for analytic queries.
Druid ingests raw data and converts it into a custom, <ExternalLink href="https://druid.apache.org/docs/latest/design/segments.html">indexed</ExternalLink> format that is optimized for analytic queries.
</p>
<p>
To get started, please specify where your raw data is stored and what data you want to ingest.
@@ -762,7 +762,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
{
canFlatten &&
<p>
If you have nested data, you can <ExternalLink href="http://druid.io/docs/latest/ingestion/flatten-json.html">flatten</ExternalLink> it here.
If you have nested data, you can <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/flatten-json.html">flatten</ExternalLink> it here.
If the provided flattening capabilities are not sufficient, please pre-process your data before ingesting it into Druid.
</p>
}
@@ -884,7 +884,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
/>
<AnchorButton
icon={IconNames.INFO_SIGN}
href="http://druid.io/docs/latest/ingestion/flatten-json.html"
href="https://druid.apache.org/docs/latest/ingestion/flatten-json.html"
target="_blank"
minimal
/>
@@ -1145,7 +1145,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
Optional
</p>
<p>
Druid can perform simple <ExternalLink href="http://druid.io/docs/latest/ingestion/transform-spec.html#transforms">transforms</ExternalLink> of column values.
Druid can perform simple <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/transform-spec.html#transforms">transforms</ExternalLink> of column values.
</p>
<p>
Click "Preview" to see the result of any specified transforms.
@@ -1351,7 +1351,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
Optional
</p>
<p>
Druid can <ExternalLink href="http://druid.io/docs/latest/querying/filters.html">filter</ExternalLink> out unwanted data.
Druid can <ExternalLink href="https://druid.apache.org/docs/latest/querying/filters.html">filter</ExternalLink> out unwanted data.
</p>
<p>
Click "Preview" to see the impact of any specified filters.
@@ -1607,7 +1607,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
If you want to change the type, click on the column header.
</p>
<p>
Select whether or not you want to <ExternalLink href="http://druid.io/docs/latest/tutorials/tutorial-rollup.html">roll-up</ExternalLink> your data.
Select whether or not you want to <ExternalLink href="https://druid.apache.org/docs/latest/tutorials/tutorial-rollup.html">roll-up</ExternalLink> your data.
</p>
</Callout>
{
@@ -1623,7 +1623,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
content={
<div className="label-info-text">
<p>
Select whether or not you want to set an explicit list of <ExternalLink href="http://druid.io/docs/latest/ingestion/ingestion-spec.html#dimensionsspec">dimensions</ExternalLink> and <ExternalLink href="http://druid.io/docs/latest/querying/aggregations.html">metrics</ExternalLink>.
Select whether or not you want to set an explicit list of <ExternalLink href="https://druid.apache.org/docs/latest/ingestion/ingestion-spec.html#dimensionsspec">dimensions</ExternalLink> and <ExternalLink href="https://druid.apache.org/docs/latest/querying/aggregations.html">metrics</ExternalLink>.
Explicitly setting dimensions and metrics can lead to better compression and performance.
If you disable this option, Druid will try to auto-detect fields in your data and treat them as individual columns.
</p>
@@ -1665,7 +1665,7 @@ export class LoadDataView extends React.PureComponent<LoadDataViewProps, LoadDat
The primary timestamp will be truncated to the specified query granularity, and rows containing the same string field values will be aggregated together.
</p>
<p>
If you enable rollup, you must specify which columns are <a href="http://druid.io/docs/latest/ingestion/ingestion-spec.html#dimensionsspec">dimensions</a> (fields you want to group and filter on), and which are <a href="http://druid.io/docs/latest/querying/aggregations.html">metrics</a> (fields you want to aggregate on).
If you enable rollup, you must specify which columns are <a href="https://druid.apache.org/docs/latest/ingestion/ingestion-spec.html#dimensionsspec">dimensions</a> (fields you want to group and filter on), and which are <a href="https://druid.apache.org/docs/latest/querying/aggregations.html">metrics</a> (fields you want to aggregate on).
</p>
</div>
}