Mirror of https://github.com/apache/druid.git

commit 7bfa77d3c1
Merge branch 'Update-Query-Interrupted-Exception' into 6088-Time-Ordering-On-Scans-N-Way-Merge
@@ -0,0 +1,22 @@
---
name: Feature/Change
about: A template for Druid feature and change descriptions
title: ""
labels: Feature/Change Description
assignees: ''

---

### Description

Please describe the feature or change with as much detail as possible.

If you have a detailed implementation in mind and wish to contribute that implementation yourself, and the change that
you are planning would require a 'Design Review' tag because it introduces or changes some APIs, or it is large and
imposes lasting consequences on the codebase, please open a Proposal instead.

### Motivation

Please provide the following for the desired feature or change:
- A detailed description of the intended use case, if applicable
- Rationale for why the desired feature/change would be beneficial
@@ -1,20 +0,0 @@
---
name: Feature/Change Request
about: A template for Druid feature and change requests
title: ""
labels: Wish list
assignees: ''

---

# Description

Please describe the feature or change with as much detail as possible.

If you have a detailed implementation in mind and wish to contribute that implementation yourself, please open a Proposal issue instead.

# Motivation

Please provide the following for the desired feature or change:
- A detailed description of the intended use case, if applicable
- Rationale for why the desired feature/change would be beneficial
@@ -1,23 +1,22 @@
---
name: Bug Report
about: A template for Druid bug reports
name: Problem Report
about: A template for Druid problem reports
title: ""
labels: Bug
assignees: ''

---

Please provide a detailed title (e.g. "Broker crashes when using TopN query with Bound filter" instead of just "Broker crashes").

# Affected Version
### Affected Version

The Druid version where the bug was encountered.
The Druid version where the problem was encountered.

# Description
### Description

Please include as much detailed information about the bug as possible.
Please include as much detailed information about the problem as possible.
- Cluster size
- Configurations in use
- Steps to reproduce the bug
- Steps to reproduce the problem
- The error message or stack traces encountered. Providing more context, such as nearby log messages or even entire logs, can be helpful.
- Any debugging that you have already done
@@ -2,16 +2,16 @@
name: Proposal
about: A template for major Druid change proposals
title: "[PROPOSAL]"
labels: Proposal
labels: Proposal, Design Review
assignees: ''

---

# Motivation
### Motivation

A description of the problem.

# Proposed changes
### Proposed changes

This section should provide a detailed description of the changes being proposed. This will usually be the longest section; please feel free to split this section or other sections into subsections if needed.
@@ -21,11 +21,11 @@ This section should include any changes made to user-facing interfaces, for exam
- SQL language
- Emitted metrics

# Rationale
### Rationale

A discussion of why this particular solution is the best one. One good way to approach this is to discuss other alternative solutions that you considered and decided against. This should also include a discussion of any specific benefits or drawbacks you are aware of.

# Operational impact
### Operational impact

This section should describe how the proposed changes will impact the operation of existing clusters. It should answer questions such as:
@@ -33,10 +33,10 @@ This section should describe how the proposed changes will impact the operation
- Is there a migration path that cluster operators need to be aware of?
- Will there be any effect on the ability to do a rolling upgrade, or to do a rolling _downgrade_ if an operator wants to switch back to a previous version?

# Test plan (optional)
### Test plan (optional)

An optional discussion of how the proposed changes will be tested. This section should focus on higher level system test strategy and not unit tests (as UTs will be implementation dependent).

# Future work (optional)
### Future work (optional)

An optional discussion of things that you believe are out of scope for the particular proposal but would be nice follow-ups. It helps show where a particular change could be leading us. There isn't any commitment that the proposal author will actually work on the items discussed in this section.
@@ -14,3 +14,4 @@ target
*.DS_Store
_site
dependency-reduced-pom.xml
README.BINARY
@@ -0,0 +1,106 @@

### Licensing Labels

#### Binary-only

This product bundles fonts from Font Awesome Free version 4.2.0, copyright Font Awesome,
which is available under the SIL OFL 1.1. For details, see licenses/bin/font-awesome.silofl
* https://fontawesome.com/

This product bundles JavaBeans Activation Framework version 1.2.0, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.activation.CDDL11
* https://github.com/javaee/activation
* com.sun.activation:javax.activation

This product bundles Jersey version 1.19.3, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
* https://jersey.github.io/
* com.sun.jersey:jersey-core
* com.sun.jersey:jersey-server
* com.sun.jersey:jersey-servlet
* com.sun.jersey:contribs

This product bundles Expression Language 3.0 API version 3.0.0., copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/el-spec
* javax.el:javax.el-api

This product bundles Java Servlet API version 3.1.0, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/servlet-spec
* javax.servlet:javax.servlet-api

This product bundles JSR311 API version 1.1.1, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/jsr311-api.CDDL11
* https://github.com/javaee/jsr311
* javax.ws.rs:jsr311-api

This product bundles Expression Language 3.0 version 3.0.0., copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/el-spec
* org.glassfish:javax.el

This product bundles Jersey version 1.9, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
* https://jersey.github.io/
* com.sun.jersey:jersey-client
* com.sun.jersey:jersey-core

This product bundles JavaBeans Activation Framework version 1.1, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javaxCDDL11
* https://github.com/javaee/activation
* javax.activation:activation

This product bundles Java Servlet API version 2.5, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/servlet-spec
* javax.servlet:javax.servlet-api

This product bundles JAXB version 2.2.2, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/jaxb-v2
* javax.xml.bind:jaxb-api

This product bundles stax-api version 1.0-2, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/
* javax.xml.stream:stax-api

This product bundles jsp-api version 2.1, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
* https://github.com/javaee/javaee-jsp-api
* javax.servlet.jsp:jsp-api

This product bundles Jersey version 1.15, copyright Oracle and/or its affiliates.,
which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
* https://jersey.github.io/
* com.sun.jersey:jersey-client

This product bundles OkHttp Aether Connector version 0.0.9, copyright to original author or authors,
which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-connector-okhttp.EPL1.
* https://github.com/takari/aether-connector-okhttp
* io.tesla.aether:aether-connector-okhttp

This product bundles Tesla Aether version 0.0.5, copyright to original author or authors,
which is available under the Eclipse Public License 1.0. For details, see licenses/bin/tesla-aether.EPL1.
* https://github.com/tesla/tesla-aether
* io.tesla.aether:tesla-aether

This product bundles Eclipse Aether libraries version 0.9.0.M2, copyright Sonatype, Inc.,
which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-core.EPL1.
* https://github.com/eclipse/aether-core
* org.eclipse.aether:aether-api
* org.eclipse.aether:aether-connector-file
* org.eclipse.aether:aether-impl
* org.eclipse.aether:aether-spi
* org.eclipse.aether:aether-util

This product bundles Rhino version 1.7R5, copyright Mozilla and individual contributors.,
which is available under the Mozilla Public License Version 2.0. For details, see licenses/bin/rhino.MPL2.
* https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
* org.mozilla:rhino

This product bundles "Java Concurrency In Practice" Book Annotations, copyright Brian Goetz and Tim Peierls,
which is available under the Creative Commons Attribution 2.5 license. For details, see licenses/bin/creative-commons-2.5.LICENSE.
* http://jcip.net/
* net.jcip:jcip-annotations
LICENSE
@@ -200,3 +200,99 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

APACHE DRUID (INCUBATING) SUBCOMPONENTS:

Apache Druid (incubating) includes a number of subcomponents with
separate copyright notices and license terms. Your use of the source
code for these subcomponents is subject to the terms and
conditions of the following licenses.


Apache License version 2.0
================================

SOURCE/JAVA-CORE
This product contains conjunctive normal form conversion code, a variance aggregator algorithm, and Bloom filter
adapted from Apache Hive.
* processing/src/main/java/org/apache/druid/segment/filter/Filters.java
* extensions-core/stats/src/main/java/io/druid/query/aggregation/variance/VarianceAggregatorCollector.java
* extensions-core/druid-bloom-filter/src/main/java/org/apache/druid/query/filter/BloomKFilter.java

This product contains variable length long deserialization code adapted from Apache Lucene.
* processing/src/main/java/org/apache/druid/segment/data/VSizeLongSerde.java

This product contains SQL query planning code adapted from Apache Calcite.
* sql/src/main/java/org/apache/druid/sql/calcite/

This product contains Kerberos authentication code adapted from Apache Hadoop.
* extensions-core/druid-kerberos/src/main/java/org/apache/druid/security/kerberos/

This product contains a modified version of the java-alphanum library,
copyright Andrew Duffy (https://github.com/amjjd/java-alphanum).
* processing/src/main/java/org/apache/druid/query/ordering/StringComparators.java

This product contains a modified version of the Metamarkets java-util library,
copyright Metamarkets Group Inc. (https://github.com/metamx/java-util).
* java-util/

This product contains a modified version of the Metamarkets bytebuffer-collections library,
copyright Metamarkets Group Inc. (https://github.com/metamx/bytebuffer-collections)
* processing/src/main/java/org/apache/druid/collections/

This product contains a modified version of the Metamarkets extendedset library,
copyright Metamarkets Group Inc. (https://github.com/metamx/extendedset)
* extendedset/

This product contains a modified version of the CONCISE (COmpressed 'N' Composable Integer SEt) library,
copyright Alessandro Colantonio (https://sourceforge.net/projects/concise/), extending the functionality of
ConciseSet to use IntBuffers.
* extendedset/src/main/java/org/apache/druid/extendedset/intset/

This product contains modified portions of the Guava library,
copyright The Guava Authors (https://github.com/google/guava).
Closer class:
* core/src/main/java/org/apache/druid/java/util/common/io/Closer.java
Splitter.splitToList() method:
* core/src/main/java/org/apache/druid/java/util/common/parsers/DelimitedParser.java
DirectExecutorService class:
* core/src/main/java/org/apache/druid/java/util/common/concurrent/DirectExecutorService.java

This product contains modified versions of the Dockerfile and related configuration files
from SequenceIQ's Hadoop Docker image, copyright SequenceIQ, Inc. (https://github.com/sequenceiq/hadoop-docker/)
* examples/quickstart/tutorial/hadoop/docker/

This product contains fixed bins histogram percentile computation code adapted from Netflix Spectator,
copyright Netflix, Inc. (https://github.com/Netflix/spectator)
* extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogram.java


MIT License
================================

SOURCE/WEB-CONSOLE
This product bundles jQuery version 1.11.0, copyright jQuery Foundation, Inc.,
which is available under an MIT license. For details, see licenses/src/jquery.MIT.

This product bundles jQuery UI version 1.9.2, copyright jQuery Foundation and other contributors,
which is available under an MIT license. For details, see licenses/src/jquery-ui.MIT.

This product bundles underscore version 1.2.2, copyright Jeremy Ashkenas, DocumentCloud,
which is available under an MIT license. For details, see licenses/src/underscore.MIT.


BSD-3-Clause License
================================

SOURCE/WEB-CONSOLE
This product bundles demo_table.css and jquery.dataTables.js from DataTables version 1.8.2, copyright Allan Jardine.,
which is available under a BSD-3-Clause License. For details, see licenses/src/datatables.BSD3.


Public Domain
================================

SOURCE/JAVA-CORE
This product uses a smear function adapted from MurmurHash3, written by Austin Appleby who has placed
MurmurHash3 in the public domain (https://github.com/aappleby/smhasher/blob/master/src/MurmurHash3.cpp).
* processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/Groupers.java
(File diff suppressed because it is too large)

NOTICE
@@ -4,97 +4,62 @@ Copyright 2018 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).


############ SOURCE/JAVA-CORE ############

================= Apache Hive =================
Apache Hive
Copyright 2008-2018 The Apache Software Foundation


================= Apache Lucene =================
Apache Lucene
Copyright 2001-2019 The Apache Software Foundation


================= Apache Calcite =================
Apache Calcite
Copyright 2012-2019 The Apache Software Foundation

This product is based on source code originally developed
by DynamoBI Corporation, LucidEra Inc., SQLstream Inc. and others
under the auspices of the Eigenbase Foundation
and released as the LucidDB project.


================= Apache Hadoop =================
Apache Hadoop
Copyright 2009-2017 The Apache Software Foundation


================= Metamarkets java-util =================
java-util
Copyright 2011-2017 Metamarkets Group Inc.


================= Metamarkets bytebuffer-collections =================
bytebuffer-collections
Copyright 2011-2015 Metamarkets Group Inc.


================= Metamarkets extendedset =================
extendedset
Copyright 2012 Metamarkets Group Inc.

-------------------------------------------------------------------------------

This product contains a modified version of Andrew Duffy's java-alphanum library
* LICENSE:
  * https://github.com/amjjd/java-alphanum/blob/5c036e2e492cc7f3b7bcdebd46b8f9e2a87927e5/LICENSE.txt (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/amjjd/java-alphanum

This product contains conjunctive normal form conversion code, a variance aggregator algorithm, and bloom filter adapted from Apache Hive
* LICENSE:
  * https://github.com/apache/hive/blob/branch-2.0/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/apache/hive

This product contains variable length long deserialization code adapted from Apache Lucene
* LICENSE:
  * https://github.com/apache/lucene-solr/blob/master/lucene/LICENSE.txt (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/apache/lucene-solr

This product contains a modified version of Metamarkets java-util library
* LICENSE:
  * https://github.com/metamx/java-util/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/metamx/java-util
* COMMIT TAG:
  * https://github.com/metamx/java-util/commit/826021f

This product contains a modified version of TestNG 6.8.7
* LICENSE:
  * http://testng.org/license/ (Apache License, Version 2.0)
* HOMEPAGE:
  * http://testng.org/

This product contains a modified version of Metamarkets bytebuffer-collections library
* LICENSE:
  * https://github.com/metamx/bytebuffer-collections/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/metamx/bytebuffer-collections
* COMMIT TAG:
  * https://github.com/metamx/bytebuffer-collections/commit/3d1e7c8

This product contains SQL query planning code adapted from Apache Calcite
* LICENSE:
  * https://github.com/apache/calcite/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://calcite.apache.org/

This product contains a modified version of Metamarkets extendedset library
* LICENSE:
  * https://github.com/metamx/extendedset/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/metamx/extendedset
* COMMIT TAG:
  * https://github.com/metamx/extendedset/commit/c9d647d

This product contains a modified version of Alessandro Colantonio's CONCISE
This library contains a modified version of Alessandro Colantonio's CONCISE
(COmpressed 'N' Composable Integer SEt) library, extending the functionality of
ConciseSet to use IntBuffers.
* (c) 2010 Alessandro Colantonio
* <mailto:colanton@mat.uniroma3.it>
* <http://ricerca.mat.uniroma3.it/users/colanton>
* LICENSE:
  * Apache License, Version 2.0
* HOMEPAGE:
  * https://sourceforge.net/projects/concise/

This product contains a modified version of The Guava Authors's Closer class from Guava library:
* LICENSE:
  * https://github.com/google/guava/blob/c462d69329709f72a17a64cb229d15e76e72199c/COPYING (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/google/guava
* COMMIT TAG:
  * https://github.com/google/guava/commit/0ba7ccf36f5384a321cb78d62375bf7574e7bc24

This product contains code adapted from Apache Hadoop
* LICENSE:
  * https://github.com/apache/hadoop/blob/trunk/LICENSE.txt (Apache License, Version 2.0)
* HOMEPAGE:
  * http://hadoop.apache.org/

This product contains modified versions of the Dockerfile and related configuration files from SequenceIQ's Hadoop Docker image:
* LICENSE:
  * https://github.com/sequenceiq/hadoop-docker/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/sequenceiq/hadoop-docker/
* COMMIT TAG:
  * update this when this patch is committed

This product contains fixed bins histogram percentile computation code adapted from Netflix Spectator:
* LICENSE:
  * https://github.com/Netflix/spectator/blob/master/LICENSE (Apache License, Version 2.0)
* HOMEPAGE:
  * https://github.com/Netflix/spectator
(File diff suppressed because it is too large)
@@ -71,3 +71,4 @@ For instructions on building Druid from source, see [docs/content/development/bu
### Contributing

Please follow the guidelines listed [here](http://druid.io/community/).
@@ -47,7 +47,7 @@
</dependency>
<dependency>
  <groupId>org.checkerframework</groupId>
  <artifactId>checker</artifactId>
  <artifactId>checker-qual</artifactId>
  <version>${checkerframework.version}</version>
</dependency>
@@ -21,6 +21,6 @@ set -e
pushd $TRAVIS_BUILD_DIR/integration-tests

mvn verify -P integration-tests -Dit.test=ITUnionQueryTest,ITNestedQueryPushDownTest,ITTwitterQueryTest,ITWikipediaQueryTest,ITBasicAuthConfigurationTest,ITTLSTest
mvn verify -P integration-tests -Dit.test=ITUnionQueryTest,ITNestedQueryPushDownTest,ITTwitterQueryTest,ITWikipediaQueryTest,ITBasicAuthConfigurationTest,ITTLSTest,ITSystemTableQueryTest,ITSystemTableBatchIndexTaskTest

popd
@@ -122,6 +122,27 @@ interface Function
  }
}

class Pi implements Function
{
  private static final double PI = Math.PI;

  @Override
  public String name()
  {
    return "pi";
  }

  @Override
  public ExprEval apply(List<Expr> args, Expr.ObjectBinding bindings)
  {
    if (args.size() >= 1) {
      throw new IAE("Function[%s] needs 0 argument", name());
    }

    return ExprEval.of(PI);
  }
}

class Abs extends SingleParamMath
{
  @Override
@@ -248,6 +269,22 @@ interface Function
  }
}

class Cot extends SingleParamMath
{
  @Override
  public String name()
  {
    return "cot";
  }

  @Override
  protected ExprEval eval(double param)
  {
    return ExprEval.of(Math.cos(param) / Math.sin(param));
  }
}


class Div extends DoubleParamMath
{
  @Override
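As a quick sanity check on the semantics of the new `pi` and `cot` expression functions added above (this Python sketch simply mirrors the Java bodies for reference; it is not part of the commit):

```python
import math

def cot(x):
    # Mirrors Cot.eval(): Math.cos(param) / Math.sin(param)
    return math.cos(x) / math.sin(x)

assert abs(cot(math.pi / 4) - 1.0) < 1e-12  # cot(pi/4) == 1
assert abs(cot(math.pi / 2)) < 1e-12        # cot(pi/2) == 0
print(math.pi)  # the constant the new pi() function evaluates to
```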
@@ -77,6 +77,27 @@
<failIfNoFiles>false</failIfNoFiles>
</configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <configuration>
        <target>
          <concat destfile="${project.build.directory}/../../README.BINARY">
            <fileset file="${project.build.directory}/../../README.md" />
            <fileset file="${project.build.directory}/../../LABELS.md" />
          </concat>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
</plugins>
</build>
@@ -232,12 +232,31 @@
<fileSet>
  <directory>../</directory>
  <includes>
    <include>LICENSE</include>
    <include>NOTICE</include>
    <include>DISCLAIMER</include>
    <include>licenses/**</include>
  </includes>
</fileSet>
</fileSets>
<files>
  <file>
    <source>../LICENSE.BINARY</source>
    <outputDirectory>.</outputDirectory>
    <destName>LICENSE</destName>
    <lineEnding>keep</lineEnding>
  </file>
  <file>
    <source>../NOTICE.BINARY</source>
    <outputDirectory>.</outputDirectory>
    <destName>NOTICE</destName>
    <lineEnding>keep</lineEnding>
  </file>
  <file>
    <source>../README.BINARY</source>
    <outputDirectory>.</outputDirectory>
    <destName>README</destName>
    <lineEnding>keep</lineEnding>
  </file>
</files>
<dependencySets>
  <dependencySet>
    <useProjectArtifact>false</useProjectArtifact>
@@ -47,6 +47,10 @@
<exclude>.gitignore</exclude>
<exclude>.dockerignore</exclude>
<exclude>.travis.yml</exclude>
<exclude>LICENSE.BINARY</exclude>
<exclude>NOTICE.BINARY</exclude>
<exclude>README.BINARY</exclude>
<exclude>licenses/bin</exclude>
<exclude>publications/**</exclude>
<exclude>upload.sh</exclude>
</excludes>
@@ -0,0 +1,77 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import subprocess
import sys


existing_jar_dict_notice = {}

def main():
    if len(sys.argv) != 3:
        sys.stderr.write('usage: program <druid source path> <full tmp path>\n')
        sys.exit(1)

    druid_path = sys.argv[1]
    tmp_path = sys.argv[2]

    generate_reports(druid_path, tmp_path)

def generate_reports(druid_path, tmp_path):
    license_main_path = tmp_path + "/license-reports"
    license_ext_path = tmp_path + "/license-reports/ext"
    os.mkdir(license_main_path)
    os.mkdir(license_ext_path)

    print("********** Generating main LICENSE report.... **********")
    os.chdir(druid_path)
    command = "mvn -Pdist -Ddependency.locations.enabled=false project-info-reports:dependencies"
    outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
    command = "cp -r distribution/target/site {}/site".format(license_main_path)
    outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

    sys.exit()

    print("********** Generating extension LICENSE reports.... **********")
    extension_dirs = os.listdir("extensions-core")
    for extension_dir in extension_dirs:
        full_extension_dir = druid_path + "/extensions-core/" + extension_dir
        if not os.path.isdir(full_extension_dir):
            continue

        print("--- Generating report for {}... ---".format(extension_dir))

        extension_report_dir = "{}/{}".format(license_ext_path, extension_dir)
        os.mkdir(extension_report_dir)
        os.chdir(full_extension_dir)

        try:
            command = "mvn -Ddependency.locations.enabled=false project-info-reports:dependencies"
            outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
            command = "cp -r target/site {}/site".format(extension_report_dir)
            outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
        except:
            print("Encountered error when generating report for: " + extension_dir)

        os.chdir("..")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted, closing.')
@@ -0,0 +1,80 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import re
import requests
import subprocess
import sys
import time

if len(sys.argv) != 5:
    sys.stderr.write('usage: program <github-username> <upstream-remote> <previous-release-branch> <current-release-branch>\n')
    sys.stderr.write("  e.g., program myusername upstream 0.13.0-incubating 0.14.0-incubating")
    sys.stderr.write("  It is also necessary to set a GIT_TOKEN environment variable containing a personal access token.")
    sys.exit(1)

github_username = sys.argv[1]
upstream_remote = sys.argv[2]
previous_branch = sys.argv[3]
release_branch = sys.argv[4]
master_branch = "master"

upstream_master = "{}/{}".format(upstream_remote, master_branch)
upstream_previous = "{}/{}".format(upstream_remote, previous_branch)
upstream_release = "{}/{}".format(upstream_remote, release_branch)

command = "git log {}..{} --oneline | tail -1".format(upstream_master, upstream_previous)

# Find the commit where the previous release branch was cut from master
previous_branch_first_commit = subprocess.check_output(command, shell=True).decode('UTF-8')
match_result = re.match("(\w+) .*", previous_branch_first_commit)
previous_branch_first_commit = match_result.group(1)

print("Previous branch: {}, first commit: {}".format(upstream_previous, previous_branch_first_commit))

# Find all commits between that commit and the current release branch
command = "git rev-list {}..{}".format(previous_branch_first_commit, upstream_release)
all_release_commits = subprocess.check_output(command, shell=True).decode('UTF-8')

for commit_id in all_release_commits.splitlines():
    try:
        # wait 3 seconds between calls to avoid hitting the rate limit
        time.sleep(3)

        search_url = "https://api.github.com/search/issues?q=type:pr+is:merged+is:closed+repo:apache/incubator-druid+SHA:{}"
        resp = requests.get(search_url.format(commit_id), auth=(github_username, os.environ["GIT_TOKEN"]))
        resp_json = resp.json()

        milestone_found = False
        closed_pr_nums = []
        if (resp_json.get("items") is None):
            print("Could not get PRs for commit ID {}, resp: {}".format(commit_id, resp_json))
            continue

        for pr in resp_json["items"]:
            closed_pr_nums.append(pr["number"])
            milestone = pr["milestone"]
            if milestone is not None:
                milestone_found = True
                print("COMMIT: {}, PR#: {}, MILESTONE: {}".format(commit_id, pr["number"], pr["milestone"]["url"]))
        if not milestone_found:
            print("NO MILESTONE FOUND FOR COMMIT: {}, CLOSED PRs: {}".format(commit_id, closed_pr_nums))

    except Exception as e:
        print("Got exception for commitID: {} ex: {}".format(commit_id, e))
        continue
@@ -0,0 +1,105 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import subprocess
import sys

existing_jar_dict_notice = {}

def main():
    if len(sys.argv) != 3:
        sys.stderr.write('usage: program <full extracted druid distribution path> <full tmp path>\n')
        sys.exit(1)

    druid_path = sys.argv[1]
    tmp_path = sys.argv[2]

    # copy everything in lib/ to the staging dir
    lib_path = druid_path + "/lib"
    tmp_lib_path = tmp_path + "/1-lib"
    os.mkdir(tmp_lib_path)
    command = "cp -r {}/* {}".format(lib_path, tmp_lib_path)
    outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

    # copy hadoop deps to the staging dir
    hdeps_path = druid_path + "/hadoop-dependencies"
    tmp_hdeps_path = tmp_path + "/2-hdeps"
    os.mkdir(tmp_hdeps_path)
    command = "cp -r {}/* {}".format(hdeps_path, tmp_hdeps_path)
    outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

    # copy all extension folders to the staging dir
    ext_path = druid_path + "/extensions"
    tmp_ext_path = tmp_path + "/3-ext"
    os.mkdir(tmp_ext_path)
    command = "cp -r {}/* {}".format(ext_path, tmp_ext_path)
    outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

    get_notices(tmp_path)

def get_notices(tmp_jar_path):
    print("********** Scanning directory for NOTICE" + tmp_jar_path + " **********")
    jar_files = os.listdir(tmp_jar_path)
    os.chdir(tmp_jar_path)

    for jar_file in jar_files:
        if os.path.isdir(jar_file):
            get_notices(jar_file)
            continue
        elif not os.path.isfile(jar_file) or ".jar" not in jar_file:
            continue

        if existing_jar_dict_notice.get(jar_file) is not None:
            print("---------- Already saw file: " + jar_file)
            continue
        else:
            existing_jar_dict_notice[jar_file] = True

        try:
            command = "jar tf {} | grep NOTICE".format(jar_file)
            outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
        except:
            print("---------- no NOTICE file found in: " + jar_file)
            continue

        for line in outstr.splitlines():
            try:
                command = "jar xf {} {}".format(jar_file, line)
                outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

                command = "mv {} {}.NOTICE-FILE".format(line, jar_file)
                outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

                command = "cat {}.NOTICE-FILE".format(jar_file)
                outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
                print("================= " + jar_file + " =================")
                print(outstr)
                print("\n")
            except:
                print("Error while grabbing NOTICE file: " + jar_file)
                continue

    os.chdir("..")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted, closing.')
@@ -57,6 +57,7 @@ for redirect in redirects:
        raise Exception('Redirect target does not exist for source: ' + source)

    # Write redirect file
    os.makedirs(os.path.dirname(source_file), exist_ok=True)
    with open(source_file, 'w') as f:
        f.write("---\n")
        f.write("layout: redirect_page\n")
@@ -0,0 +1,54 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import subprocess
import sys

deleted_paths_dict = {}

# assumes docs/latest in the doc repo has the current files for the next release
# deletes docs for old versions and copies docs/latest into the old versions
# run `git status | grep deleted:` on the doc repo to see what pages were deleted and feed that into
# missing-redirect-finder2.py
def main():
    if len(sys.argv) != 2:
        sys.stderr.write('usage: program <druid-docs-repo-path>\n')
        sys.exit(1)

    druid_docs_path = sys.argv[1]
    druid_docs_path = "{}/docs".format(druid_docs_path)
    prev_release_doc_paths = os.listdir(druid_docs_path)
    for doc_path in prev_release_doc_paths:
        if (doc_path != "img" and doc_path != "latest"):
            print("DOC PATH: " + doc_path)

            try:
                command = "rm -rf {}/{}/*".format(druid_docs_path, doc_path)
                outstr = subprocess.check_output(command, shell=True).decode('UTF-8')

                command = "cp -r {}/latest/* {}/{}/".format(druid_docs_path, druid_docs_path, doc_path)
                outstr = subprocess.check_output(command, shell=True).decode('UTF-8')
            except:
                print("error in path: " + doc_path)
                continue

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted, closing.')
@@ -0,0 +1,49 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import sys

# Takes the output of `git status | grep deleted:` on the doc repo
# and cross references deleted pages with the _redirects.json file
if len(sys.argv) != 3:
    sys.stderr.write('usage: program <del_paths_file> <redirect.json file>\n')
    sys.exit(1)

del_paths = sys.argv[1]
redirect_json_path = sys.argv[2]

dep_dict = {}
with open(del_paths, 'r') as del_paths_file:
    for line in del_paths_file.readlines():
        subidx = line.index("/", 0)
        line2 = line[subidx+1:]
        subidx = line2.index("/", 0)
        line3 = line2[subidx+1:]
        dep_dict[line3.strip("\n")] = True

existing_redirects = {}
with open(redirect_json_path, 'r') as redirect_json_file:
    redirect_json = json.load(redirect_json_file)
    for redirect_entry in redirect_json:
        redirect_source = redirect_entry["source"]
        redirect_source = redirect_source.replace(".html", ".md")
        existing_redirects[redirect_source] = True

for dep in dep_dict:
    if dep not in existing_redirects:
        print("MISSING REDIRECT: " + dep)
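To make the two-step slicing in the script above concrete, here is what it does to one sample input line (the path below is hypothetical):

```python
line = "deleted:    docs/0.13.0-incubating/querying/scan-query.md\n"

subidx = line.index("/", 0)   # first "/" ends the leading "deleted:    docs" prefix
line2 = line[subidx + 1:]     # "0.13.0-incubating/querying/scan-query.md\n"
subidx = line2.index("/", 0)  # second "/" ends the version directory
line3 = line2[subidx + 1:]    # "querying/scan-query.md\n"
print(line3.strip("\n"))      # -> querying/scan-query.md
```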
@@ -0,0 +1,73 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import re
import shutil
import sys

# Helper program for generating LICENSE contents for dependencies under web-console.
# Generates entries for MIT-licensed deps and dumps info for non-MIT deps.
# Uses JSON output from https://www.npmjs.com/package/license-checker.

if len(sys.argv) != 3:
    sys.stderr.write('usage: program <license-report-path> <license-output-path>\n')
    sys.stderr.write('Run the following command in web-console/ to generate the input license report:\n')
    sys.stderr.write('  license-checker --production --json\n')
    sys.exit(1)

license_report_path = sys.argv[1]
license_output_path = sys.argv[2]

non_mit_licenses = []

license_entry_template = "This product bundles {} version {}, copyright {},\n which is available under an MIT license. For details, see licenses/{}.MIT.\n"

with open(license_report_path, 'r') as license_report_file:
    license_report = json.load(license_report_file)
    for dependency_name_version in license_report:
        dependency = license_report[dependency_name_version]

        match_result = re.match("(.+)@(.+)", dependency_name_version)
        dependency_name = match_result.group(1)
        nice_dependency_name = dependency_name.replace("/", "-")
        dependency_ver = match_result.group(2)

        try:
            licenseType = dependency["licenses"]
            licenseFile = dependency["licenseFile"]
        except:
            print("No license file for {}".format(dependency_name_version))

        try:
            publisher = dependency["publisher"]
        except:
            publisher = ""

        if licenseType != "MIT":
            non_mit_licenses.append(dependency)
            continue

        fullDependencyPath = dependency["path"]
        partialDependencyPath = re.match(".*/(web-console.*)", fullDependencyPath).group(1)

        print(license_entry_template.format(dependency_name, dependency_ver, publisher, nice_dependency_name))
        shutil.copy2(licenseFile, license_output_path + "/" + nice_dependency_name + ".MIT")

print("\nNon-MIT licenses:\n--------------------\n")
for non_mit_license in non_mit_licenses:
    print(non_mit_license)
@@ -0,0 +1,37 @@
#!/usr/bin/env python3

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import sys

# Helper program for listing the deps in the compiled web-console-<VERSION>.js file in druid-console.jar

if len(sys.argv) != 2:
    sys.stderr.write('usage: program <web-console js path>\n')
    sys.exit(1)

web_console_path = sys.argv[1]

dep_dict = {}
with open(web_console_path, 'r') as web_console_file:
    for line in web_console_file.readlines():
        match_result = re.match('/\*\*\*/ "\./node_modules/([\@\-a-zA-Z0-9_]+)/.*', line)
        if match_result != None:
            dependency_name = match_result.group(1)
            dep_dict[dependency_name] = True
for dep in dep_dict:
    print(dep)
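For reference, this is the kind of bundle line the regex above is meant to match (the sample line and module name here are illustrative, not taken from an actual build):

```python
import re

# A webpack module header from the compiled web-console JS (illustrative):
line = '/***/ "./node_modules/lodash/lodash.js":'
match_result = re.match('/\*\*\*/ "\./node_modules/([\@\-a-zA-Z0-9_]+)/.*', line)
print(match_result.group(1))  # -> lodash
```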
@@ -122,5 +122,48 @@
{"source": "tutorials/tutorial-loading-batch-data.html", "target": "tutorial-batch.html"},
{"source": "tutorials/tutorial-loading-streaming-data.html", "target": "tutorial-streams.html"},
{"source": "tutorials/tutorial-the-druid-cluster.html", "target": "cluster.html"},
{"source": "development/extensions-core/caffeine-cache.html", "target":"../../configuration/caching.html"}
{"source": "development/extensions-core/caffeine-cache.html", "target":"../../configuration/caching.html"},
{"source": "Production-Cluster-Configuration.html", "target": "tutorials/cluster.html"},
{"source": "development/extensions-contrib/parquet.html", "target":"../../development/extensions-core/parquet.html"},
{"source": "development/extensions-contrib/scan-query.html", "target":"../../querying/scan-query.html"},
{"source": "tutorials/ingestion.html", "target": "index.html"},
{"source": "tutorials/ingestion-streams.html", "target": "index.html"},
{"source": "ingestion/native-batch.html", "target": "native_tasks.html"},
{"source": "Compute.html", "target": "design/processes.html"},
{"source": "Contribute.html", "target": "../../community/index.html"},
{"source": "Download.html", "target": "../../downloads.html"},
{"source": "Druid-Personal-Demo-Cluster.html", "target": "tutorials/index.html"},
{"source": "Home.html", "target": "index.html"},
{"source": "Loading-Your-Data.html", "target": "ingestion/index.html"},
{"source": "Master.html", "target": "design/processes.html"},
{"source": "MySQL.html", "target": "development/extensions-core/mysql.html"},
{"source": "OrderBy.html", "target": "querying/limitspec.html"},
{"source": "Querying-your-data.html", "target": "querying/querying.html"},
{"source": "Spatial-Filters.html", "target": "development/geo.html"},
{"source": "Spatial-Indexing.html", "target": "development/geo.html"},
{"source": "Stand-Alone-With-Riak-CS.html", "target": "index.html"},
{"source": "Support.html", "target": "../../community/index.html"},
{"source": "Tutorial:-Webstream.html", "target": "tutorials/index.html"},
{"source": "Twitter-Tutorial.html", "target": "tutorials/index.html"},
{"source": "Tutorial:-Loading-Your-Data-Part-1.html", "target": "tutorials/index.html"},
{"source": "Tutorial:-Loading-Your-Data-Part-2.html", "target": "tutorials/index.html"},
{"source": "Kafka-Eight.html", "target": "development/extensions-core/kafka-eight-firehose.html"},
{"source": "Thanks.html", "target": "../../community/index.html"},
{"source": "Tutorial-A-First-Look-at-Druid.html", "target": "tutorials/index.html"},
{"source": "Tutorial-All-About-Queries.html", "target": "tutorials/index.html"},
{"source": "Tutorial-Loading-Batch-Data.html", "target": "tutorials/index.html"},
{"source": "Tutorial-Loading-Streaming-Data.html", "target": "tutorials/index.html"},
{"source": "Tutorial-The-Druid-Cluster.html", "target": "tutorials/index.html"},
{"source": "configuration/hadoop.html", "target": "ingestion/hadoop.html"},
{"source": "configuration/production-cluster.html", "target": "tutorials/cluster.html"},
{"source": "configuration/zookeeper.html", "target": "dependencies/zookeeper.html"},
{"source": "querying/optimizations.html", "target": "dependencies/cluster.html"},
{"source": "development/community-extensions/azure.html", "target": "../extensions-contrib/azure.html"},
{"source": "development/community-extensions/cassandra.html", "target": "../extensions-contrib/cassandra.html"},
{"source": "development/community-extensions/cloudfiles.html", "target": "../extensions-contrib/cloudfiles.html"},
{"source": "development/community-extensions/graphite.html", "target": "../extensions-contrib/graphite.html"},
{"source": "development/community-extensions/kafka-simple.html", "target": "../extensions-contrib/kafka-simple.html"},
{"source": "development/community-extensions/rabbitmq.html", "target": "../extensions-contrib/rabbitmq.html"},
{"source": "development/extensions-core/namespaced-lookup.html", "target": "lookups-cached-global.html"},
{"source": "operations/insert-segment-to-db.html", "target": "../index.html"}
]
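After hand-extending this list, a quick structural check can catch malformed entries before the finder script above consumes the file (a sketch; the filename `_redirects.json` comes from the script comments above, and the `.html` check is inferred from the entries shown):

```python
import json

with open("_redirects.json", "r") as f:
    redirects = json.load(f)

for entry in redirects:
    # Every entry above is an object with exactly a "source" and a "target".
    assert set(entry.keys()) == {"source", "target"}, entry
    assert entry["source"].endswith(".html"), entry

print("{} redirect entries look well-formed".format(len(redirects)))
```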
@@ -243,9 +243,9 @@ and `druid.tlsPort` properties on each process. Please see `Configuration` secti
#### Jetty Server TLS Configuration

Druid uses Jetty as an embedded web server. To get familiar with TLS/SSL in general and related concepts like Certificates etc.
reading this [Jetty documentation](http://www.eclipse.org/jetty/documentation/9.3.x/configuring-ssl.html) might be helpful.
reading this [Jetty documentation](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html) might be helpful.
To get more in depth knowledge of TLS/SSL support in Java in general, please refer to this [guide](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html).
The documentation [here](http://www.eclipse.org/jetty/documentation/9.3.x/configuring-ssl.html#configuring-sslcontextfactory)
The documentation [here](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html#configuring-sslcontextfactory)
can help in understanding TLS/SSL configurations listed below. This [document](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html) lists all the possible
values for the below mentioned configs among others provided by Java implementation.
@@ -783,8 +783,8 @@ A sample Coordinator dynamic config JSON object is shown below:
  "replicationThrottleLimit": 10,
  "emitBalancingStats": false,
  "killDataSourceWhitelist": ["wikipedia", "testDatasource"],
  "historicalNodesInMaintenance": ["localhost:8182", "localhost:8282"],
  "nodesInMaintenancePriority": 7
  "decommissioningNodes": ["localhost:8182", "localhost:8282"],
  "decommissioningMaxPercentOfMaxSegmentsToMove": 70
}
```
@@ -798,14 +798,14 @@ Issuing a GET request at the same URL will return the spec that is currently in
|`maxSegmentsToMove`|The maximum number of segments that can be moved at any given time.|5|
|`replicantLifetime`|The maximum number of Coordinator runs for a segment to be replicated before we start alerting.|15|
|`replicationThrottleLimit`|The maximum number of segments that can be replicated at one time.|10|
-|`balancerComputeThreads`|Thread pool size for computing moving cost of segments in segment balancing. Consider increasing this if you have a lot of segments and moving segment starts to get stuck.|1|
+|`balancerComputeThreads`|Thread pool size for computing moving cost of segments in segment balancing. Consider increasing this if you have a lot of segments and moving segments starts to get stuck.|1|
|`emitBalancingStats`|Boolean flag for whether or not we should emit balancing stats. This is an expensive operation.|false|
|`killDataSourceWhitelist`|List of dataSources for which kill tasks are sent if property `druid.coordinator.kill.on` is true. This can be a list of comma-separated dataSources or a JSON array.|none|
|`killAllDataSources`|Send kill tasks for ALL dataSources if property `druid.coordinator.kill.on` is true. If this is set to true then `killDataSourceWhitelist` must not be specified or be an empty list.|false|
|`killPendingSegmentsSkipList`|List of dataSources for which pendingSegments are _NOT_ cleaned up if property `druid.coordinator.kill.pendingSegments.on` is true. This can be a list of comma-separated dataSources or a JSON array.|none|
-|`maxSegmentsInNodeLoadingQueue`|The maximum number of segments that could be queued for loading to any given server. This parameter could be used to speed up segments loading process, especially if there are "slow" processes in the cluster (with low loading speed) or if too much segments scheduled to be replicated to some particular node (faster loading could be preferred to better segments distribution). Desired value depends on segments loading speed, acceptable replication time and number of processes. Value 1000 could be a start point for a rather big cluster. Default value is 0 (loading queue is unbounded) |0|
-|`historicalNodesInMaintenance`| List of Historical nodes in maintenance mode. Coordinator doesn't assign new segments on those nodes and moves segments from the nodes according to a specified priority.|none|
-|`nodesInMaintenancePriority`| Priority of segments from servers in maintenance. Coordinator takes ceil(maxSegmentsToMove * (priority / 10)) from servers in maitenance during balancing phase, i.e.:<br>0 - no segments from servers in maintenance will be processed during balancing<br>5 - 50% segments from servers in maintenance<br>10 - 100% segments from servers in maintenance<br>By leveraging the priority an operator can prevent general nodes from overload or decrease maitenance time instead.|7|
+|`maxSegmentsInNodeLoadingQueue`|The maximum number of segments that could be queued for loading to any given server. This parameter could be used to speed up the segment loading process, especially if there are "slow" nodes in the cluster (with low loading speed) or if too many segments are scheduled to be replicated to some particular node (faster loading could be preferred to better segment distribution). Desired value depends on segment loading speed, acceptable replication time and number of nodes. Value 1000 could be a starting point for a rather big cluster. Default value is 0 (loading queue is unbounded) |0|
+|`decommissioningNodes`| List of historical servers to 'decommission'. Coordinator will not assign new segments to 'decommissioning' servers, and segments will be moved away from them to be placed on non-decommissioning servers at the maximum rate specified by `decommissioningMaxPercentOfMaxSegmentsToMove`.|none|
+|`decommissioningMaxPercentOfMaxSegmentsToMove`| The maximum number of segments that may be moved away from 'decommissioning' servers to non-decommissioning (that is, active) servers during one Coordinator run. This value is relative to the total maximum segment movements allowed during one run which is determined by `maxSegmentsToMove`. If `decommissioningMaxPercentOfMaxSegmentsToMove` is 0, segments will neither be moved from _or to_ 'decommissioning' servers, effectively putting them in a sort of "maintenance" mode that will not participate in balancing or assignment by load rules. Decommissioning can also become stalled if there are no available active servers to place the segments. By leveraging the maximum percent of decommissioning segment movements, an operator can prevent overload of active servers by prioritizing balancing, or decrease decommissioning time instead. The value should be between 0 and 100.|70|

To view the audit history of Coordinator dynamic config, issue a GET request to the URL -

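For illustration, a hypothetical dynamic-config payload (host name and percentage are invented values, not recommendations) that marks one Historical for decommissioning might include:

```json
{
  "decommissioningNodes": ["historical-old.example.com:8083"],
  "decommissioningMaxPercentOfMaxSegmentsToMove": 50
}
```

This is only a sketch; the Coordinator expects the full dynamic config object, so in practice you would POST the complete JSON with these fields merged in.
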
@@ -47,7 +47,7 @@ org.apache.druid.cli.Main server broker

Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](../design/segments.html) are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely be distributed across multiple processes, and hence, the query will likely hit multiple processes.

-To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Historical](../design/historical.html) and streaming ingestion [Peon](../design/peon.html) processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve them. When queries are received for a specific datasource and interval, the Broker process performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the processes that contain data for the query. The Broker process then forwards down the query to the selected processes.
+To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Historical](../design/historical.html) and streaming ingestion [Peon](../design/peons.html) processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve them. When queries are received for a specific datasource and interval, the Broker process performs a lookup into the timeline associated with the query datasource for the query interval and retrieves the processes that contain data for the query. The Broker process then forwards down the query to the selected processes.

### Caching

@@ -113,7 +113,7 @@ If it finds such segments, it simply skips them.

### The Coordinator Console

-The Druid Coordinator exposes a web GUI for displaying cluster information and rule configuration. For more details, please see [coordinator console](../operations/web-consoles.html#coordinator-console).
+The Druid Coordinator exposes a web GUI for displaying cluster information and rule configuration. For more details, please see [coordinator console](../operations/management-uis.html#coordinator-consoles).

### FAQ

@@ -41,7 +41,7 @@ This mode is recommended if you intend to use the indexing service as the single

### Overlord Console

-The Overlord provides a UI for managing tasks and workers. For more details, please see [overlord console](../operations/web-consoles.html#overlord-console).
+The Overlord provides a UI for managing tasks and workers. For more details, please see [overlord console](../operations/management-uis.html#overlord-console).

### Blacklisted Workers

@@ -28,7 +28,11 @@ Make sure to [include](../../operations/including-extensions.html) `druid-histogram`

The `druid-histogram` extension provides an approximate histogram aggregator and a fixed buckets histogram aggregator.

-## Approximate Histogram aggregator
+## Approximate Histogram aggregator (Deprecated)
+
+<div class="note caution">
+The Approximate Histogram aggregator is deprecated. Please use <a href="../extensions-core/datasketches-quantiles.html">DataSketches Quantiles</a> instead, which provides a superior distribution-independent algorithm with formal error guarantees.
+</div>

This aggregator is based on
[http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf)

@@ -76,7 +76,7 @@ This string can then be used in the native or sql Druid query.
|`type` |Filter Type. Should always be `bloom`|yes|
|`dimension` |The dimension to filter over. | yes |
|`bloomKFilter` |Base64 encoded Binary representation of `org.apache.hive.common.util.BloomKFilter`| yes |
-|`extractionFn`|[Extraction function](./../dimensionspecs.html#extraction-functions) to apply to the dimension values |no|
+|`extractionFn`|[Extraction function](../../querying/dimensionspecs.html#extraction-functions) to apply to the dimension values |no|

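As a sketch of how these fields fit together, a native bloom filter spec might look like the following (the dimension name is hypothetical, and the `bloomKFilter` value stands in for a real Base64-encoded serialized filter):

```json
{
  "type": "bloom",
  "dimension": "userId",
  "bloomKFilter": "BAAAAAgAAAA...",
  "extractionFn": null
}
```
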
### Serialized Format for BloomKFilter
@@ -129,7 +129,7 @@ for the query.
|-------------------------|------------------------------|----------------------------------|
|`type` |Aggregator Type. Should always be `bloom`|yes|
|`name` |Output field name |yes|
-|`field` |[DimensionSpec](./../dimensionspecs.html) to add to `org.apache.hive.common.util.BloomKFilter` | yes |
+|`field` |[DimensionSpec](../../querying/dimensionspecs.html) to add to `org.apache.hive.common.util.BloomKFilter` | yes |
|`maxNumEntries` |Maximum number of distinct values supported by `org.apache.hive.common.util.BloomKFilter`, default `1500`| no |

### Example

@@ -1,3 +1,8 @@
+---
+layout: doc_page
+title: "Kinesis Indexing Service"
+---
+
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
@@ -17,10 +22,6 @@
~ under the License.
-->

----
-layout: doc_page
----
-
# Kinesis Indexing Service

Similar to the [Kafka indexing service](./kafka-ingestion.html), the Kinesis indexing service enables the configuration of *supervisors* on the Overlord, which facilitate ingestion from

@@ -45,9 +45,9 @@ The following table contains optional parameters for supporting client certificate
|`druid.client.https.keyStorePath`|The file path or URL of the TLS/SSL Key store containing the client certificate that Druid will use when communicating with other Druid services. If this is null, the other properties in this table are ignored.|none|yes|
|`druid.client.https.keyStoreType`|The type of the key store.|none|yes|
|`druid.client.https.certAlias`|Alias of TLS client certificate in the keystore.|none|yes|
-|`druid.client.https.keyStorePassword`|The [Password Provider](../operations/password-provider.html) or String password for the Key Store.|none|no|
+|`druid.client.https.keyStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Key Store.|none|no|
|`druid.client.https.keyManagerFactoryAlgorithm`|Algorithm to use for creating KeyManager, more details [here](https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#KeyManager).|`javax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()`|no|
-|`druid.client.https.keyManagerPassword`|The [Password Provider](../operations/password-provider.html) or String password for the Key Manager.|none|no|
+|`druid.client.https.keyManagerPassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Key Manager.|none|no|
|`druid.client.https.validateHostnames`|Validate the hostname of the server. This should not be disabled unless you are using [custom TLS certificate checks](../../operations/tls-support.html#custom-tls-certificate-checks) and know that standard hostname validation is not needed.|true|no|

This [document](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html) lists all the possible

@@ -47,7 +47,7 @@ Core extensions are maintained by Druid committers.
|druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../development/extensions-core/caffeine-cache.html)|
|druid-datasketches|Support for approximate counts and set operations with [DataSketches](http://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
|druid-hdfs-storage|HDFS deep storage.|[link](../development/extensions-core/hdfs.html)|
-|druid-histogram|Approximate histograms and quantiles aggregator.|[link](../development/extensions-core/approximate-histograms.html)|
+|druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, please use the [DataSketches quantiles aggregator](../development/extensions-core/datasketches-quantiles.html) from the `druid-datasketches` extension instead.|[link](../development/extensions-core/approximate-histograms.html)|
|druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes (deprecated).|[link](../development/extensions-core/kafka-eight-firehose.html)|
|druid-kafka-extraction-namespace|Kafka-based namespaced lookup. Requires namespace lookup extension.|[link](../development/extensions-core/kafka-extraction-namespace.html)|
|druid-kafka-indexing-service|Supervised exactly-once Kafka ingestion for the indexing service.|[link](../development/extensions-core/kafka-ingestion.html)|

@@ -112,8 +112,8 @@ A sample ingest firehose spec is shown below -
SqlFirehoseFactory can be used to ingest events residing in an RDBMS. The database connection information is provided as part of the ingestion spec. For each query, the results are fetched locally and indexed. If there are multiple queries from which data needs to be indexed, queries are prefetched in the background up to `maxFetchCapacityBytes` bytes.

Requires one of the following extensions:
-* [MySQL Metadata Store](../ingestion/mysql.html).
-* [PostgreSQL Metadata Store](../ingestion/postgresql.html).
+* [MySQL Metadata Store](../development/extensions-core/mysql.html).
+* [PostgreSQL Metadata Store](../development/extensions-core/postgresql.html).

```json
{

@@ -54,7 +54,17 @@ which specifies a split and submits worker tasks using those specs. As a result,
the implementation of splittable firehoses. Please note that multiple tasks can be created for the same worker task spec
if one of them fails.

-Since this task doesn't shuffle intermediate data, it isn't available for [perfect rollup](../ingestion/index.html#roll-up-modes).
+You may want to consider the following points:
+- Since this task doesn't shuffle intermediate data, it isn't available for [perfect rollup](../ingestion/index.html#roll-up-modes).
+- The number of tasks for parallel ingestion is decided by `maxNumSubTasks` in the tuningConfig.
+  Since the supervisor task creates up to `maxNumSubTasks` worker tasks regardless of the available task slots,
+  it may affect the performance of other ingestion tasks. As a result, it's important to set `maxNumSubTasks` properly.
+  See the below [Capacity Planning](#capacity-planning) section for more details.
+- By default, batch ingestion replaces all data in any segment that it writes to. If you'd like to add to the segment
+  instead, set the appendToExisting flag in ioConfig. Note that it only replaces data in segments where it actively adds
+  data: if there are segments in your granularitySpec's intervals that have no data written by this task, they will be
+  left alone.

An example ingestion spec is:

@@ -122,16 +132,15 @@ An example ingestion spec is:
        "baseDir": "examples/indexing/",
        "filter": "wikipedia_index_data*"
      }
    },
+    "tuningConfig": {
+      "type": "index_parallel",
+      "maxNumSubTasks": 2
+    }
  }
}
```

-By default, batch ingestion replaces all data in any segment that it writes to. If you'd like to add to the segment
-instead, set the appendToExisting flag in ioConfig. Note that it only replaces data in segments where it actively adds
-data: if there are segments in your granularitySpec's intervals that have no data written by this task, they will be
-left alone.
-
#### Task Properties

|property|description|required?|

@@ -181,7 +190,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|reportParseExceptions|If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped.|false|no|
|pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.|0|no|
|segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [SegmentWriteOutMediumFactory](#segmentWriteOutMediumFactory).|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
-|maxNumSubTasks|Maximum number of tasks which can be run at the same time.|Integer.MAX_VALUE|no|
+|maxNumSubTasks|Maximum number of tasks which can be run at the same time. The supervisor task would spawn worker tasks up to `maxNumSubTasks` regardless of the available task slots. If this value is set to 1, the supervisor task processes data ingestion on its own instead of spawning worker tasks. If this value is set too large, too many worker tasks can be created, which might block other ingestion. Check [Capacity Planning](#capacity-planning) for more details.|1|no|
|maxRetry|Maximum number of retries on task failures.|3|no|
|taskStatusCheckPeriodMs|Polling period in milliseconds to check running task statuses.|1000|no|
|chatHandlerTimeout|Timeout for reporting the pushed segments in worker tasks.|PT10S|no|

@@ -372,7 +381,7 @@ An example of the result is
  "reportParseExceptions": false,
  "pushTimeout": 0,
  "segmentWriteOutMediumFactory": null,
- "maxNumSubTasks": 2147483647,
+ "maxNumSubTasks": 4,
  "maxRetry": 3,
  "taskStatusCheckPeriodMs": 1000,
  "chatHandlerTimeout": "PT10S",

@@ -408,6 +417,27 @@ An example of the result is

Returns the task attempt history of the worker task spec of the given id, or HTTP 404 Not Found error if the supervisor task is running in the sequential mode.

+### Capacity Planning
+
+The supervisor task can create up to `maxNumSubTasks` worker tasks no matter how many task slots are currently available.
+As a result, the total number of tasks which can run at the same time is `(maxNumSubTasks + 1)` (including the supervisor task).
+Please note that this can be even larger than the total number of task slots (sum of the capacity of all workers).
+If `maxNumSubTasks` is larger than `n (available task slots)`, then
+`maxNumSubTasks` tasks are created by the supervisor task, but only `n` tasks would be started.
+Others will wait in the pending state until any running task is finished.
+
+If you are using the Parallel Index Task together with stream ingestion,
+we recommend limiting the max capacity for batch ingestion to prevent
+stream ingestion from being blocked by batch ingestion. Suppose you have
+`t` Parallel Index Tasks to run at the same time, but want to limit
+the max number of tasks for batch ingestion to `b`. Then, (sum of `maxNumSubTasks`
+of all Parallel Index Tasks + `t` (for supervisor tasks)) must be smaller than `b`.
+
+If you have some tasks of a higher priority than others, you may set their
+`maxNumSubTasks` to a higher value than that of lower priority tasks.
+This may help the higher priority tasks to finish earlier than lower priority tasks
+by assigning more task slots to them.

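To make the arithmetic concrete, a purely illustrative example (numbers invented): with `b` = 10 task slots budgeted for batch ingestion and `t` = 2 concurrent Parallel Index Tasks, setting `maxNumSubTasks` = 3 on each gives 3 + 3 + 2 = 8, which is smaller than 10, so batch ingestion stays within its budget.
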
Local Index Task
----------------

@@ -88,7 +88,7 @@ The `errorMsg` field shows a message describing the error that caused a task to

### Row stats

-The non-parallel [Native Batch Task](../native_tasks.md), the Hadoop batch task, and the tasks created by the Kafka Indexing Service support retrieval of row stats while the task is running.
+The non-parallel [Native Batch Task](../ingestion/native_tasks.html), the Hadoop batch task, and the tasks created by the Kafka Indexing Service support retrieval of row stats while the task is running.

The live report can be accessed with a GET to the following URL on a Peon running a task:

@@ -133,7 +133,7 @@ An example report is shown below. The `movingAverages` section contains 1 minute
}
```

-Note that this is only supported by the non-parallel [Native Batch Task](../native_tasks.md), the Hadoop Batch task, and the tasks created by the Kafka Indexing Service.
+Note that this is only supported by the non-parallel [Native Batch Task](../ingestion/native_tasks.html), the Hadoop Batch task, and the tasks created by the Kafka Indexing Service.

For the Kafka Indexing Service, a GET to the following Overlord API will retrieve live row stat reports from each task being managed by the supervisor and provide a combined report.

@@ -149,4 +149,4 @@ Current lists of unparseable events can be retrieved from a running task with a
http://<middlemanager-host>:<worker-port>/druid/worker/v1/chat/<task-id>/unparseableEvents
```

-Note that this is only supported by the non-parallel [Native Batch Task](../native_tasks.md) and the tasks created by the Kafka Indexing Service.
+Note that this is only supported by the non-parallel [Native Batch Task](../ingestion/native_tasks.html) and the tasks created by the Kafka Indexing Service.

@@ -109,6 +109,7 @@ See javadoc of java.lang.Math for detailed explanation for each function.
|copysign|copysign(x, y) would return the first floating-point argument with the sign of the second floating-point argument|
|cos|cos(x) would return the trigonometric cosine of x|
|cosh|cosh(x) would return the hyperbolic cosine of x|
+|cot|cot(x) would return the trigonometric cotangent of an angle x|
|div|div(x,y) is integer division of x by y|
|exp|exp(x) would return Euler's number raised to the power of x|
|expm1|expm1(x) would return e^x-1|

@@ -122,6 +123,7 @@ See javadoc of java.lang.Math for detailed explanation for each function.
|min|min(x, y) would return the smaller of two values|
|nextafter|nextafter(x, y) would return the floating-point number adjacent to the x in the direction of the y|
|nextUp|nextUp(x) would return the floating-point value adjacent to x in the direction of positive infinity|
+|pi|pi would return the constant value of π|
|pow|pow(x, y) would return the value of the x raised to the power of y|
|remainder|remainder(x, y) would return the remainder operation on two arguments as prescribed by the IEEE 754 standard|
|rint|rint(x) would return value that is closest in value to x and is equal to a mathematical integer|

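To illustrate how these functions are typically invoked, here is a hypothetical expression virtual column (the name, input field, and expression are invented for illustration) that could accompany a native query:

```json
{
  "type": "expression",
  "name": "cosine_of_angle",
  "expression": "cos(pi * \"angleDegrees\" / 180)",
  "outputType": "DOUBLE"
}
```
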
@@ -33,8 +33,6 @@ The Coordinator loads a set of rules from the metadata storage. Rules may be spe

Note: It is recommended that the Coordinator console is used to configure rules. However, the Coordinator process does have HTTP endpoints to programmatically configure rules.

-When a rule is updated, the change may not be reflected until the next time the Coordinator runs. This will be fixed in the near future.
-
Load Rules
----------

@@ -37,9 +37,9 @@ and `druid.tlsPort` properties on each process. Please see `Configuration` secti
# Jetty Server TLS Configuration

Druid uses Jetty as an embedded web server. To get familiar with TLS/SSL in general and related concepts like Certificates etc.,
-reading this [Jetty documentation](http://www.eclipse.org/jetty/documentation/9.3.x/configuring-ssl.html) might be helpful.
+reading this [Jetty documentation](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html) might be helpful.
To get more in depth knowledge of TLS/SSL support in Java in general, please refer to this [guide](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html).
-The documentation [here](http://www.eclipse.org/jetty/documentation/9.3.x/configuring-ssl.html#configuring-sslcontextfactory)
+The documentation [here](http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html#configuring-sslcontextfactory)
can help in understanding TLS/SSL configurations listed below. This [document](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html) lists all the possible
values for the below mentioned configs, among others provided by the Java implementation.

@@ -58,7 +58,7 @@ The following table contains configuration options related to client certificate
|`druid.server.https.trustStoreType`|The type of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` is false.|`java.security.KeyStore.getDefaultType()`|no|
|`druid.server.https.trustStorePath`|The file path or URL of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` is false.|none|yes, only if `requireClientCertificate` is true|
|`druid.server.https.trustStoreAlgorithm`|Algorithm to be used by TrustManager to validate client certificate chains. Not needed if `requireClientCertificate` is false.|`javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()`|no|
-|`druid.server.https.trustStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Trust Store. Not needed if `requireClientCertificate` is false.|none|no|
+|`druid.server.https.trustStorePassword`|The [Password Provider](../operations/password-provider.html) or String password for the Trust Store. Not needed if `requireClientCertificate` is false.|none|no|
|`druid.server.https.validateHostnames`|If set to true, check that the client's hostname matches the CN/subjectAltNames in the client certificate. Not used if `requireClientCertificate` is false.|true|no|
|`druid.server.https.crlPath`|Specifies a path to a file containing static [Certificate Revocation Lists](https://en.wikipedia.org/wiki/Certificate_revocation_list), used to check if a client certificate has been revoked. Not used if `requireClientCertificate` is false.|null|no|

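For orientation, a hypothetical snippet combining several of the options above (the store type, path, and password are placeholder values, and whether a trust store is needed at all depends on `requireClientCertificate`):

```properties
# Illustrative values only
druid.server.https.trustStoreType=jks
druid.server.https.trustStorePath=/opt/druid/conf/truststore.jks
druid.server.https.trustStorePassword=changeit
druid.server.https.validateHostnames=true
```
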
@@ -277,9 +277,15 @@ The [DataSketches Theta Sketch](../development/extensions-core/datasketches-thet

The [DataSketches HLL Sketch](../development/extensions-core/datasketches-hll.html) extension-provided aggregator gives distinct count estimates using the HyperLogLog algorithm. The HLL Sketch is faster and requires less storage than the Theta Sketch, but does not support intersection or difference operations.

-#### Cardinality/HyperUnique
+#### Cardinality/HyperUnique (Deprecated)

-The [Cardinality and HyperUnique](../hll-old.html) aggregators are older aggregator implementations available by default in Druid that also provide distinct count estimates using the HyperLogLog algorithm. The newer [DataSketches HLL Sketch](../development/extensions-core/datasketches-hll.html) extension-provided aggregator has superior accuracy and performance and is recommended instead.
+<div class="note caution">
+The Cardinality and HyperUnique aggregators are deprecated. Please use <a href="../development/extensions-core/datasketches-hll.html">DataSketches HLL Sketch</a> instead.
+</div>
+
+The [Cardinality and HyperUnique](../querying/hll-old.html) aggregators are older aggregator implementations available by default in Druid that also provide distinct count estimates using the HyperLogLog algorithm. The newer [DataSketches HLL Sketch](../development/extensions-core/datasketches-hll.html) extension-provided aggregator has superior accuracy and performance and is recommended instead.

The DataSketches team has published a [comparison study](https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html) between Druid's original HLL algorithm and the DataSketches HLL algorithm. Based on the demonstrated advantages of the DataSketches implementation, we have deprecated Druid's original HLL aggregator.

Please note that DataSketches HLL aggregators and `hyperUnique` aggregators are not mutually compatible.

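As a point of reference, a minimal sketch of the extension-provided HLL aggregator in a native query (the output and input field names are hypothetical):

```json
{
  "type": "HLLSketchBuild",
  "name": "distinct_users",
  "fieldName": "user_id"
}
```
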
@@ -289,9 +295,38 @@ Please note that DataSketches HLL aggregators and `hyperUnique` aggregators are

The [DataSketches Quantiles Sketch](../development/extensions-core/datasketches-quantiles.html) extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the [datasketches](http://datasketches.github.io/) library.

-#### Approximate Histogram
+We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy.

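As a concrete sketch (field names hypothetical; the sketch's `k` parameter left at its default), the aggregator might be declared as:

```json
{
  "type": "quantilesDoublesSketch",
  "name": "latency_sketch",
  "fieldName": "latency_ms"
}
```
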
-The [Approximate Histogram](../development/extensions-core/approxiate-histograms.html) extension-provided aggregator also provides quantile estimates and histogram approximations, based on [http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf).
+#### Moments Sketch (Experimental)
+
+The [Moments Sketch](../development/extensions-contrib/momentsketch-quantiles.html) extension-provided aggregator is an experimental aggregator that provides quantile estimates using the [Moments Sketch](https://github.com/stanford-futuredata/momentsketch).
+
+The Moments Sketch aggregator is provided as an experimental option. It is optimized for merging speed and it can have higher aggregation performance compared to the DataSketches quantiles aggregator. However, the accuracy of the Moments Sketch is distribution-dependent, so users will need to empirically verify that the aggregator is suitable for their input data.
+
+As a general guideline for experimentation, the [Moments Sketch paper](https://arxiv.org/pdf/1803.01969.pdf) points out that this algorithm works better on inputs with high entropy. In particular, the algorithm is not a good fit when the input data consists of a small number of clustered discrete values.
+
+#### Fixed Buckets Histogram
+
+Druid also provides a [simple histogram implementation](../development/extensions-core/approximate-histograms.html#fixed-buckets-histogram) that uses a fixed range and fixed number of buckets with support for quantile estimation, backed by an array of bucket count values.
+
+The fixed buckets histogram can perform well when the distribution of the input data allows a small number of buckets to be used.
+
+We do not recommend the fixed buckets histogram for general use, as its usefulness is extremely data dependent. However, it is made available for users that have already identified use cases where a fixed buckets histogram is suitable.
+
+#### Approximate Histogram (Deprecated)
+
+The [Approximate Histogram](../development/extensions-core/approximate-histograms.html) extension-provided aggregator also provides quantile estimates and histogram approximations, based on [http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf).
+
+The algorithm used by this deprecated aggregator is highly distribution-dependent and its output is subject to serious distortions when the input does not fit within the algorithm's limitations.
+
+A [study published by the DataSketches team](https://datasketches.github.io/docs/Quantiles/DruidApproxHistogramStudy.html) demonstrates some of the known failure modes of this algorithm:
+- The algorithm's quantile calculations can fail to provide results for a large range of rank values (all ranks less than 0.89 in the example used in the study), returning all zeroes instead.
+- The algorithm can completely fail to record spikes in the tail ends of the distribution.
+- In general, the histogram produced by the algorithm can deviate significantly from the true histogram, with no bounds on the errors.
+
+It is not possible to determine a priori how well this aggregator will behave for a given input stream, nor does the aggregator provide any indication that serious distortions are present in the output.
+
+For these reasons, we have deprecated this aggregator and do not recommend its use.

## Miscellaneous Aggregations

@@ -119,10 +119,10 @@ Only the COUNT aggregation can accept DISTINCT.
|`MAX(expr)`|Takes the maximum of numbers.|
|`AVG(expr)`|Averages numbers.|
|`APPROX_COUNT_DISTINCT(expr)`|Counts distinct values of expr, which can be a regular column or a hyperUnique column. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`.|
-|`APPROX_COUNT_DISTINCT_DS_HLL(expr, [lgK, tgtHllType])`|Counts distinct values of expr, which can be a regular column or an [HLL sketch](../development/extensions-core/datasketches-hll.html) column. The `lgK` and `tgtHllType` parameters are described in the HLL sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extensions.html) must be loaded to use this function.|
-|`APPROX_COUNT_DISTINCT_DS_THETA(expr, [size])`|Counts distinct values of expr, which can be a regular column or a [Theta sketch](../development/extensions-core/datasketches-theta.html) column. The `size` parameter is described in the Theta sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extensions.html) must be loaded to use this function.|
+|`APPROX_COUNT_DISTINCT_DS_HLL(expr, [lgK, tgtHllType])`|Counts distinct values of expr, which can be a regular column or an [HLL sketch](../development/extensions-core/datasketches-hll.html) column. The `lgK` and `tgtHllType` parameters are described in the HLL sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use this function.|
+|`APPROX_COUNT_DISTINCT_DS_THETA(expr, [size])`|Counts distinct values of expr, which can be a regular column or a [Theta sketch](../development/extensions-core/datasketches-theta.html) column. The `size` parameter is described in the Theta sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use this function.|
|`APPROX_QUANTILE(expr, probability, [resolution])`|Computes approximate quantiles on numeric or [approxHistogram](../development/extensions-core/approximate-histograms.html#approximate-histogram-aggregator) exprs. The "probability" should be between 0 and 1 (exclusive). The "resolution" is the number of centroids to use for the computation. Higher resolutions will give more precise results but also have higher overhead. If not provided, the default resolution is 50. The [approximate histogram extension](../development/extensions-core/approximate-histograms.html) must be loaded to use this function.|
-|`APPROX_QUANTILE_DS(expr, probability, [k])`|Computes approximate quantiles on numeric or [Quantiles sketch](../development/extensions-core/datasketches-quantiles.html) exprs. The "probability" should be between 0 and 1 (exclusive). The `k` parameter is described in the Quantiles sketch documentation. The [DataSketches extension](../development/extensions-core/datasketches-extensions.html) must be loaded to use this function.|
+|`APPROX_QUANTILE_DS(expr, probability, [k])`|Computes approximate quantiles on numeric or [Quantiles sketch](../development/extensions-core/datasketches-quantiles.html) exprs. The "probability" should be between 0 and 1 (exclusive). The `k` parameter is described in the Quantiles sketch documentation. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use this function.|
|`APPROX_QUANTILE_FIXED_BUCKETS(expr, probability, numBuckets, lowerLimit, upperLimit, [outlierHandlingMode])`|Computes approximate quantiles on numeric or [fixed buckets histogram](../development/extensions-core/approximate-histograms.html#fixed-buckets-histogram) exprs. The "probability" should be between 0 and 1 (exclusive). The `numBuckets`, `lowerLimit`, `upperLimit`, and `outlierHandlingMode` parameters are described in the fixed buckets histogram documentation. The [approximate histogram extension](../development/extensions-core/approximate-histograms.html) must be loaded to use this function.|
|`BLOOM_FILTER(expr, numEntries)`|Computes a bloom filter from values produced by `expr`, with `numEntries` maximum number of distinct values before false positive rate increases. See [bloom filter extension](../development/extensions-core/bloom-filter.html) documentation for additional details.|

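To make the table concrete, here is a hypothetical query (the table and column names are invented) combining two of these functions, assuming the relevant extensions are loaded:

```sql
SELECT
  APPROX_COUNT_DISTINCT_DS_HLL("user_id") AS unique_users,
  APPROX_QUANTILE_DS("latency_ms", 0.95) AS p95_latency
FROM "web_requests"
```
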
@@ -147,6 +147,14 @@ Numeric functions will return 64 bit integers or 64 bit floats, depending on the
|`x * y`|Multiplication.|
|`x / y`|Division.|
|`MOD(x, y)`|Modulo (remainder of x divided by y).|
+|`SIN(expr)`|Trigonometric sine of an angle expr.|
+|`COS(expr)`|Trigonometric cosine of an angle expr.|
+|`TAN(expr)`|Trigonometric tangent of an angle expr.|
+|`COT(expr)`|Trigonometric cotangent of an angle expr.|
+|`ASIN(expr)`|Arc sine of expr.|
+|`ACOS(expr)`|Arc cosine of expr.|
+|`ATAN(expr)`|Arc tangent of expr.|
+|`ATAN2(y, x)`|Angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).|

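For instance, a small sanity-check query using two of the trigonometric functions (a sketch, assuming your deployment accepts FROM-less SELECT statements):

```sql
SELECT COS(0) AS should_be_one, ATAN2(1, 1) AS should_be_pi_over_4
```
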
### String functions

@@ -660,7 +668,7 @@ check out [ingestion tasks](#../ingestion/tasks.html)
|Column|Notes|
|------|-----|
|task_id|Unique task identifier|
-|type|Task type, for example this value is "index" for indexing tasks. See [tasks-overview](../ingestion/tasks.md)|
+|type|Task type, for example this value is "index" for indexing tasks. See [tasks-overview](../ingestion/tasks.html)|
|datasource|Datasource name being indexed|
|created_time|Timestamp in ISO8601 format corresponding to when the ingestion task was created. Note that this value is populated for completed and waiting tasks. For running and pending tasks this value is set to 1970-01-01T00:00:00Z|
|queue_insertion_time|Timestamp in ISO8601 format corresponding to when this task was added to the queue on the Overlord|

@@ -0,0 +1,122 @@
+---
+layout: doc_page
+title: "Configuring Druid to use a Kerberized Hadoop as Deep Storage"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements. See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership. The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License. You may obtain a copy of the License at
+  ~
+  ~ http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied. See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Configuring Druid to use a Kerberized Hadoop as Deep Storage
+
+## Hadoop Setup
+
+Following are the configuration files required to be copied over to the Druid conf folders:
+
+1. For HDFS as a deep storage, hdfs-site.xml, core-site.xml
+2. For ingestion, mapred-site.xml, yarn-site.xml
+
+### HDFS Folders and permissions
+
+1. Choose any folder name for the druid deep storage, for example 'druid'
+2. Create the folder in hdfs under the required parent folder. For example,
+`hdfs dfs -mkdir /druid`
+OR
+`hdfs dfs -mkdir /apps/druid`
+
+3. Give the druid processes appropriate permissions to access this folder. This would ensure that druid is able to create necessary folders like data and indexing_log in HDFS.
+For example, if druid processes run as user 'root', then
+
+`hdfs dfs -chown root:root /apps/druid`
+
+OR
+
+`hdfs dfs -chmod 777 /apps/druid`
+
+Druid creates the necessary sub-folders to store data and indexes under this newly created folder.
+
+## Druid Setup
+
+Edit common.runtime.properties at conf/druid/_common/common.runtime.properties to include the HDFS properties. The folders used for the location are the same as the ones used in the example above.
+
+### common_runtime_properties
+
+```properties
+# Deep storage
+#
+# For HDFS:
+druid.storage.type=hdfs
+druid.storage.storageDirectory=/druid/segments
+# OR
+# druid.storage.storageDirectory=/apps/druid/segments
+
+#
+# Indexing service logs
+#
+
+# For HDFS:
+druid.indexer.logs.type=hdfs
+druid.indexer.logs.directory=/druid/indexing-logs
+# OR
+# druid.indexer.logs.directory=/apps/druid/indexing-logs
+```
+
+Note: Comment out the local storage and S3 storage parameters in the file.
+
+Also include the hdfs-storage core extension in conf/druid/_common/common.runtime.properties.
+
+```properties
+#
+# Extensions
+#
+
+druid.extensions.directory=dist/druid/extensions
+druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
+druid.extensions.loadList=["druid-parser-route", "mysql-metadata-storage", "druid-hdfs-storage", "druid-kerberos"]
+```
+### Hadoop Jars
+
+Ensure that Druid has the necessary jars to support the Hadoop version.
+
+Find the Hadoop version using the command `hadoop version`.
+
+In case there is other software used with Hadoop, like WanDisco, ensure that
+1. the necessary libraries are available
+2. the requisite extensions are added to `druid.extensions.loadList` in `conf/druid/_common/common.runtime.properties`
+
+### Kerberos setup
+
+Create a headless keytab which would have access to the druid data and index.
+
+Edit conf/druid/_common/common.runtime.properties and add the following properties:
+
+```properties
+druid.hadoop.security.kerberos.principal
+druid.hadoop.security.kerberos.keytab
+```
+
+For example
+
+```properties
+druid.hadoop.security.kerberos.principal=hdfs-test@EXAMPLE.IO
+druid.hadoop.security.kerberos.keytab=/etc/security/keytabs/hdfs.headless.keytab
+```
+
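Before restarting, it can help to verify the keytab outside of Druid. A hypothetical check using standard Kerberos tooling (principal and path copied from the example above; adjust to your environment):

```bash
# List the principals stored in the keytab, then obtain a ticket with it
klist -kt /etc/security/keytabs/hdfs.headless.keytab
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-test@EXAMPLE.IO
```
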
+### Restart Druid Services
+
+With the above changes, restart Druid. This would ensure that Druid works with the Kerberized Hadoop.

File diff suppressed because it is too large

@@ -42,13 +42,13 @@ import org.apache.druid.query.dimension.DefaultDimensionSpec;
import org.apache.druid.query.dimension.DimensionSpec;
import org.apache.druid.segment.VirtualColumn;
import org.apache.druid.segment.column.ValueType;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.sql.calcite.aggregation.Aggregation;
import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
import org.apache.druid.sql.calcite.expression.DruidExpression;
import org.apache.druid.sql.calcite.expression.Expressions;
-import org.apache.druid.sql.calcite.planner.Calcites;
import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
import org.apache.druid.sql.calcite.table.RowSignature;

import javax.annotation.Nullable;
@@ -71,7 +71,7 @@ public class HllSketchSqlAggregator implements SqlAggregator
  @Override
  public Aggregation toDruidAggregation(
      PlannerContext plannerContext,
-      RowSignature rowSignature,
+      DruidQuerySignature querySignature,
      RexBuilder rexBuilder,
      String name,
      AggregateCall aggregateCall,
@@ -80,6 +80,7 @@ public class HllSketchSqlAggregator implements SqlAggregator
      boolean finalizeAggregations
  )
  {
+    final RowSignature rowSignature = querySignature.getRowSignature();
    // Don't use Aggregations.getArgumentsForSimpleAggregator, since it won't let us use direct column access
    // for string columns.
    final RexNode columnRexNode = Expressions.fromFieldAccess(
@@ -147,10 +148,10 @@ public class HllSketchSqlAggregator implements SqlAggregator
    if (columnArg.isDirectColumnAccess()) {
      dimensionSpec = columnArg.getSimpleExtraction().toDimensionSpec(null, inputType);
    } else {
-      final ExpressionVirtualColumn virtualColumn = columnArg.toVirtualColumn(
-          Calcites.makePrefixedName(name, "v"),
-          inputType,
-          plannerContext.getExprMacroTable()
+      VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+          plannerContext,
+          columnArg,
+          sqlTypeName
      );
      dimensionSpec = new DefaultDimensionSpec(virtualColumn.getOutputName(), null, inputType);
      virtualColumns.add(virtualColumn);

@@ -38,13 +38,13 @@ import org.apache.druid.query.aggregation.datasketches.quantiles.DoublesSketchAg
import org.apache.druid.query.aggregation.datasketches.quantiles.DoublesSketchToQuantilePostAggregator;
import org.apache.druid.query.aggregation.post.FieldAccessPostAggregator;
import org.apache.druid.segment.VirtualColumn;
import org.apache.druid.segment.column.ValueType;
-import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.sql.calcite.aggregation.Aggregation;
import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
import org.apache.druid.sql.calcite.expression.DruidExpression;
import org.apache.druid.sql.calcite.expression.Expressions;
import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
import org.apache.druid.sql.calcite.table.RowSignature;

import javax.annotation.Nullable;
@@ -66,7 +66,7 @@ public class DoublesSketchSqlAggregator implements SqlAggregator
  @Override
  public Aggregation toDruidAggregation(
      final PlannerContext plannerContext,
-      final RowSignature rowSignature,
+      final DruidQuerySignature querySignature,
      final RexBuilder rexBuilder,
      final String name,
      final AggregateCall aggregateCall,
@@ -75,6 +75,7 @@ public class DoublesSketchSqlAggregator implements SqlAggregator
      final boolean finalizeAggregations
  )
  {
+    final RowSignature rowSignature = querySignature.getRowSignature();
    final DruidExpression input = Expressions.toDruidExpression(
        plannerContext,
        rowSignature,
@@ -178,10 +179,10 @@ public class DoublesSketchSqlAggregator implements SqlAggregator
          k
      );
    } else {
-      final ExpressionVirtualColumn virtualColumn = input.toVirtualColumn(
-          StringUtils.format("%s:v", name),
-          ValueType.FLOAT,
-          plannerContext.getExprMacroTable()
+      VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+          plannerContext,
+          input,
+          SqlTypeName.FLOAT
      );
      virtualColumns.add(virtualColumn);
      aggregatorFactory = new DoublesSketchAggregatorFactory(

@@ -41,13 +41,13 @@ import org.apache.druid.query.dimension.DefaultDimensionSpec;
import org.apache.druid.query.dimension.DimensionSpec;
import org.apache.druid.segment.VirtualColumn;
import org.apache.druid.segment.column.ValueType;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.sql.calcite.aggregation.Aggregation;
import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
import org.apache.druid.sql.calcite.expression.DruidExpression;
import org.apache.druid.sql.calcite.expression.Expressions;
-import org.apache.druid.sql.calcite.planner.Calcites;
import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
import org.apache.druid.sql.calcite.table.RowSignature;

import javax.annotation.Nullable;
@@ -70,7 +70,7 @@ public class ThetaSketchSqlAggregator implements SqlAggregator
  @Override
  public Aggregation toDruidAggregation(
      PlannerContext plannerContext,
-      RowSignature rowSignature,
+      DruidQuerySignature querySignature,
      RexBuilder rexBuilder,
      String name,
      AggregateCall aggregateCall,
@@ -79,6 +79,7 @@ public class ThetaSketchSqlAggregator implements SqlAggregator
      boolean finalizeAggregations
  )
  {
+    final RowSignature rowSignature = querySignature.getRowSignature();
    // Don't use Aggregations.getArgumentsForSimpleAggregator, since it won't let us use direct column access
    // for string columns.
    final RexNode columnRexNode = Expressions.fromFieldAccess(
@@ -135,10 +136,10 @@ public class ThetaSketchSqlAggregator implements SqlAggregator
    if (columnArg.isDirectColumnAccess()) {
      dimensionSpec = columnArg.getSimpleExtraction().toDimensionSpec(null, inputType);
    } else {
-      final ExpressionVirtualColumn virtualColumn = columnArg.toVirtualColumn(
-          Calcites.makePrefixedName(name, "v"),
-          inputType,
-          plannerContext.getExprMacroTable()
+      VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+          plannerContext,
+          columnArg,
+          sqlTypeName
      );
      dimensionSpec = new DefaultDimensionSpec(virtualColumn.getOutputName(), null, inputType);
      virtualColumns.add(virtualColumn);

@@ -246,13 +246,13 @@ public class HllSketchSqlAggregatorTest extends CalciteTestBase
        .granularity(Granularities.ALL)
        .virtualColumns(
            new ExpressionVirtualColumn(
-                "a3:v",
+                "v0",
                "substring(\"dim2\", 0, 1)",
                ValueType.STRING,
                TestExprMacroTable.INSTANCE
            ),
            new ExpressionVirtualColumn(
-                "a4:v",
+                "v1",
                "concat(substring(\"dim2\", 0, 1),'x')",
                ValueType.STRING,
                TestExprMacroTable.INSTANCE
@@ -278,13 +278,13 @@ public class HllSketchSqlAggregatorTest extends CalciteTestBase
            ),
            new HllSketchBuildAggregatorFactory(
                "a3",
-                "a3:v",
+                "v0",
                null,
                null
            ),
            new HllSketchBuildAggregatorFactory(
                "a4",
-                "a4:v",
+                "v1",
                null,
                null
            ),
@@ -330,7 +330,7 @@ public class HllSketchSqlAggregatorTest extends CalciteTestBase
        .setGranularity(Granularities.ALL)
        .setVirtualColumns(
            new ExpressionVirtualColumn(
-                "d0:v",
+                "v0",
                "timestamp_floor(\"__time\",'P1D',null,'UTC')",
                ValueType.LONG,
                TestExprMacroTable.INSTANCE
@@ -339,8 +339,8 @@ public class HllSketchSqlAggregatorTest extends CalciteTestBase
        .setDimensions(
            Collections.singletonList(
                new DefaultDimensionSpec(
-                    "d0:v",
-                    "d0",
+                    "v0",
+                    "v0",
                    ValueType.LONG
                )
            )

@@ -228,7 +228,7 @@ public class DoublesSketchSqlAggregatorTest extends CalciteTestBase
        .granularity(Granularities.ALL)
        .virtualColumns(
            new ExpressionVirtualColumn(
-                "a4:v",
+                "v0",
                "(\"m1\" * 2)",
                ValueType.FLOAT,
                TestExprMacroTable.INSTANCE
@@ -238,7 +238,7 @@ public class DoublesSketchSqlAggregatorTest extends CalciteTestBase
            new DoublesSketchAggregatorFactory("a0:agg", "m1", null),
            new DoublesSketchAggregatorFactory("a1:agg", "m1", 64),
            new DoublesSketchAggregatorFactory("a2:agg", "m1", 256),
-            new DoublesSketchAggregatorFactory("a4:agg", "a4:v", null),
+            new DoublesSketchAggregatorFactory("a4:agg", "v0", null),
            new FilteredAggregatorFactory(
                new DoublesSketchAggregatorFactory("a5:agg", "m1", null),
                new SelectorDimFilter("dim1", "abc", null)

@@ -246,13 +246,13 @@ public class ThetaSketchSqlAggregatorTest extends CalciteTestBase
        .granularity(Granularities.ALL)
        .virtualColumns(
            new ExpressionVirtualColumn(
-                "a3:v",
+                "v0",
                "substring(\"dim2\", 0, 1)",
                ValueType.STRING,
                TestExprMacroTable.INSTANCE
            ),
            new ExpressionVirtualColumn(
-                "a4:v",
+                "v1",
                "concat(substring(\"dim2\", 0, 1),'x')",
                ValueType.STRING,
                TestExprMacroTable.INSTANCE
@@ -282,7 +282,7 @@ public class ThetaSketchSqlAggregatorTest extends CalciteTestBase
            ),
            new SketchMergeAggregatorFactory(
                "a3",
-                "a3:v",
+                "v0",
                null,
                null,
                null,
@@ -290,7 +290,7 @@ public class ThetaSketchSqlAggregatorTest extends CalciteTestBase
            ),
            new SketchMergeAggregatorFactory(
                "a4",
-                "a4:v",
+                "v1",
                null,
                null,
                null,
@@ -337,7 +337,7 @@ public class ThetaSketchSqlAggregatorTest extends CalciteTestBase
        .setGranularity(Granularities.ALL)
        .setVirtualColumns(
            new ExpressionVirtualColumn(
-                "d0:v",
+                "v0",
                "timestamp_floor(\"__time\",'P1D',null,'UTC')",
                ValueType.LONG,
                TestExprMacroTable.INSTANCE
@@ -346,8 +346,8 @@ public class ThetaSketchSqlAggregatorTest extends CalciteTestBase
        .setDimensions(
            Collections.singletonList(
                new DefaultDimensionSpec(
-                    "d0:v",
-                    "d0",
+                    "v0",
+                    "v0",
                    ValueType.LONG
                )
            )

@@ -46,6 +46,7 @@ import org.apache.druid.sql.calcite.expression.DruidExpression;
import org.apache.druid.sql.calcite.expression.Expressions;
import org.apache.druid.sql.calcite.planner.Calcites;
import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
import org.apache.druid.sql.calcite.table.RowSignature;

import javax.annotation.Nullable;
@@ -67,7 +68,7 @@ public class BloomFilterSqlAggregator implements SqlAggregator
  @Override
  public Aggregation toDruidAggregation(
      PlannerContext plannerContext,
-      RowSignature rowSignature,
+      DruidQuerySignature querySignature,
      RexBuilder rexBuilder,
      String name,
      AggregateCall aggregateCall,
@@ -76,6 +77,7 @@ public class BloomFilterSqlAggregator implements SqlAggregator
      boolean finalizeAggregations
  )
  {
+    final RowSignature rowSignature = querySignature.getRowSignature();
    final RexNode inputOperand = Expressions.fromFieldAccess(
        rowSignature,
        project,
@@ -166,10 +168,10 @@ public class BloomFilterSqlAggregator implements SqlAggregator
          input.getSimpleExtraction().getExtractionFn()
      );
    } else {
-      final ExpressionVirtualColumn virtualColumn = input.toVirtualColumn(
-          StringUtils.format("%s:v", aggName),
-          valueType,
-          plannerContext.getExprMacroTable()
+      VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+          plannerContext,
+          input,
+          inputOperand.getType().getSqlTypeName()
      );
      virtualColumns.add(virtualColumn);
      spec = new DefaultDimensionSpec(virtualColumn.getOutputName(), virtualColumn.getOutputName());

@@ -33,12 +33,13 @@ import org.apache.druid.query.filter.BloomDimFilter;
import org.apache.druid.query.filter.BloomKFilter;
import org.apache.druid.query.filter.BloomKFilterHolder;
import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.segment.VirtualColumn;
import org.apache.druid.sql.calcite.expression.DirectOperatorConversion;
import org.apache.druid.sql.calcite.expression.DruidExpression;
import org.apache.druid.sql.calcite.expression.Expressions;
import org.apache.druid.sql.calcite.expression.OperatorConversions;
import org.apache.druid.sql.calcite.planner.PlannerContext;
-import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.sql.calcite.rel.DruidQuerySignature;

import javax.annotation.Nullable;
import java.io.IOException;
@@ -67,17 +68,17 @@ public class BloomFilterOperatorConversion extends DirectOperatorConversion
  @Override
  public DimFilter toDruidFilter(
      final PlannerContext plannerContext,
-      final RowSignature rowSignature,
+      final DruidQuerySignature querySignature,
      final RexNode rexNode
  )
  {
    final List<RexNode> operands = ((RexCall) rexNode).getOperands();
    final DruidExpression druidExpression = Expressions.toDruidExpression(
        plannerContext,
-        rowSignature,
+        querySignature.getRowSignature(),
        operands.get(0)
    );
-    if (druidExpression == null || !druidExpression.isSimpleExtraction()) {
+    if (druidExpression == null) {
      return null;
    }

@@ -100,8 +101,19 @@ public class BloomFilterOperatorConversion extends DirectOperatorConversion
          druidExpression.getSimpleExtraction().getExtractionFn()
      );
    } else {
-      // expression virtual columns not currently supported
-      return null;
+      VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+          plannerContext,
+          druidExpression,
+          operands.get(0).getType().getSqlTypeName()
+      );
+      if (virtualColumn == null) {
+        return null;
+      }
+      return new BloomDimFilter(
+          virtualColumn.getOutputName(),
+          holder,
+          null
+      );
    }
  }
}

@ -493,7 +493,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(
|
||||
new ExpressionVirtualColumn(
|
||||
"a0:agg:v",
|
||||
"v0",
|
||||
"(\"l1\" * 2)",
|
||||
ValueType.LONG,
|
||||
TestExprMacroTable.INSTANCE
|
||||
|
@ -503,7 +503,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
ImmutableList.of(
|
||||
new BloomFilterAggregatorFactory(
|
||||
"a0:agg",
|
||||
new DefaultDimensionSpec("a0:agg:v", "a0:agg:v"),
|
||||
new DefaultDimensionSpec("v0", "v0"),
|
||||
TEST_NUM_ENTRIES
|
||||
)
|
||||
)
|
||||
|
@ -556,7 +556,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(
|
||||
new ExpressionVirtualColumn(
|
||||
"a0:agg:v",
|
||||
"v0",
|
||||
"(\"f1\" * 2)",
|
||||
ValueType.FLOAT,
|
||||
TestExprMacroTable.INSTANCE
|
||||
|
@ -566,7 +566,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
ImmutableList.of(
|
||||
new BloomFilterAggregatorFactory(
|
||||
"a0:agg",
|
||||
new DefaultDimensionSpec("a0:agg:v", "a0:agg:v"),
|
||||
new DefaultDimensionSpec("v0", "v0"),
|
||||
TEST_NUM_ENTRIES
|
||||
)
|
||||
)
|
||||
|
@ -619,7 +619,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(
|
||||
new ExpressionVirtualColumn(
|
||||
"a0:agg:v",
|
||||
"v0",
|
||||
"(\"d1\" * 2)",
|
||||
ValueType.DOUBLE,
|
||||
TestExprMacroTable.INSTANCE
|
||||
|
@ -629,7 +629,7 @@ public class BloomFilterSqlAggregatorTest
|
|||
ImmutableList.of(
|
||||
new BloomFilterAggregatorFactory(
|
||||
"a0:agg",
|
||||
new DefaultDimensionSpec("a0:agg:v", "a0:agg:v"),
|
||||
new DefaultDimensionSpec("v0", "v0"),
|
||||
TEST_NUM_ENTRIES
|
||||
)
|
||||
)
|
||||
|
|
|
@ -46,6 +46,7 @@ import org.apache.druid.query.filter.ExpressionDimFilter;
|
|||
import org.apache.druid.query.filter.OrDimFilter;
|
||||
import org.apache.druid.query.lookup.LookupReferencesManager;
|
||||
import org.apache.druid.segment.TestHelper;
|
||||
import org.apache.druid.segment.column.ValueType;
|
||||
import org.apache.druid.server.security.AuthenticationResult;
|
||||
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
|
||||
import org.apache.druid.sql.calcite.filtration.Filtration;
|
||||
|
@ -130,7 +131,7 @@ public class BloomDimFilterSqlTest extends BaseCalciteQueryTest
|
|||
}
|
||||
|
||||
@Test
|
||||
public void testBloomFilterVirtualColumn() throws Exception
|
||||
public void testBloomFilterExprFilter() throws Exception
|
||||
{
|
||||
BloomKFilter filter = new BloomKFilter(1500);
|
||||
filter.addString("a-foo");
|
||||
|
@ -141,17 +142,17 @@ public class BloomDimFilterSqlTest extends BaseCalciteQueryTest
|
|||
byte[] bytes = BloomFilterSerializersModule.bloomKFilterToBytes(filter);
|
||||
String base64 = StringUtils.encodeBase64String(bytes);
|
||||
|
||||
// fool the planner to make an expression virtual column to test bloom filter Druid expression
|
||||
testQuery(
|
||||
StringUtils.format("SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(concat(dim2, '-foo'), '%s')", base64),
|
||||
StringUtils.format("SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(concat(dim2, '-foo'), '%s') = TRUE", base64),
|
||||
ImmutableList.of(
|
||||
Druids.newTimeseriesQueryBuilder()
|
||||
.dataSource(CalciteTests.DATASOURCE1)
|
||||
.intervals(querySegmentSpec(Filtration.eternity()))
|
||||
.granularity(Granularities.ALL)
|
||||
.virtualColumns()
|
||||
.filters(
|
||||
new ExpressionDimFilter(
|
||||
StringUtils.format("bloom_filter_test(concat(\"dim2\",'-foo'),'%s')", base64),
|
||||
StringUtils.format("(bloom_filter_test(concat(\"dim2\",'-foo'),'%s') == 1)", base64),
|
||||
createExprMacroTable()
|
||||
)
|
||||
)
|
||||
|
@ -165,11 +166,41 @@ public class BloomDimFilterSqlTest extends BaseCalciteQueryTest
|
|||
);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testBloomFilterVirtualColumn() throws Exception
|
||||
{
|
||||
BloomKFilter filter = new BloomKFilter(1500);
|
||||
filter.addString("def-foo");
|
||||
byte[] bytes = BloomFilterSerializersModule.bloomKFilterToBytes(filter);
|
||||
String base64 = StringUtils.encodeBase64String(bytes);
|
||||
|
||||
testQuery(
|
||||
StringUtils.format("SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(concat(dim1, '-foo'), '%s')", base64),
|
||||
ImmutableList.of(
|
||||
Druids.newTimeseriesQueryBuilder()
|
||||
.dataSource(CalciteTests.DATASOURCE1)
|
||||
.intervals(querySegmentSpec(Filtration.eternity()))
|
||||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(expressionVirtualColumn("v0", "concat(\"dim1\",'-foo')", ValueType.STRING))
|
||||
.filters(
|
||||
new BloomDimFilter("v0", BloomKFilterHolder.fromBloomKFilter(filter), null)
|
||||
)
|
||||
.aggregators(aggregators(new CountAggregatorFactory("a0")))
|
||||
.context(TIMESERIES_CONTEXT_DEFAULT)
|
||||
.build()
|
||||
),
|
||||
ImmutableList.of(
|
||||
new Object[]{1L}
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
|
||||
@Test
|
||||
public void testBloomFilterVirtualColumnNumber() throws Exception
|
||||
{
|
||||
BloomKFilter filter = new BloomKFilter(1500);
|
||||
filter.addDouble(20.2);
|
||||
filter.addFloat(20.2f);
|
||||
byte[] bytes = BloomFilterSerializersModule.bloomKFilterToBytes(filter);
|
||||
String base64 = StringUtils.encodeBase64String(bytes);
|
||||
|
||||
|
@ -180,12 +211,11 @@ public class BloomDimFilterSqlTest extends BaseCalciteQueryTest
|
|||
.dataSource(CalciteTests.DATASOURCE1)
|
||||
.intervals(querySegmentSpec(Filtration.eternity()))
|
||||
.granularity(Granularities.ALL)
|
||||
.virtualColumns()
|
||||
.virtualColumns(
|
||||
expressionVirtualColumn("v0", "(2 * CAST(\"dim1\", 'DOUBLE'))", ValueType.FLOAT)
|
||||
)
|
||||
.filters(
|
||||
new ExpressionDimFilter(
|
||||
StringUtils.format("bloom_filter_test((2 * CAST(\"dim1\", 'DOUBLE')),'%s')", base64),
|
||||
createExprMacroTable()
|
||||
)
|
||||
new BloomDimFilter("v0", BloomKFilterHolder.fromBloomKFilter(filter), null)
|
||||
)
|
||||
.aggregators(aggregators(new CountAggregatorFactory("a0")))
|
||||
.context(TIMESERIES_CONTEXT_DEFAULT)
|
||||
|
|
|
@ -38,13 +38,13 @@ import org.apache.druid.query.aggregation.histogram.FixedBucketsHistogram;
|
|||
import org.apache.druid.query.aggregation.histogram.FixedBucketsHistogramAggregatorFactory;
|
||||
import org.apache.druid.query.aggregation.histogram.QuantilePostAggregator;
|
||||
import org.apache.druid.segment.VirtualColumn;
|
||||
import org.apache.druid.segment.column.ValueType;
|
||||
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
|
||||
import org.apache.druid.sql.calcite.aggregation.Aggregation;
|
||||
import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
|
||||
import org.apache.druid.sql.calcite.expression.DruidExpression;
|
||||
import org.apache.druid.sql.calcite.expression.Expressions;
|
||||
import org.apache.druid.sql.calcite.planner.PlannerContext;
|
||||
import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
|
||||
import org.apache.druid.sql.calcite.table.RowSignature;
|
||||
|
||||
import javax.annotation.Nullable;
|
||||
|
@ -66,7 +66,7 @@ public class FixedBucketsHistogramQuantileSqlAggregator implements SqlAggregator
|
|||
@Override
|
||||
public Aggregation toDruidAggregation(
|
||||
PlannerContext plannerContext,
|
||||
RowSignature rowSignature,
|
||||
DruidQuerySignature querySignature,
|
||||
RexBuilder rexBuilder,
|
||||
String name,
|
||||
AggregateCall aggregateCall,
|
||||
|
@ -75,6 +75,7 @@ public class FixedBucketsHistogramQuantileSqlAggregator implements SqlAggregator
|
|||
boolean finalizeAggregations
|
||||
)
|
||||
{
|
||||
final RowSignature rowSignature = querySignature.getRowSignature();
|
||||
final DruidExpression input = Expressions.toDruidExpression(
|
||||
plannerContext,
|
||||
rowSignature,
|
||||
|
@ -233,10 +234,10 @@ public class FixedBucketsHistogramQuantileSqlAggregator implements SqlAggregator
|
|||
outlierHandlingMode
|
||||
);
|
||||
} else {
|
||||
final ExpressionVirtualColumn virtualColumn = input.toVirtualColumn(
|
||||
StringUtils.format("%s:v", name),
|
||||
ValueType.FLOAT,
|
||||
plannerContext.getExprMacroTable()
|
||||
VirtualColumn virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
|
||||
plannerContext,
|
||||
input,
|
||||
SqlTypeName.FLOAT
|
||||
);
|
||||
virtualColumns.add(virtualColumn);
|
||||
aggregatorFactory = new FixedBucketsHistogramAggregatorFactory(
|
||||
|
|
|
@ -46,6 +46,7 @@ import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
|
|||
import org.apache.druid.sql.calcite.expression.DruidExpression;
|
||||
import org.apache.druid.sql.calcite.expression.Expressions;
|
||||
import org.apache.druid.sql.calcite.planner.PlannerContext;
|
||||
import org.apache.druid.sql.calcite.rel.DruidQuerySignature;
|
||||
import org.apache.druid.sql.calcite.table.RowSignature;
|
||||
|
||||
import javax.annotation.Nullable;
|
||||
|
@ -67,7 +68,7 @@ public class QuantileSqlAggregator implements SqlAggregator
|
|||
@Override
|
||||
public Aggregation toDruidAggregation(
|
||||
final PlannerContext plannerContext,
|
||||
final RowSignature rowSignature,
|
||||
DruidQuerySignature querySignature,
|
||||
final RexBuilder rexBuilder,
|
||||
final String name,
|
||||
final AggregateCall aggregateCall,
|
||||
|
@ -76,6 +77,7 @@ public class QuantileSqlAggregator implements SqlAggregator
|
|||
final boolean finalizeAggregations
|
||||
)
|
||||
{
|
||||
final RowSignature rowSignature = querySignature.getRowSignature();
|
||||
final DruidExpression input = Expressions.toDruidExpression(
|
||||
plannerContext,
|
||||
rowSignature,
|
||||
|
@ -193,11 +195,8 @@ public class QuantileSqlAggregator implements SqlAggregator
|
|||
);
|
||||
}
|
||||
} else {
|
||||
final ExpressionVirtualColumn virtualColumn = input.toVirtualColumn(
|
||||
StringUtils.format("%s:v", name),
|
||||
ValueType.FLOAT,
|
||||
plannerContext.getExprMacroTable()
|
||||
);
|
||||
final VirtualColumn virtualColumn =
|
||||
querySignature.getOrCreateVirtualColumnForExpression(plannerContext, input, SqlTypeName.FLOAT);
|
||||
virtualColumns.add(virtualColumn);
|
||||
aggregatorFactory = new ApproximateHistogramAggregatorFactory(
|
||||
histogramName,
|
||||
|
|
|
@ -231,7 +231,7 @@ public class FixedBucketsHistogramQuantileSqlAggregatorTest extends CalciteTestB
|
|||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(
|
||||
new ExpressionVirtualColumn(
|
||||
"a4:v",
|
||||
"v0",
|
||||
"(\"m1\" * 2)",
|
||||
ValueType.FLOAT,
|
||||
TestExprMacroTable.INSTANCE
|
||||
|
@ -242,7 +242,7 @@ public class FixedBucketsHistogramQuantileSqlAggregatorTest extends CalciteTestB
|
|||
"a0:agg", "m1", 20, 0.0d, 10.0d, FixedBucketsHistogram.OutlierHandlingMode.IGNORE
|
||||
),
|
||||
new FixedBucketsHistogramAggregatorFactory(
|
||||
"a4:agg", "a4:v", 40, 0.0d, 20.0d, FixedBucketsHistogram.OutlierHandlingMode.IGNORE
|
||||
"a4:agg", "v0", 40, 0.0d, 20.0d, FixedBucketsHistogram.OutlierHandlingMode.IGNORE
|
||||
),
|
||||
new FilteredAggregatorFactory(
|
||||
new FixedBucketsHistogramAggregatorFactory(
|
||||
|
|
|
@ -232,7 +232,7 @@ public class QuantileSqlAggregatorTest extends CalciteTestBase
|
|||
.granularity(Granularities.ALL)
|
||||
.virtualColumns(
|
||||
new ExpressionVirtualColumn(
|
||||
"a4:v",
|
||||
"v0",
|
||||
"(\"m1\" * 2)",
|
||||
ValueType.FLOAT,
|
||||
TestExprMacroTable.INSTANCE
|
||||
|
@ -241,7 +241,7 @@ public class QuantileSqlAggregatorTest extends CalciteTestBase
|
|||
.aggregators(ImmutableList.of(
|
||||
new ApproximateHistogramAggregatorFactory("a0:agg", "m1", null, null, null, null),
|
||||
new ApproximateHistogramAggregatorFactory("a2:agg", "m1", 200, null, null, null),
|
||||
new ApproximateHistogramAggregatorFactory("a4:agg", "a4:v", null, null, null, null),
|
||||
new ApproximateHistogramAggregatorFactory("a4:agg", "v0", null, null, null, null),
|
||||
new FilteredAggregatorFactory(
|
||||
new ApproximateHistogramAggregatorFactory("a5:agg", "m1", null, null, null, null),
|
||||
new SelectorDimFilter("dim1", "abc", null)
|
||||
|
|
|
@ -387,6 +387,13 @@ public abstract class HyperLogLogCollector implements Comparable<HyperLogLogColl

storageBuffer.duplicate().put(other.storageBuffer.asReadOnlyBuffer());

if (other.storageBuffer.remaining() != other.getNumBytesForDenseStorage()) {
// The other buffer was sparse, densify it
final int newLimit = storageBuffer.position() + other.storageBuffer.remaining();
storageBuffer.limit(newLimit);
convertToDenseStorage();
}

other = HyperLogLogCollector.makeCollector(tmpBuffer);
}

@ -22,6 +22,7 @@ package org.apache.druid.hll;
import com.google.common.collect.Collections2;
import com.google.common.collect.Lists;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hasher;
import com.google.common.hash.Hashing;
import org.apache.druid.java.util.common.StringUtils;
import org.apache.druid.java.util.common.logger.Logger;
@ -30,14 +31,17 @@ import org.junit.Ignore;
import org.junit.Test;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Predicate;

/**
*
*/
public class HyperLogLogCollectorTest
{
@ -45,6 +49,18 @@ public class HyperLogLogCollectorTest

private final HashFunction fn = Hashing.murmur3_128();

private static void fillBuckets(HyperLogLogCollector collector, byte startOffset, byte endOffset)
{
byte offset = startOffset;
while (offset <= endOffset) {
// fill buckets to shift registerOffset
for (short bucket = 0; bucket < 2048; ++bucket) {
collector.add(bucket, offset);
}
offset++;
}
}

@Test
public void testFolding()
{
@ -78,14 +94,13 @@ public class HyperLogLogCollectorTest
}
}


/**
* This is a very long running test, disabled by default.
* It is meant to catch issues when combining a large number of HLL objects.
*
* It compares adding all the values to one HLL vs.
* splitting up values into HLLs of 100 values each, and folding those HLLs into a single main HLL.
*
*
* When reaching very large cardinalities (>> 50,000,000), offsets are mismatched between the main HLL and the ones
* with 100 values, requiring a floating max as described in
* http://druid.io/blog/2014/02/18/hyperloglog-optimizations-for-real-world-systems.html
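
(A minimal sketch of the split-and-fold pattern the javadoc above describes, written only against HyperLogLogCollector calls that appear elsewhere in this diff; the total count of 1,000,000 and the 100-value batch size are illustrative assumptions, not part of the commit.)

HyperLogLogCollector main = HyperLogLogCollector.makeLatestCollector();
HyperLogLogCollector batch = HyperLogLogCollector.makeLatestCollector();
final HashFunction hashFn = Hashing.murmur3_128();
for (int i = 0; i < 1_000_000; i++) {
  if (i > 0 && i % 100 == 0) {
    // fold the finished 100-value HLL into the single main HLL, then start a fresh batch
    main = main.fold(batch);
    batch = HyperLogLogCollector.makeLatestCollector();
  }
  batch.add(hashFn.hashInt(i).asBytes());
}
main = main.fold(batch);
// while register offsets stay aligned, this estimate tracks the true count of 1,000,000
final double estimate = main.estimateCardinality();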
|
@ -502,7 +517,8 @@ public class HyperLogLogCollectorTest
|
|||
return retVal;
|
||||
}
|
||||
|
||||
@Ignore @Test // This test can help when finding potential combinations that are weird, but it's non-deterministic
|
||||
@Ignore
|
||||
@Test // This test can help when finding potential combinations that are weird, but it's non-deterministic
|
||||
public void testFoldingwithDifferentOffsets()
|
||||
{
|
||||
// final Random random = new Random(37); // this seed will cause this test to fail because of slightly larger errors
|
||||
|
@ -533,7 +549,8 @@ public class HyperLogLogCollectorTest
|
|||
}
|
||||
}
|
||||
|
||||
@Ignore @Test
|
||||
@Ignore
|
||||
@Test
|
||||
public void testFoldingwithDifferentOffsets2() throws Exception
|
||||
{
|
||||
final Random random = new Random(0);
|
||||
|
@ -707,6 +724,81 @@ public class HyperLogLogCollectorTest
|
|||
Assert.assertEquals(0, collector.getNumNonZeroRegisters());
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testRegisterSwapWithSparse()
|
||||
{
|
||||
final HyperLogLogCollector collector = HyperLogLogCollector.makeLatestCollector();
|
||||
// Skip the first bucket
|
||||
for (int i = 1; i < HyperLogLogCollector.NUM_BUCKETS; i++) {
|
||||
collector.add((short) i, (byte) 1);
|
||||
Assert.assertEquals(i, collector.getNumNonZeroRegisters());
|
||||
Assert.assertEquals(0, collector.getRegisterOffset());
|
||||
}
|
||||
Assert.assertEquals(
|
||||
15615.219683654448D,
|
||||
HyperLogLogCollector.makeCollector(collector.toByteBuffer().asReadOnlyBuffer())
|
||||
.estimateCardinality(),
|
||||
1e-5D
|
||||
);
|
||||
|
||||
final byte[] hash = new byte[10];
|
||||
hash[0] = 1; // Bucket 0, 1 offset of 0
|
||||
collector.add(hash);
|
||||
Assert.assertEquals(0, collector.getNumNonZeroRegisters());
|
||||
Assert.assertEquals(1, collector.getRegisterOffset());
|
||||
|
||||
// We have a REALLY bad distribution, Sketch as 0 is fine.
|
||||
Assert.assertEquals(
|
||||
0.0D,
|
||||
HyperLogLogCollector.makeCollector(collector.toByteBuffer().asReadOnlyBuffer())
|
||||
.estimateCardinality(),
|
||||
1e-5D
|
||||
);
|
||||
final ByteBuffer buffer = collector.toByteBuffer();
|
||||
Assert.assertEquals(collector.getNumHeaderBytes(), buffer.remaining());
|
||||
|
||||
final HyperLogLogCollector denseCollector = HyperLogLogCollector.makeLatestCollector();
|
||||
for (int i = 0; i < HyperLogLogCollector.NUM_BUCKETS - 1; i++) {
|
||||
denseCollector.add((short) i, (byte) 1);
|
||||
}
|
||||
|
||||
Assert.assertEquals(HyperLogLogCollector.NUM_BUCKETS - 1, denseCollector.getNumNonZeroRegisters());
|
||||
final HyperLogLogCollector folded = denseCollector.fold(HyperLogLogCollector.makeCollector(buffer));
|
||||
Assert.assertNotNull(folded.toByteBuffer());
|
||||
Assert.assertEquals(folded.getStorageBuffer().remaining(), denseCollector.getNumBytesForDenseStorage());
|
||||
}
|
||||
|
||||
// Example of a terrible sampling filter. Don't use this method
|
||||
@Test
|
||||
public void testCanFillUpOnMod()
|
||||
{
|
||||
final HashFunction fn = Hashing.murmur3_128();
|
||||
final HyperLogLogCollector hyperLogLogCollector = HyperLogLogCollector.makeLatestCollector();
|
||||
final byte[] b = new byte[10];
|
||||
b[0] = 1;
|
||||
hyperLogLogCollector.add(b);
|
||||
final Random random = new Random(347893248701078L);
|
||||
long loops = 0;
|
||||
// Do a 1% "sample" where the mod of the hash is 43
|
||||
final Predicate<Integer> pass = i -> {
|
||||
// ByteOrder.nativeOrder() on lots of systems is ByteOrder.LITTLE_ENDIAN
|
||||
final ByteBuffer bb = ByteBuffer.wrap(fn.hashInt(i).asBytes()).order(ByteOrder.LITTLE_ENDIAN);
|
||||
return (bb.getInt() % 100) == 43;
|
||||
};
|
||||
final long loopLimit = 1_000_000_000L;
|
||||
do {
|
||||
final int rnd = random.nextInt();
|
||||
if (!pass.test(rnd)) {
|
||||
continue;
|
||||
}
|
||||
final Hasher hasher = fn.newHasher();
|
||||
hasher.putInt(rnd);
|
||||
hyperLogLogCollector.add(hasher.hash().asBytes());
|
||||
} while (hyperLogLogCollector.getNumNonZeroRegisters() > 0 && ++loops < loopLimit);
|
||||
Assert.assertNotEquals(loopLimit, loops);
|
||||
Assert.assertEquals(hyperLogLogCollector.getNumHeaderBytes(), hyperLogLogCollector.toByteBuffer().remaining());
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testMergeMaxOverflow()
|
||||
{
|
||||
|
@ -736,19 +828,6 @@ public class HyperLogLogCollectorTest
|
|||
Assert.assertEquals(67, collector.getMaxOverflowValue());
|
||||
}
|
||||
|
||||
|
||||
private static void fillBuckets(HyperLogLogCollector collector, byte startOffset, byte endOffset)
|
||||
{
|
||||
byte offset = startOffset;
|
||||
while (offset <= endOffset) {
|
||||
// fill buckets to shift registerOffset
|
||||
for (short bucket = 0; bucket < 2048; ++bucket) {
|
||||
collector.add(bucket, offset);
|
||||
}
|
||||
offset++;
|
||||
}
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testFoldOrder()
|
||||
{
|
||||
|
|
|
@ -405,7 +405,16 @@ public class IndexTask extends AbstractTask implements ChatHandler
try {
if (chatHandlerProvider.isPresent()) {
log.info("Found chat handler of class[%s]", chatHandlerProvider.get().getClass().getName());
chatHandlerProvider.get().register(getId(), this, false);

if (chatHandlerProvider.get().get(getId()).isPresent()) {
// This is a workaround for ParallelIndexSupervisorTask to avoid double registering when it runs in the
// sequential mode. See ParallelIndexSupervisorTask.runSequential().
// Note that all HTTP endpoints are not available in this case. This works only for
// ParallelIndexSupervisorTask because it doesn't support APIs for live ingestion reports.
log.warn("Chat handler is already registered. Skipping chat handler registration.");
} else {
chatHandlerProvider.get().register(getId(), this, false);
}
} else {
log.warn("No chat handler detected");
}

@ -256,13 +256,23 @@ public class ParallelIndexSupervisorTask extends AbstractTask implements ChatHan
chatHandlerProvider.register(getId(), this, false);

try {
if (baseFirehoseFactory.isSplittable()) {
if (isParallelMode()) {
return runParallel(toolbox);
} else {
log.warn(
"firehoseFactory[%s] is not splittable. Running sequentially",
baseFirehoseFactory.getClass().getSimpleName()
);
if (!baseFirehoseFactory.isSplittable()) {
log.warn(
"firehoseFactory[%s] is not splittable. Running sequentially.",
baseFirehoseFactory.getClass().getSimpleName()
);
} else if (ingestionSchema.getTuningConfig().getMaxNumSubTasks() == 1) {
log.warn(
"maxNumSubTasks is 1. Running sequentially. "
+ "Please set maxNumSubTasks to something higher than 1 if you want to run in parallel ingestion mode."
);
} else {
throw new ISE("Unknown reason for sequential mode. Failing this task.");
}

return runSequential(toolbox);
}
}
@ -271,6 +281,15 @@ public class ParallelIndexSupervisorTask extends AbstractTask implements ChatHan
}
}

private boolean isParallelMode()
{
if (baseFirehoseFactory.isSplittable() && ingestionSchema.getTuningConfig().getMaxNumSubTasks() > 1) {
return true;
} else {
return false;
}
}

@VisibleForTesting
void setToolbox(TaskToolbox toolbox)
{
@ -280,7 +299,7 @@ public class ParallelIndexSupervisorTask extends AbstractTask implements ChatHan
private TaskStatus runParallel(TaskToolbox toolbox) throws Exception
{
createRunner(toolbox);
return TaskStatus.fromCode(getId(), runner.run());
return TaskStatus.fromCode(getId(), Preconditions.checkNotNull(runner, "runner").run());
}

private TaskStatus runSequential(TaskToolbox toolbox)
@ -479,11 +498,7 @@ public class ParallelIndexSupervisorTask extends AbstractTask implements ChatHan
public Response getMode(@Context final HttpServletRequest req)
{
IndexTaskUtils.datasourceAuthorizationCheck(req, Action.READ, getDataSource(), authorizerMapper);
if (runner == null) {
return Response.status(Response.Status.SERVICE_UNAVAILABLE).entity("task is not running yet").build();
} else {
return Response.ok(baseFirehoseFactory.isSplittable() ? "parallel" : "sequential").build();
}
return Response.ok(isParallelMode() ? "parallel" : "sequential").build();
}

@GET

@ -22,6 +22,7 @@ package org.apache.druid.indexing.common.task.batch.parallel;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.google.common.base.Preconditions;
import org.apache.druid.indexing.common.task.IndexTask.IndexTuningConfig;
import org.apache.druid.segment.IndexSpec;
import org.apache.druid.segment.writeout.SegmentWriteOutMediumFactory;
@ -34,7 +35,7 @@ import java.util.Objects;
@JsonTypeName("index_parallel")
public class ParallelIndexTuningConfig extends IndexTuningConfig
{
private static final int DEFAULT_MAX_NUM_BATCH_TASKS = Integer.MAX_VALUE; // unlimited
private static final int DEFAULT_MAX_NUM_BATCH_TASKS = 1;
private static final int DEFAULT_MAX_RETRY = 3;
private static final long DEFAULT_TASK_STATUS_CHECK_PERIOD_MS = 1000;

@ -131,6 +132,8 @@ public class ParallelIndexTuningConfig extends IndexTuningConfig

this.chatHandlerTimeout = DEFAULT_CHAT_HANDLER_TIMEOUT;
this.chatHandlerNumRetries = DEFAULT_CHAT_HANDLER_NUM_RETRIES;

Preconditions.checkArgument(this.maxNumSubTasks > 0, "maxNumSubTasks must be positive");
}

@JsonProperty

@ -294,22 +294,6 @@ public class AbstractParallelIndexSupervisorTaskTest extends IngestionTestBase
new DropwizardRowIngestionMetersFactory()
);
}

@Override
public TaskStatus run(TaskToolbox toolbox) throws Exception
{
return TaskStatus.fromCode(
getId(),
new TestParallelIndexTaskRunner(
toolbox,
getId(),
getGroupId(),
getIngestionSchema(),
getContext(),
new NoopIndexingServiceClient()
).run()
);
}
}

static class TestParallelIndexTaskRunner extends SinglePhaseParallelIndexTaskRunner

@ -281,9 +281,8 @@ public class ParallelIndexSupervisorTaskKillTest extends AbstractParallelIndexSu
}

@Override
public TaskStatus run(TaskToolbox toolbox) throws Exception
ParallelIndexTaskRunner createRunner(TaskToolbox toolbox)
{
setToolbox(toolbox);
setRunner(
new TestRunner(
toolbox,
@ -291,10 +290,7 @@ public class ParallelIndexSupervisorTaskKillTest extends AbstractParallelIndexSu
indexingServiceClient
)
);
return TaskStatus.fromCode(
getId(),
getRunner().run()
);
return getRunner();
}
}

@ -500,7 +500,7 @@ public class ParallelIndexSupervisorTaskResourceTest extends AbstractParallelInd
}

@Override
public TaskStatus run(TaskToolbox toolbox) throws Exception
ParallelIndexTaskRunner createRunner(TaskToolbox toolbox)
{
setRunner(
new TestRunner(
@ -509,10 +509,7 @@ public class ParallelIndexSupervisorTaskResourceTest extends AbstractParallelInd
indexingServiceClient
)
);
return TaskStatus.fromCode(
getId(),
getRunner().run()
);
return getRunner();
}
}

@ -24,7 +24,6 @@ import org.apache.druid.data.input.FiniteFirehoseFactory;
import org.apache.druid.data.input.InputSplit;
import org.apache.druid.data.input.impl.StringInputRowParser;
import org.apache.druid.indexer.TaskState;
import org.apache.druid.indexer.TaskStatus;
import org.apache.druid.indexing.common.TaskToolbox;
import org.apache.druid.indexing.common.actions.TaskActionClient;
import org.apache.druid.indexing.common.task.TaskResource;
@ -229,10 +228,87 @@ public class ParallelIndexSupervisorTaskTest extends AbstractParallelIndexSuperv
Assert.assertEquals(TaskState.SUCCESS, task.run(toolbox).getStatusCode());
}

@Test
public void testWith1MaxNumSubTasks() throws Exception
{
final ParallelIndexSupervisorTask task = newTask(
Intervals.of("2017/2018"),
new ParallelIndexIOConfig(
new LocalFirehoseFactory(inputDir, "test_*", null),
false
),
new ParallelIndexTuningConfig(
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
1,
null,
null,
null,
null,
null,
null,
null
)
);
actionClient = createActionClient(task);
toolbox = createTaskToolbox(task);

prepareTaskForLocking(task);
Assert.assertTrue(task.isReady(actionClient));
Assert.assertEquals(TaskState.SUCCESS, task.run(toolbox).getStatusCode());
Assert.assertNull("Runner must be null if the task was in the sequential mode", task.getRunner());
}

private ParallelIndexSupervisorTask newTask(
Interval interval,
ParallelIndexIOConfig ioConfig
)
{
return newTask(
interval,
ioConfig,
new ParallelIndexTuningConfig(
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
2,
null,
null,
null,
null,
null,
null,
null
)
);
}

private ParallelIndexSupervisorTask newTask(
Interval interval,
ParallelIndexIOConfig ioConfig,
ParallelIndexTuningConfig tuningConfig
)
{
// set up ingestion spec
final ParallelIndexIngestionSpec ingestionSpec = new ParallelIndexIngestionSpec(
@ -257,29 +333,7 @@ public class ParallelIndexSupervisorTaskTest extends AbstractParallelIndexSuperv
getObjectMapper()
),
ioConfig,
new ParallelIndexTuningConfig(
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
2,
null,
null,
null,
null,
null,
null,
null
)
tuningConfig
);

// set up test tools
@ -315,9 +369,8 @@ public class ParallelIndexSupervisorTaskTest extends AbstractParallelIndexSuperv
}

@Override
public TaskStatus run(TaskToolbox toolbox) throws Exception
ParallelIndexTaskRunner createRunner(TaskToolbox toolbox)
{
setToolbox(toolbox);
setRunner(
new TestRunner(
toolbox,
@ -325,10 +378,7 @@ public class ParallelIndexSupervisorTaskTest extends AbstractParallelIndexSuperv
indexingServiceClient
)
);
return TaskStatus.fromCode(
getId(),
getRunner().run()
);
return getRunner();
}
}

@ -215,3 +215,8 @@ This will tell the test framework that the test class needs to be constructed us
2) FromFileTestQueryHelper - reads queries with expected results from file and executes them and verifies the results using ResultVerifier

Refer to ITIndexerTest as an example of how to use dependency injection

### Register new tests for Travis CI

Once you add new integration tests, don't forget to add them to `{DRUID_ROOT}/ci/travis_script_integration.sh`
or `{DRUID_ROOT}/ci/travis_script_integration_part2.sh` for Travis CI to run them.
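
(A minimal sketch of such an injected test, assuming the @Guice/DruidTestModuleFactory wiring that Druid's existing integration tests conventionally use; the class name and datasource name below are placeholders, and the injected types and calls mirror the ones visible in this commit.)

import com.google.inject.Inject;
import org.apache.druid.testing.IntegrationTestingConfig;
import org.apache.druid.testing.clients.CoordinatorResourceTestClient;
import org.apache.druid.testing.guice.DruidTestModuleFactory;
import org.testng.Assert;
import org.testng.annotations.Guice;
import org.testng.annotations.Test;

@Guice(moduleFactory = DruidTestModuleFactory.class)
public class ITExampleTest
{
  @Inject
  private IntegrationTestingConfig config;            // cluster configuration, injected by the framework

  @Inject
  private CoordinatorResourceTestClient coordinator;  // same client the tests in this commit use

  @Test
  public void testSegmentsLoaded()
  {
    // a placeholder datasource; real tests derive the name from the spec they ingest
    final String dataSource = "wikipedia_index_test" + config.getExtraDatasourceNameSuffix();
    Assert.assertTrue(coordinator.areSegmentsLoaded(dataSource));
  }
}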
@ -153,6 +153,11 @@ public class OverlordResourceTestClient
return getTasks("pendingTasks");
}

public List<TaskResponseObject> getCompleteTasksForDataSource(final String dataSource)
{
return getTasks(StringUtils.format("tasks?state=complete&datasource=%s", StringUtils.urlEncode(dataSource)));
}

private List<TaskResponseObject> getTasks(String identifier)
{
try {
@ -233,7 +238,14 @@ public class OverlordResourceTestClient
{
try {
StatusResponseHolder response = httpClient.go(
new Request(HttpMethod.POST, new URL(StringUtils.format("%ssupervisor/%s/shutdown", getIndexerURL(), StringUtils.urlEncode(id)))),
new Request(
HttpMethod.POST,
new URL(StringUtils.format(
"%ssupervisor/%s/shutdown",
getIndexerURL(),
StringUtils.urlEncode(id)
))
),
responseHandler
).get();
if (!response.getStatus().equals(HttpResponseStatus.OK)) {

@ -28,6 +28,7 @@ public class TaskResponseObject
{

private final String id;
private final String type;
private final DateTime createdTime;
private final DateTime queueInsertionTime;
private final TaskState status;
@ -35,12 +36,14 @@ public class TaskResponseObject
@JsonCreator
private TaskResponseObject(
@JsonProperty("id") String id,
@JsonProperty("type") String type,
@JsonProperty("createdTime") DateTime createdTime,
@JsonProperty("queueInsertionTime") DateTime queueInsertionTime,
@JsonProperty("status") TaskState status
)
{
this.id = id;
this.type = type;
this.createdTime = createdTime;
this.queueInsertionTime = queueInsertionTime;
this.status = status;
@ -52,6 +55,12 @@ public class TaskResponseObject
return id;
}

@SuppressWarnings("unused") // Used by Jackson serialization?
public String getType()
{
return type;
}

@SuppressWarnings("unused") // Used by Jackson serialization?
public DateTime getCreatedTime()
{
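
(A minimal sketch of the round trip the private @JsonCreator constructor above enables, assuming Druid's DefaultObjectMapper, whose joda-time support parses the DateTime fields; the sample values are illustrative, and the snippet belongs in a method that declares throws IOException.)

final ObjectMapper jsonMapper = new DefaultObjectMapper();
final String json = "{\"id\":\"index_sub_wikipedia_abc\","
                  + "\"type\":\"index_sub\","
                  + "\"createdTime\":\"2019-01-01T00:00:00.000Z\","
                  + "\"queueInsertionTime\":\"2019-01-01T00:00:01.000Z\","
                  + "\"status\":\"SUCCESS\"}";
// Jackson matches the JSON keys to the @JsonProperty names and invokes the
// private constructor reflectively, so callers never construct this type directly.
final TaskResponseObject task = jsonMapper.readValue(json, TaskResponseObject.class);
// the same getType() check the integration tests use to count "index_sub" sub tasks
final boolean isSubTask = "index_sub".equals(task.getType());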
@ -144,9 +144,16 @@ public class AbstractITBatchIndexTest extends AbstractIndexerTest
String dataSource,
String indexTaskFilePath,
String queryFilePath
)
) throws IOException
{
submitTaskAndWait(indexTaskFilePath, dataSource, false);
final String fullDatasourceName = dataSource + config.getExtraDatasourceNameSuffix();
final String taskSpec = StringUtils.replace(
getTaskAsString(indexTaskFilePath),
"%%DATASOURCE%%",
fullDatasourceName
);

submitTaskAndWait(taskSpec, fullDatasourceName, false);
try {
sqlQueryHelper.testQueriesFromFile(queryFilePath, 2);
}
@ -160,10 +167,25 @@ public class AbstractITBatchIndexTest extends AbstractIndexerTest
{
final Set<String> oldVersions = waitForNewVersion ? coordinator.getSegmentVersions(dataSourceName) : null;

long startSubTaskCount = -1;
final boolean assertRunsSubTasks = taskSpec.contains("index_parallel");
if (assertRunsSubTasks) {
startSubTaskCount = countCompleteSubTasks(dataSourceName);
}

final String taskID = indexer.submitTask(taskSpec);
LOG.info("TaskID for loading index task %s", taskID);
indexer.waitUntilTaskCompletes(taskID);

if (assertRunsSubTasks) {
final long newSubTasks = countCompleteSubTasks(dataSourceName) - startSubTaskCount;
Assert.assertTrue(
StringUtils.format(
"The supervisor task[%s] didn't create any sub tasks. Was it executed in the parallel mode?",
taskID
), newSubTasks > 0);
}

// ITParallelIndexTest does a second round of ingestion to replace segments in an existing
// data source. For that second round we need to make sure the coordinator actually learned
// about the new segments before waiting for it to report that all segments are loaded; otherwise
@ -179,4 +201,12 @@ public class AbstractITBatchIndexTest extends AbstractIndexerTest
() -> coordinator.areSegmentsLoaded(dataSourceName), "Segment Load"
);
}

private long countCompleteSubTasks(final String dataSource)
{
return indexer.getCompleteTasksForDataSource(dataSource)
.stream()
.filter(t -> t.getType().equals("index_sub"))
.count();
}
}

@ -30,7 +30,7 @@ import java.io.Closeable;
public class ITSystemTableBatchIndexTaskTest extends AbstractITBatchIndexTest
{

private static final Logger LOG = new Logger(ITCompactionTaskTest.class);
private static final Logger LOG = new Logger(ITSystemTableBatchIndexTaskTest.class);
private static String INDEX_TASK = "/indexer/wikipedia_index_task.json";
private static String SYSTEM_QUERIES_RESOURCE = "/indexer/sys_segment_batch_index_queries.json";
private static String INDEX_DATASOURCE = "wikipedia_index_test";
@ -40,7 +40,7 @@ public class ITSystemTableBatchIndexTaskTest extends AbstractITBatchIndexTest
{
LOG.info("Starting batch index sys table queries");
try (
final Closeable indexCloseable = unloader(INDEX_DATASOURCE)
final Closeable indexCloseable = unloader(INDEX_DATASOURCE + config.getExtraDatasourceNameSuffix());
) {
doIndexTestSqlTest(
INDEX_DATASOURCE,

@ -19,7 +19,6 @@

package org.apache.druid.tests.query;

import com.google.common.base.Throwables;
import com.google.inject.Inject;
import org.apache.druid.testing.IntegrationTestingConfig;
import org.apache.druid.testing.clients.CoordinatorResourceTestClient;
@ -65,7 +64,7 @@ public class ITSystemTableQueryTest
this.queryHelper.testQueriesFromFile(SYSTEM_QUERIES_RESOURCE, 2);
}
catch (Exception e) {
throw Throwables.propagate(e);
throw new RuntimeException(e);
}
}
}

@ -24,7 +24,6 @@ import com.google.common.base.Throwables;
import com.google.inject.Inject;
import org.apache.druid.guice.annotations.Client;
import org.apache.druid.guice.http.DruidHttpClientConfig;
import org.apache.druid.guice.http.LifecycleUtils;
import org.apache.druid.https.SSLClientConfig;
import org.apache.druid.java.util.common.ISE;
import org.apache.druid.java.util.common.StringUtils;
@ -391,7 +390,7 @@ public class ITTLSTest

HttpClient client = HttpClientInit.createClient(
builder.build(),
LifecycleUtils.asMmxLifecycle(lifecycle)
lifecycle
);

HttpClient adminClient = new CredentialedHttpClient(
@ -418,7 +417,7 @@ public class ITTLSTest

HttpClient client = HttpClientInit.createClient(
builder.build(),
LifecycleUtils.asMmxLifecycle(lifecycle)
lifecycle
);

HttpClient adminClient = new CredentialedHttpClient(

@ -1,7 +1,7 @@
[
{
"query": {
"query": "SELECT count(*) FROM sys.segments WHERE datasource='wikipedia_index_test'"
"query": "SELECT count(*) FROM sys.segments WHERE datasource LIKE 'wikipedia_index_test%'"
},
"expectedResults": [
{
@ -21,7 +21,7 @@
},
{
"query": {
"query": "SELECT status FROM sys.tasks"
"query": "SELECT status AS status FROM sys.tasks WHERE datasource LIKE 'wikipedia_index_test%' GROUP BY 1"
},
"expectedResults": [
{

@ -61,6 +61,10 @@
"baseDir": "/resources/data/batch_index",
"filter": "wikipedia_index_data*"
}
},
"tuningConfig": {
"type": "index_parallel",
"maxNumSubTasks": 10
}
}
}

@ -60,6 +60,10 @@
"baseDir": "/resources/data/batch_index",
"filter": "wikipedia_index_data2*"
}
},
"tuningConfig": {
"type": "index_parallel",
"maxNumSubTasks": 10
}
}
}

@ -1,17 +1,13 @@
[
{
"query": {
"query": "SELECT datasource, count(*) FROM sys.segments GROUP BY 1"
"query": "SELECT datasource, count(*) FROM sys.segments WHERE datasource='wikipedia_editstream' OR datasource='twitterstream' GROUP BY 1 "
},
"expectedResults": [
{
"datasource": "wikipedia_editstream",
"EXPR$1": 1
},
{
"datasource": "wikipedia",
"EXPR$1": 1
},
{
"datasource": "twitterstream",
"EXPR$1": 3

@ -0,0 +1,22 @@
MIT License

Copyright (c) 2014-present Sebastian McKenzie and other contributors

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,87 @@
Eclipse Public License - v 1.0

THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.

1. DEFINITIONS

"Contribution" means:

a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and

b) in the case of each subsequent Contributor:

i) changes to the Program, and

ii) additions to the Program;

where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.

"Contributor" means any person or entity that distributes the Program.

"Licensed Patents" mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.

"Program" means the Contributions distributed in accordance with this Agreement.

"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.

2. GRANT OF RIGHTS

a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.

b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.

c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.

d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.

3. REQUIREMENTS

A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:

a) it complies with the terms and conditions of this Agreement; and

b) its license agreement:

i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;

ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;

iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and

iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.

When the Program is made available in source code form:

a) it must be made available under this Agreement; and

b) a copy of this Agreement must be included with each copy of the Program.

Contributors may not remove or alter any copyright notices contained within the Program.

Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.

4. COMMERCIAL DISTRIBUTION

Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.

For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.

5. NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.

6. DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

7. GENERAL

If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.

If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.

All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.

Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.

This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.

@ -0,0 +1,26 @@
[The "BSD licence"]
Copyright (c) 2003-2008 Terence Parr
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,9 @@
[The BSD License]
Copyright (c) 2012 Terence Parr and Sam Harwell
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,26 @@
[The "BSD 3-clause license"]
Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (C) 2015 Jordan Harband

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,21 @@
Copyright 2009–2014 Contributors. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

@ -0,0 +1,27 @@
ASM: a very small and fast Java bytecode manipulation framework
Copyright (c) 2000-2011 INRIA, France Telecom
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holders nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,19 @@
Copyright (c) 2014-present Matt Zabriskie

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2011-2014 Twitter, Inc

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,23 @@
Copyright 2013 Thorsten Lorenz.
All rights reserved.

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,9 @@
The MIT License (MIT)

Copyright (c) 2015 Jason Quense

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1 @@
the Checker Framework developers

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2017 Jed Watson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -0,0 +1,60 @@
Attribution-NonCommercial 2.5

CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM ITS USE.

License

THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.

BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.

1. Definitions

"Collective Work" means a work, such as a periodical issue, anthology or encyclopedia, in which the Work in its entirety in unmodified form, along with a number of other contributions, constituting separate and independent works in themselves, are assembled into a collective whole. A work that constitutes a Collective Work will not be considered a Derivative Work (as defined below) for the purposes of this License.
"Derivative Work" means a work based upon the Work or upon the Work and other pre-existing works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which the Work may be recast, transformed, or adapted, except that a work that constitutes a Collective Work will not be considered a Derivative Work for the purpose of this License. For the avoidance of doubt, where the Work is a musical composition or sound recording, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered a Derivative Work for the purpose of this License.
"Licensor" means the individual or entity that offers the Work under the terms of this License.
"Original Author" means the individual or entity who created the Work.
"Work" means the copyrightable work of authorship offered under the terms of this License.
"You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.

2. Fair Use Rights. Nothing in this license is intended to reduce, limit, or restrict any rights arising from fair use, first sale or other limitations on the exclusive rights of the copyright owner under copyright law or other applicable laws.

3. License Grant. Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:

to reproduce the Work, to incorporate the Work into one or more Collective Works, and to reproduce the Work as incorporated in the Collective Works;
to create and reproduce Derivative Works;
to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission the Work including as incorporated in Collective Works;
to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission Derivative Works;

The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. All rights not expressly granted by Licensor are hereby reserved, including but not limited to the rights set forth in Sections 4(d) and 4(e).

4. Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:

You may distribute, publicly display, publicly perform, or publicly digitally perform the Work only under the terms of this License, and You must include a copy of, or the Uniform Resource Identifier for, this License with every copy or phonorecord of the Work You distribute, publicly display, publicly perform, or publicly digitally perform. You may not offer or impose any terms on the Work that alter or restrict the terms of this License or the recipients' exercise of the rights granted hereunder. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties. You may not distribute, publicly display, publicly perform, or publicly digitally perform the Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement. The above applies to the Work as incorporated in a Collective Work, but this does not require the Collective Work apart from the Work itself to be made subject to the terms of this License. If You create a Collective Work, upon notice from any Licensor You must, to the extent practicable, remove from the Collective Work any credit as required by clause 4(c), as requested. If You create a Derivative Work, upon notice from any Licensor You must, to the extent practicable, remove from the Derivative Work any credit as required by clause 4(c), as requested.
You may not exercise any of the rights granted to You in Section 3 above in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation. The exchange of the Work for other copyrighted works by means of digital file-sharing or otherwise shall not be considered to be intended for or directed toward commercial advantage or private monetary compensation, provided there is no payment of any monetary compensation in connection with the exchange of copyrighted works.
If you distribute, publicly display, publicly perform, or publicly digitally perform the Work or any Derivative Works or Collective Works, You must keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of Original Author (or pseudonym, if applicable) if supplied, and/or (ii) if the Original Author and/or Licensor designate another party or parties (e.g. a sponsor institute, publishing entity, journal) for attribution in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; the title of the Work if supplied; to the extent reasonably practicable, the Uniform Resource Identifier, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and in the case of a Derivative Work, a credit identifying the use of the Work in the Derivative Work (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). Such credit may be implemented in any reasonable manner; provided, however, that in the case of a Derivative Work or Collective Work, at a minimum such credit will appear where any other comparable authorship credit appears and in a manner at least as prominent as such other comparable authorship credit.
For the avoidance of doubt, where the Work is a musical composition:

Performance Royalties Under Blanket Licenses. Licensor reserves the exclusive right to collect, whether individually or via a performance rights society (e.g. ASCAP, BMI, SESAC), royalties for the public performance or public digital performance (e.g. webcast) of the Work if that performance is primarily intended for or directed toward commercial advantage or private monetary compensation.
Mechanical Rights and Statutory Royalties. Licensor reserves the exclusive right to collect, whether individually or via a music rights agency or designated agent (e.g. Harry Fox Agency), royalties for any phonorecord You create from the Work ("cover version") and distribute, subject to the compulsory license created by 17 USC Section 115 of the US Copyright Act (or the equivalent in other jurisdictions), if Your distribution of such cover version is primarily intended for or directed toward commercial advantage or private monetary compensation.
Webcasting Rights and Statutory Royalties. For the avoidance of doubt, where the Work is a sound recording, Licensor reserves the exclusive right to collect, whether individually or via a performance-rights society (e.g. SoundExchange), royalties for the public digital performance (e.g. webcast) of the Work, subject to the compulsory license created by 17 USC Section 114 of the US Copyright Act (or the equivalent in other jurisdictions), if Your public digital performance is primarily intended for or directed toward commercial advantage or private monetary compensation.

5. Representations, Warranties and Disclaimer

UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.

6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

7. Termination

This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Derivative Works or Collective Works from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.

8. Miscellaneous

Each time You distribute or publicly digitally perform the Work or a Collective Work, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.
Each time You distribute or publicly digitally perform a Derivative Work, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License.
If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.
This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.
Creative Commons is not a party to this License, and makes no warranty whatsoever in connection with the Work. Creative Commons will not be liable to You or any party on any legal theory for any damages whatsoever, including without limitation any general, special, incidental or consequential damages arising in connection to this license. Notwithstanding the foregoing two (2) sentences, if Creative Commons has expressly identified itself as the Licensor hereunder, it shall have all rights and obligations of Licensor.

Except for the limited purpose of indicating to the public that the Work is licensed under the CCPL, neither party will use the trademark "Creative Commons" or any related trademark or logo of Creative Commons without the prior written consent of Creative Commons. Any permitted use will be in compliance with Creative Commons' then-current trademark usage guidelines, as may be published on its website or otherwise made available upon request from time to time.

Creative Commons may be contacted at https://creativecommons.org/.

@ -0,0 +1,20 @@
Copyright JS Foundation and other contributors

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,27 @@
Copyright 2010-2018 Mike Bostock
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the author nor the names of contributors may be used to
  endorse or promote products derived from this software without specific prior
  written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (C) 2015 Jordan Harband

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2015 Jason Quense

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -0,0 +1,19 @@
Copyright (C) 2013-2015 by Andrea Giammarchi - @WebReflection

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2014 Metamarkets

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

Some files were not shown because too many files have changed in this diff.