* [#1332] Fix - select failing if millis used for idx.
* Formatting correction.
* Address comment: throw original exception.
* Using constant values in tests
- Try converting to Integer and then multiplying by 1000L to get millis.
- If not successful, try converting to Long, or rethrow the original
exception.
* DateTime#of has to support "2011-01-01T00:00:00"
- in addition to seconds and milliseconds, this method now also supports
a date string.
* Handle only millisecond timestamps and ISO8601 strings
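A minimal sketch of the final behavior described above, accepting millisecond timestamps or ISO8601 strings and rethrowing the original exception otherwise. The class and method names are illustrative, not the actual Druid code; it assumes Joda-Time, which Druid uses for dates:

```java
import org.joda.time.DateTime;

// Illustrative sketch only: parse millisecond timestamps or ISO8601 strings.
public class DateTimeOfSketch
{
  public static DateTime of(String input)
  {
    try {
      return new DateTime(Long.parseLong(input));  // e.g. "1293840000000"
    }
    catch (NumberFormatException originalException) {
      try {
        return DateTime.parse(input);              // e.g. "2011-01-01T00:00:00"
      }
      catch (IllegalArgumentException ignored) {
        throw originalException;                   // rethrow the original exception
      }
    }
  }
}
```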
* Added checkstyle for "Methods starting with Capital Letters" and changed the method names violating this.
* Un-abbreviate the method names in the calcite tests
* Fixed checkstyle errors
* Changed asserts position in the code
IndexTask had special-cased code to properly send a TaskToolbox to an
IngestSegmentFirehoseFactory that's nested inside a CombiningFirehoseFactory,
but ParallelIndexSubTask didn't.
This change refactors IngestSegmentFirehoseFactory so that it doesn't need a
TaskToolbox; it instead gets a CoordinatorClient and a SegmentLoaderFactory
directly injected into it.
This also refactors SegmentLoaderFactory so it doesn't depend on
an injectable SegmentLoaderConfig, since its only method always
replaces the preconfigured SegmentLoaderConfig anyway.
This makes it possible to use SegmentLoaderFactory without setting
druid.segmentCaches.locations to some dummy value.
Another goal of this PR is to make it possible for IngestSegmentFirehoseFactory
to list data segments outside of connect(): specifically, to make it a
FiniteFirehoseFactory which can query the coordinator in order to calculate its
splits. See #7048.
This also adds missing datasource name URL-encoding to an API used by
CoordinatorBasedSegmentHandoffNotifier.
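A hypothetical sketch of what the injected constructor might look like after this refactor. The Jackson annotations are the standard injection mechanism; the stand-in interfaces and parameter list are assumptions, not the real Druid signatures:

```java
import com.fasterxml.jackson.annotation.JacksonInject;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

// Stand-in interfaces for the injected Druid types (illustrative only).
interface CoordinatorClient {}
interface SegmentLoaderFactory {}

class IngestSegmentFirehoseFactorySketch
{
  private final String dataSource;
  private final CoordinatorClient coordinatorClient;
  private final SegmentLoaderFactory segmentLoaderFactory;

  @JsonCreator
  public IngestSegmentFirehoseFactorySketch(
      @JsonProperty("dataSource") String dataSource,
      @JacksonInject CoordinatorClient coordinatorClient,       // injected directly,
      @JacksonInject SegmentLoaderFactory segmentLoaderFactory  // no TaskToolbox needed
  )
  {
    this.dataSource = dataSource;
    this.coordinatorClient = coordinatorClient;
    this.segmentLoaderFactory = segmentLoaderFactory;
  }
}
```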
* Remove DataSegmentFinder, InsertSegmentToDb, and descriptor.json file
* delete the descriptor file when killing segments
* fix test
* Add doc for HA
* improve warning
* index_parallel: support !appendToExisting with no explicit intervals
This enables ParallelIndexSupervisorTask to dynamically request locks at runtime
if it is run without explicit intervals in the granularity spec and with
appendToExisting set to false. Previously, it behaved as if appendToExisting
was set to true, which was undocumented and inconsistent with IndexTask and
Hadoop indexing.
Also, when ParallelIndexSupervisorTask allocates segments in the explicit
interval case, fail if its locks on the interval have been revoked.
Also make a few other additions/clarifications to native ingestion docs.
Fixes #6989.
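A standalone sketch of the locking decision described above, with illustrative names only (not actual Druid code):

```java
import java.util.List;

// Illustrative decision logic: explicit intervals are locked up front,
// otherwise locks are requested dynamically at runtime.
class ParallelIndexLockingSketch
{
  enum LockStrategy
  {
    UPFRONT_TIME_CHUNK_LOCKS,  // explicit intervals: lock before running, and
                               // fail if a lock on the interval is later revoked
    DYNAMIC_RUNTIME_LOCKS      // no explicit intervals: request locks as segments
                               // are allocated, even when appendToExisting is false
  }

  static LockStrategy choose(List<String> explicitIntervals)
  {
    return explicitIntervals.isEmpty()
           ? LockStrategy.DYNAMIC_RUNTIME_LOCKS
           : LockStrategy.UPFRONT_TIME_CHUNK_LOCKS;
  }
}
```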
* Review feedback.
PR description on GitHub updated to match.
* Make native batch ingestion partitions start at 0
* Fix to previous commit
* Unit test. Verified to fail without the other commits on this branch.
* Another round of review
* Slightly scarier warning
* bugfix: when building a materialized view with taskCount > 1, a ConcurrentModificationException may occur
* remove entries after iteration instead of using a ConcurrentMap, and add a unit test
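A generic illustration of the fix pattern (not the materialized-view code itself): removing from a HashMap while iterating it throws ConcurrentModificationException, so the keys are collected first and removed after the loop.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class RemoveAfterIteration
{
  static void pruneFinishedTasks(Map<String, Boolean> taskFinished)
  {
    final List<String> finished = new ArrayList<>();
    for (Map.Entry<String, Boolean> entry : taskFinished.entrySet()) {
      if (entry.getValue()) {
        finished.add(entry.getKey());  // defer removal until iteration ends
      }
    }
    for (String key : finished) {
      taskFinished.remove(key);        // safe: no iterator is active here
    }
  }
}
```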
* small change
* modify unit test for coverage
* remove unused method
* document middle manager api
* re-arrange
* correction
* document more missing overlord api calls, minor re-arrange of some code i was referencing
* fix it
* this will fix it
* fixup
* link to other docs
* Support kafka transactional topics
* update kafka to version 2.0.0
* Remove the skipOffsetGaps option since it's not used anymore
* Adjust kafka consumer to use transactional semantics
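For reference, the consumer-side half of transactional semantics in the standard Kafka client is the read_committed isolation level. A minimal sketch with placeholder connection settings (this is the stock Kafka API, not Druid's wiring):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

class TransactionalConsumerExample
{
  static KafkaConsumer<byte[], byte[]> create()
  {
    final Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    // read_committed hides records from aborted transactions, which is why a
    // separate skipOffsetGaps option is no longer needed.
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    return new KafkaConsumer<>(props);
  }
}
```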
* Update tests
* Remove unused import from test
* Fix compilation
* Invoke transaction api to fix a unit test
* temporary modification of travis.yml for debugging
* another attempt to get travis tasklogs
* update kafka to 2.0.1 at all places
* Remove druid-kafka-eight dependency from integration-tests, remove the kafka firehose test and deprecate kafka-eight classes
* Add deprecated in docs for kafka-eight and kafka-simple extensions
* Remove skipOffsetGaps and code changes for transaction support
* Fix indentation
* remove skipOffsetGaps from kinesis
* Add transaction api to KafkaRecordSupplierTest
* Fix indent
* Fix test
* update kafka version to 2.1.0
* Fix the bug with num_rows in sys.segments
* Fix segmentMetadataInfo update in DruidSchema
* Add numRows to SegmentMetadataHolder builder's constructor, so it's not overwritten
* Rename setSegmentSignature to setSegmentMetadataHolder and fix it so the nested map is appended to instead of recreated
* Replace Map<String, Set<String>> segmentServerMap with Set<String> for num_replica
* Remove unnecessary code and update test
* Add unit test for num_rows
* PR comments
* change access modifier to default package level
* minor changes to comments
* PR comments
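The corrected num_rows column can be spot-checked over Druid SQL. A small example using the standard Avatica JDBC driver; the broker URL is a placeholder and assumes the default SQL endpoint:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SysSegmentsExample
{
  public static void main(String[] args) throws Exception
  {
    // Placeholder broker address; adjust host/port for your deployment.
    final String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
    try (Connection connection = DriverManager.getConnection(url);
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery(
             "SELECT segment_id, num_rows FROM sys.segments")) {
      while (resultSet.next()) {
        System.out.println(resultSet.getString(1) + " rows=" + resultSet.getLong(2));
      }
    }
  }
}
```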
* Fix: hadoop-common dependency for druid-hdfs and druid-kerberos extensions
* Refactoring: move the Hadoop config call into the inner static class to
avoid classpath conflicts for the stopGracefully kill
* Fix: hadoop-common test dependency
* Fix: avoid issuing the kill command once the job has actually completed
* Add docker container for druid
This container is an 'omnibus' (since there is such a high
overlap between the various services). It includes all contrib
extensions as well as the mysql connector.
It is intended to be run as `docker run NAME SERVICE`
(e.g. `docker run druid:latest broker`)
* Add Apache license header
* Resolve issues from Pull Request review
* Add comments at top of script per PR comments
* Revert BUILDKIT. Not available everywhere.
* Don't set hostname, allow default (IP)
Some environments (e.g. Kubernetes Deployments) don't resolve
hostname to IP.
* Switch to amd64 glibc-based busybox from 32-bit uclibc
* Override service-specific configuration
* Replace MAINTAINER w/ LABEL
* Add mysql connector to application classpath
This works around issue #3770 (https://github.com/apache/incubator-druid/issues/3770).
* Add docker-compose and sample environment
Signed-off-by: Don Bowman <don@agilicus.com>
* Moved Scan Builder to Druids class and started on Scan Benchmark setup
* Need to form queries
* It runs.
* Remove todos
* Change number of benchmark iterations
* Changed benchmark params
* More param changes
* Made Jon's changes and removed TODOs
* Broke some long lines into two lines
* Decrease segment size for less memory usage
* Committing a param change to kick teamcity
* Improper equals override is fixed to prevent NullPointerException
* Fixed curly brace indentation.
* Test method is added for equals method of TaskLockPosse class.
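A generic illustration of the null-safe equals override this fix implies; the fields are illustrative, not the real TaskLockPosse members:

```java
import java.util.Objects;

class TaskLockPosseSketch
{
  private final String taskId;
  private final String groupId;

  TaskLockPosseSketch(String taskId, String groupId)
  {
    this.taskId = taskId;
    this.groupId = groupId;
  }

  @Override
  public boolean equals(Object o)
  {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;  // guards against the NullPointerException on null/foreign types
    }
    TaskLockPosseSketch that = (TaskLockPosseSketch) o;
    return Objects.equals(taskId, that.taskId) && Objects.equals(groupId, that.groupId);
  }

  @Override
  public int hashCode()
  {
    return Objects.hash(taskId, groupId);  // keep hashCode consistent with equals
  }
}
```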
* maintenance mode for Historical
forbidden api fix, config deserialization fix
logging fix, unit tests
* addressed comments
* addressed comments
* a style fix
* addressed comments
* a unit-test fix due to recent code-refactoring
* docs & refactoring
* addressed comments
* addressed a LoadRule drop flaw
* post merge cleaning up
Native batch indexing doesn't yet support the maxParseExceptions,
maxSavedParseExceptions, and logParseExceptions tuning config options, so
ParallelIndexSupervisorTask logs a warning if these are set. But the default
value for maxParseExceptions is Integer.MAX_VALUE, which means that you'll get
the maxParseExceptions flavor of this warning even if you don't configure
maxParseExceptions.
This PR changes all three warnings to occur only if you change the settings from
the default; this mostly affects the maxParseExceptions warning.
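A sketch of the revised check; the names and the logging call are illustrative, but the default constant matches the one described above:

```java
class ParseExceptionWarningSketch
{
  static final int DEFAULT_MAX_PARSE_EXCEPTIONS = Integer.MAX_VALUE;

  static void warnOnUnsupportedSettings(int maxParseExceptions)
  {
    // Warn only when the user changed the value from the default, so the
    // default Integer.MAX_VALUE no longer triggers a spurious warning.
    if (maxParseExceptions != DEFAULT_MAX_PARSE_EXCEPTIONS) {
      System.err.println(
          "WARN: maxParseExceptions is not supported yet by parallel batch indexing");
    }
  }
}
```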
* Prohibit assigning concurrent maps into Map-typed variables and fields; fix a race condition in CoordinatorRuleManager; improve logic in DirectDruidClient and ResourcePool
* Enforce that if compute(), computeIfAbsent(), computeIfPresent() or merge() is called on a ConcurrentHashMap, it's stored in a ConcurrentHashMap-typed variable, not ConcurrentMap; add comments explaining the get()-before-computeIfAbsent() optimization; refactor Counters; fix a race condition in Initialization.java
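An illustration of the guarded computeIfAbsent() pattern referenced above, with illustrative names. On JDK 8, ConcurrentHashMap.computeIfAbsent() can lock the bin even when the mapping already exists, so a plain get() first keeps the hot path lock-free; the field is typed ConcurrentHashMap rather than Map or ConcurrentMap, per the new rule:

```java
import java.util.concurrent.ConcurrentHashMap;

class CounterRegistry
{
  private final ConcurrentHashMap<String, Long> counters = new ConcurrentHashMap<>();

  long get(String name)
  {
    Long value = counters.get(name);                     // fast, lock-free read
    if (value == null) {
      value = counters.computeIfAbsent(name, k -> 0L);   // slow path: may lock
    }
    return value;
  }
}
```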
* Remove unnecessary comment
* Checkstyle
* Fix getFromExtensions()
* Add a reference to the comment about guarded computeIfAbsent() optimization; IdentityHashMap optimization
* Fix UriCacheGeneratorTest
* Workaround issue with MaterializedViewQueryQueryToolChest
* Strengthen Appenderator's contract regarding concurrency