- Shedding locks at startup is bad; we actually want to keep them. Stop doing that.
- stopGracefully now interrupts the run thread if it had started running finishJob (see
the sketch below). This avoids waiting for handoff unnecessarily.
- fixes #1970
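
A minimal sketch of the interrupt-on-finishJob behavior described above. The field and method layout here are hypothetical, not the actual RealtimeIndexTask code:

```java
public class GracefulTask
{
  private final Object lock = new Object();
  private Thread runThread;
  private boolean finishingJob; // set once finishJob has started

  public void run() throws InterruptedException
  {
    synchronized (lock) {
      runThread = Thread.currentThread();
    }
    // ... read and index data until told to finish ...
    synchronized (lock) {
      finishingJob = true;
    }
    finishJob(); // waits for segment handoff
  }

  public void stopGracefully()
  {
    synchronized (lock) {
      if (finishingJob && runThread != null) {
        // finishJob is only waiting for handoff; interrupt rather than wait.
        runThread.interrupt();
      }
      // otherwise, signal the task to wrap up normally (not shown)
    }
  }

  private void finishJob() throws InterruptedException
  {
    Thread.sleep(Long.MAX_VALUE); // placeholder for the handoff wait
  }
}
```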
- extracted segment handoff callbacks into SegmentHandoffNotifier,
which is responsible for tracking segment handoffs and firing callbacks
when handoff is complete (interface sketched below).
- Coordinator now maintains a view of segments in the cluster; this
will affect the JVM heap requirements of the overlord for large
clusters.
- realtime index tasks and nodes now use HTTP endpoints exposed by the
coordinator to get the server view
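
A hypothetical sketch of what the SegmentHandoffNotifier contract could look like given the description above; the actual Druid interface may differ, and SegmentDescriptor here is a simplified stand-in:

```java
import java.util.concurrent.Executor;

public interface SegmentHandoffNotifier
{
  // run handOffRunnable on exec once the given segment has been handed off
  void registerSegmentHandoffCallback(SegmentDescriptor descriptor, Executor exec, Runnable handOffRunnable);

  void start();

  void close();
}

// simplified stand-in for Druid's segment identifier type
class SegmentDescriptor {}
```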
review comment
fix realtime node guice injection
review comments
make test not rely on scheduled exec
fix compilation
fix import
review comment
introduce immutableSegmentLoadInfo
fix json reading
remove unnecessary logging
This is done by killing and respawning the JVMs rather than reconnecting to existing
JVMs, for a couple of reasons. One is that it lets you restore tasks after server reboots
too, and another is that it lets you upgrade all the software on a box at once by just
restarting everything.
The main changes are:
1) Add "canRestore" and "stopGracefully" methods to Tasks that say if a task can
stop gracefully, and actually do a graceful stop. RealtimeIndexTask is the only
one that currently implements this.
2) Add "stop" method to TaskRunners that attempts to do an orderly shutdown.
ThreadPoolTaskRunner- call stopGracefully on restorable tasks, wait for exit
ForkingTaskRunner- close output stream to restorable tasks, wait for exit
RemoteTaskRunner- do nothing special, we actually don't want to shutdown
3) Add "restore" method to TaskRunners that attempts to bootstrap tasks from last run.
Only ForkingTaskRunner does anything here. It maintains a "restore.json" file with
a list of restorable tasks.
4) Have the CliPeon's ExecutorLifecycle lock the task base directory to prevent a restored
task and a zombie old task from stomping on each other.
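
A sketch of the contracts implied by changes 1) through 3); the method names follow the text, but the signatures are simplified stand-ins for the real Druid interfaces:

```java
import java.util.List;

interface Task
{
  // true if this task can stop gracefully and be restored on the next run (change 1)
  boolean canRestore();

  // perform an orderly stop, persisting enough state to resume later (change 1)
  void stopGracefully();
}

interface TaskRunner
{
  // attempt an orderly shutdown of running tasks (change 2)
  void stop();

  // bootstrap tasks recorded from the last run, e.g. from restore.json (change 3)
  List<Task> restore();
}
```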
Deserialization of Optionals does not work quite right: they come back as actual
nulls, rather than absent Optionals. So these probably only ever worked for the local
task action client.
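
To illustrate the distinction (this is not the Druid code itself): with Jackson's Guava module registered, a JSON null round-trips to Optional.absent() instead of a bare null field:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.guava.GuavaModule;
import com.google.common.base.Optional;

public class OptionalDemo
{
  public static class Payload
  {
    public Optional<String> value;
  }

  public static void main(String[] args) throws Exception
  {
    // without GuavaModule, Optional fields do not deserialize properly
    ObjectMapper mapper = new ObjectMapper().registerModule(new GuavaModule());
    Payload p = mapper.readValue("{\"value\":null}", Payload.class);
    System.out.println(p.value.isPresent()); // false: absent, not a null field
  }
}
```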
This is a feature meant to allow realtime tasks to work without being told upfront
what shardSpec they should use (so we can potentially publish a variable number
of segments per interval).
The idea is that there is a "pendingSegments" table in the metadata store that
tracks allocated segments. Each one has a segment id (the same segment id we know
and love) and is also part of a sequence.
The sequences are an idea from @cheddar that offers a way of doing replication.
If there are N tasks reading exactly the same data with exactly the same logic
(think Kafka tasks reading a fixed range of offsets) then you can place them
in the same sequence, and they will generate the same sequence of segments.
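
A toy in-memory model of the idea, with illustrative names; the real pendingSegments bookkeeping lives in the metadata store. The point is that allocation is idempotent per (sequenceName, previousSegmentId), so N replicas asking the same question get the same segment id:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PendingSegments
{
  private final Map<String, String> allocations = new ConcurrentHashMap<>();
  private int counter = 0;

  // Idempotent: replica tasks in the same sequence, asking with the same
  // previousSegmentId, receive the same segment id.
  public synchronized String allocate(String sequenceName, String previousSegmentId)
  {
    String key = sequenceName + "/" + previousSegmentId;
    return allocations.computeIfAbsent(key, k -> "segment_" + counter++);
  }
}
```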
In #933 the ForkingTaskRunner's logging was changed from unbuffered to
buffered. This means that the last few KB of the logs are generally
not visible while a task is running, which makes debugging running
tasks difficult.
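
For contrast, a minimal sketch of unbuffered log forwarding, flushing after every chunk so the tail of a running task's log stays visible (illustrative, not the actual ForkingTaskRunner code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class LogPump
{
  public static void pump(InputStream processOut, OutputStream logSink) throws IOException
  {
    byte[] buf = new byte[4096];
    int n;
    while ((n = processOut.read(buf)) != -1) {
      logSink.write(buf, 0, n);
      logSink.flush(); // the buffered version omits this, hiding the tail
    }
  }
}
```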
This is accomplished by making sure that scheduleTasksCleanupForWorker is only called
from the PathChildrenCache event thread, having it cancel existing cleanup tasks when
it adds a new one, and having tasks check on finish that the thing they are removing
from the task list is actually themselves.
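
A sketch of that cancel-then-reschedule pattern under the stated assumption of a single event thread; worker ids, delays, and the map layout here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class WorkerCleanupScheduler
{
  private final ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
  private final Map<String, ScheduledFuture<?>> cleanups = new ConcurrentHashMap<>();

  // Assumed to be called only from the PathChildrenCache event thread, so no
  // two invocations for the same worker race with each other.
  public void scheduleCleanup(final String workerId, final Runnable cleanup, long delayMillis)
  {
    // adding a new cleanup cancels any existing one for the same worker
    ScheduledFuture<?> prior = cleanups.remove(workerId);
    if (prior != null) {
      prior.cancel(false);
    }
    final AtomicReference<ScheduledFuture<?>> self = new AtomicReference<>();
    ScheduledFuture<?> future = exec.schedule(() -> {
      cleanup.run();
      // on finish, only remove the entry if it is still this task
      cleanups.remove(workerId, self.get());
    }, delayMillis, TimeUnit.MILLISECONDS);
    self.set(future);
    cleanups.put(workerId, future);
  }
}
```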
1) Remove the Maven client used to download extensions at runtime.
2) Provide a way to load Druid extensions and hadoop dependencies through the file system
(sketched below).
3) Refactor pull-deps so that it can download extensions into extension directories.
4) Add documents on how to use this new extension loading mechanism.
5) Change how the Druid tarball is generated. All extensions plus hadoop-client 2.3.0
are now packaged within the Druid tarball.
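
As a rough illustration of item 2, a ClassLoader can be built over every jar in an extension directory; the real pull-deps and extension loading code is more involved than this:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public final class ExtensionLoading
{
  public static ClassLoader loaderFor(File extensionDir) throws Exception
  {
    List<URL> urls = new ArrayList<>();
    File[] jars = extensionDir.listFiles((dir, name) -> name.endsWith(".jar"));
    if (jars != null) {
      for (File jar : jars) {
        urls.add(jar.toURI().toURL());
      }
    }
    // parent is the application ClassLoader so core classes resolve normally
    return new URLClassLoader(urls.toArray(new URL[0]), ExtensionLoading.class.getClassLoader());
  }
}
```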
* Still places `druid.indexer.runner.javaOpts` on the command line, but the Peon no longer tries to have the property `druid.indexer.runner.javaOpts` set
* Fixes https://github.com/druid-io/druid/issues/1841
Fixes #1727.
revert to merging results for union queries on the broker.
revert unrelated changes
Add test for union query runner
Add test
remove unused imports
fix imports
fix renamed file
fix test
update docs.
fixes #1715
- TaskLockBox has a set of active tasks
- lock requests throw an exception if they come from a task not in the
active task set (see the sketch below)
- TaskQueue is responsible for updating the active task set on the
TaskLockBox
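
A simplified sketch of the active-task-set guard; IllegalStateException stands in for Druid's ISE, and the actual lock bookkeeping is elided:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TaskLockBox
{
  private final Set<String> activeTasks = ConcurrentHashMap.newKeySet();

  // TaskQueue keeps this in sync with the tasks it is managing
  public void syncActiveTasks(Set<String> taskIds)
  {
    activeTasks.retainAll(taskIds);
    activeTasks.addAll(taskIds);
  }

  public void tryLock(String taskId, String interval)
  {
    if (!activeTasks.contains(taskId)) {
      throw new IllegalStateException(
          String.format("Unable to grant lock to inactive task [%s]", taskId)
      );
    }
    // ... grant or queue the lock for the interval ...
  }
}
```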
review comment
remove duplicate line
use ISE instead
organise imports
so that user hadoop-related InputRowParsers are created only when needed.
This allows the overlord to accept a HadoopIndexTask with a hadoopy InputRowParser
and not fail because the hadoopy InputRowParser might need hadoop libraries.
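
One hedged sketch of the lazy-creation idea: keep the parser spec as plain JSON data on the overlord and materialize the (possibly hadoopy) InputRowParser only where its libraries exist. The names here are illustrative, not Druid's real classes:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class LazyParseSpec
{
  private final ObjectMapper mapper;
  private final Map<String, Object> parserSpec; // raw JSON, never deserialized on the overlord

  public LazyParseSpec(ObjectMapper mapper, Map<String, Object> parserSpec)
  {
    this.mapper = mapper;
    this.parserSpec = parserSpec;
  }

  // called only where the parser's libraries (e.g. hadoop) are on the classpath
  public <T> T materialize(Class<T> parserClass)
  {
    return mapper.convertValue(parserSpec, parserClass);
  }
}
```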
1) Persist using Committer instead of Runnable; see the sketch after this list.
(Although the metadata object is ignored in this patch.)
2) Remove the getSink method.
3) Plumbers are now responsible for time-based and hydrant-full-based periodic
committing. (FireChief, RealtimeIndexTask, and IndexTask used to do this)
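
The shape implied by change 1), sketched; this may differ in detail from the real io.druid.data.input.Committer:

```java
public interface Committer extends Runnable
{
  // metadata describing progress so far (ignored in this patch); run() is
  // invoked once the persist it accompanies is durable
  Object getMetadata();
}
```

A plumber's persist can then record getMetadata() alongside the persisted data, which is the hook for committing progress metadata together with segments.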
allowOlderVersions has been stuck true for a while due to a bug (introduced in
566a3a61), but I think it's actually OK this way. I think it's reasonable to
expect tasks to choose versions in some way that makes sense, so long as they
don't choose one larger than their taskLock version. This is still verified.
The interval constraint was introduced to force tasks to break up their
segment insert lists into manageable chunks. They are already doing this, and
I think it's reasonable to expect them to do so without enforcement.
Lifting these constraints paves the way for transactional insertion of segments
that have varying versions and may be for varying intervals.
Also, harmonize exception handling between the RealtimeIndexTask and the RealtimeManager.
Conditions other than null inputRows and parse errors bubble up in both.
this PR adds a JavaScriptWorkerSelectStrategy which allows defining
arbitrary logic for selecting workers to run tasks using a JavaScript
function.
This gives users full control to implement complex worker selection
strategies based on task attributes.
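
The mechanism in miniature: compile a user-supplied JavaScript function with javax.script and invoke it to pick a worker. The real strategy's function signature and inputs (worker config, worker list, task) differ; this only demonstrates the plumbing:

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class JsSelectDemo
{
  public static void main(String[] args) throws Exception
  {
    // Nashorn on Java 8; the engine lookup may return null on newer JDKs
    ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
    engine.eval(
        "function selectWorker(workers, task) {"
        + "  if (task.indexOf('index') >= 0) { return workers[0]; }"
        + "  return workers[1];"
        + "}"
    );
    Object worker = ((Invocable) engine).invokeFunction(
        "selectWorker",
        (Object) new String[]{"worker1:8080", "worker2:8080"},
        "index_kafka_task"
    );
    System.out.println(worker); // worker1:8080, chosen by the script
  }
}
```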
more tests and a complex javascript config
fix for java8: modify for Nashorn compatibility
- pass rowFlushBoundary correctly instead of using the default.
- fixes IndexTask failing with
io.druid.segment.incremental.IndexSizeExceededException when
rowFlushBoundary is set higher than
RealtimeTuningConfig.defaultMaxRowsInMemory
rename test method
* Add `invokeForeignLoader` to commonize how tasks are launched in a foreign ClassLoader (see the sketch below)
* Add `buildClassLoader` to accomplish the common tasks for hadoop jobs when building a ClassLoader
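
A sketch of the `invokeForeignLoader` idea: run a unit of work with the thread context ClassLoader swapped to a job-specific loader, restoring it afterwards. The real helper's parameters and mechanics differ:

```java
import java.util.concurrent.Callable;

public final class ForeignLoader
{
  public static <T> T invokeForeignLoader(ClassLoader loader, Callable<T> work) throws Exception
  {
    Thread current = Thread.currentThread();
    ClassLoader original = current.getContextClassLoader();
    current.setContextClassLoader(loader);
    try {
      return work.call();
    } finally {
      current.setContextClassLoader(original); // always restore the caller's loader
    }
  }
}
```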
- compression for single-value dimensions using CompressedVSizeIntsIndexedSupplier
- makes dimension compression configurable via IndexSpec
- IndexSpec also enables configuring bitmap and metric compression
* Requires https://github.com/druid-io/druid-api/pull/37
* Requires https://github.com/metamx/java-util/pull/22
* Moves the puller logic to use a more standard workflow going through java-util helpers instead of re-writing the handlers for each impl
* The general workflow goes like this (see the sketch below): 1) LoadSpec makes sure the correct Puller is called with the correct parameters. 2) The Puller sets up general information like how to make an InputStream, how to find a file name (for .gz files, for example), and when to retry. 3) CompressionUtils does most of the heavy lifting when it can
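
A condensed, illustrative model of that workflow (none of these are Druid's real classes): the caller plays the LoadSpec role, the `opener`/file-name/retry parameters play the Puller role, and the byte copying is the helper's job:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.Callable;
import java.util.zip.GZIPInputStream;

public final class PullerSketch
{
  public static void pull(Callable<InputStream> opener, String fileName, File dest, int maxTries)
      throws Exception
  {
    for (int attempt = 1; ; attempt++) {
      try (InputStream raw = opener.call();
           // decide by file name whether to decompress, as for .gz files
           InputStream in = fileName.endsWith(".gz") ? new GZIPInputStream(raw) : raw;
           OutputStream out = new FileOutputStream(dest)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
          out.write(buf, 0, n);
        }
        return; // success
      }
      catch (IOException e) {
        if (attempt >= maxTries) {
          throw e; // out of retries
        }
        // otherwise loop and retry with a fresh InputStream
      }
    }
  }
}
```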
review comments
more refactoring and cleaning of redundant code
add UT + docs + more refactoring
fixes + review comments
more cleanup
endpoints to fetch history
review comments
remove unnecessary changes
review comments rename header name
review comments + add test for MetadataRulesManager
review comments docs
- Moves DimExtractionFn under a more generic ExtractionFn interface to
support extracting dimension values other than strings
- pushes down extractionFn to the storage adapter from query engine
- 'dimExtractionFn' parameter has been deprecated in favor of 'extractionFn'
- adds a TimeFormatExtractionFn, allowing the '__time' dimension to be projected
(sketched below)
- JavascriptDimExtractionFn renamed to JavascriptExtractionFn, adding
support for any dimension value types that map directly to JavaScript
- update documentation for time column extraction and related changes
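
What a TimeFormatExtractionFn-style projection of `__time` amounts to, sketched here with plain java.time rather than Druid's actual extraction classes:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimeFormatExtractionDemo
{
  // render an epoch-millis timestamp with the given pattern and zone
  public static String apply(long timestampMillis, String pattern, ZoneId zone)
  {
    return DateTimeFormatter.ofPattern(pattern)
        .withZone(zone)
        .format(Instant.ofEpochMilli(timestampMillis));
  }

  public static void main(String[] args)
  {
    // project __time to a day-of-week dimension value
    System.out.println(apply(1442016000000L, "EEEE", ZoneId.of("UTC")));
  }
}
```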
- Switch to using Druid parent POM
- Add required fields for Sonatype
- Common plugin versions and settings have been moved to the parent pom
- Cleanup artifacts and POMs for consistent formatting
- Remove org.hyperic.sigar dependency and update docs to reflect necessary jars to add at runtime when sigar is needed