This commit fixes issues with delayed supervisor termination during certain transient states.
Tasks can be created during supervisor termination and left behind, since the cleanup may
not account for these newly added tasks.
#12178 added a lock for the entire process of task creation to prevent such dangling tasks.
But it also introduced a deadlock scenario as follows:
- An invocation of `runInternal` is in progress.
- A `stop` request comes in, acquires the `stateChangeLock`, and submits a `ShutdownNotice`
- `runInternal` keeps waiting to acquire the `stateChangeLock`
- `ShutdownNotice` remains stuck in the notice queue because `runInternal` is still running
- After some timeout, the supervisor goes through a forced termination
Fix:
* `SeekableStreamSupervisor.runInternal` - do not try to acquire lock if supervisor is already stopping
* `SupervisorStateManager.maybeSetState` - do not allow transitions from STOPPING state
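A minimal sketch of the pattern behind the first fix, with hypothetical names (Druid's actual `SeekableStreamSupervisor` code is more involved):

```java
import java.util.concurrent.locks.ReentrantLock;

class SupervisorSketch
{
  private final ReentrantLock stateChangeLock = new ReentrantLock();
  private volatile boolean stopping = false;

  void runInternal()
  {
    if (stopping) {
      // A stop is already underway: skip contending for stateChangeLock. Otherwise this
      // thread blocks here, the queued ShutdownNotice never gets processed, and the
      // supervisor eventually goes through forced termination.
      return;
    }
    stateChangeLock.lock();
    try {
      // ... create tasks, update state ...
    }
    finally {
      stateChangeLock.unlock();
    }
  }

  void stop()
  {
    stateChangeLock.lock();
    try {
      stopping = true;
      // ... submit a ShutdownNotice to the notice queue; it is handled by the same
      // thread that runs runInternal ...
    }
    finally {
      stateChangeLock.unlock();
    }
  }
}
```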
* Move web-console dependency from druid-server to distribution
* Add a test to check if the web-console is correctly integrated
* exclude web-console from 'other integration tests'
* Revert "exclude web-console from 'other integration tests'"
This reverts commit 8d72225544.
* Revert "Add a test to check if the web-console is correctly integrated"
This reverts commit d6ac8f3087.
* Update Snappy to 1.1.8.4.
Prior to this, because snappy-java wasn't included in dependencyManagement,
we actually shipped multiple different versions for different extensions,
ranging from 1.1.7.1 to 1.1.8.4. Now, we standardize on 1.1.8.4.
Among other things, this enables the tests to pass on M1 Macs.
* Update snappy-java versions in licenses.yaml.
1. remove unnecessary generic type from CompressedBigDecimal
2. support Number input types
3. support aggregator reading supported input types directly (uningested data)
4. fix scaling bug in buffer aggregator
* SQL: Fix round-trips of floating point literals.
When writing RexLiterals into Druid expressions, we now write non-integer
numeric literals in a form that ensures they are parsed as doubles
on the other end.
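A hedged illustration of the round-trip hazard (illustrative only, not Druid's actual serialization code):

```java
double value = 1.0;
String naive = String.valueOf((long) value); // "1": re-parses as an integer literal
String faithful = Double.toString(value);    // "1.0": always contains '.' or 'E', so it re-parses as a double
```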
* Updates from code review, and some additional stuff inspired by the
investigation.
- Remove unnecessary formatting code from DruidExpression.doubleLiteral:
it handles things just fine with its default behavior.
- Fix a problem where expression literals could not represent Long.MIN_VALUE.
Now, integer literals start life off as BigIntegerExpr instead of LongExpr,
and are converted to LongExpr during flattening. This is necessary because,
in order to avoid ambiguity between unary minus and negative literals, our
grammar does not actually have true negative literals. Negative numbers must
be represented as unary minus next to a positive literal (see the sketch after this list).
- Fix a bug introduced in #12230 where shuttle.visitAll(args) delegated
to shuttle.visit(arg) instead of arg.visit(shuttle). The latter does
a recursive visitation, which is the intended behavior.
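A small sketch of why `Long.MIN_VALUE` needs the `BigIntegerExpr` treatment, using plain `BigInteger` rather than Druid's expression classes:

```java
import java.math.BigInteger;

// The grammar has no true negative literals: "-9223372036854775808" parses as
// unary minus applied to the positive literal 9223372036854775808, which does
// not fit in a long on its own.
BigInteger positive = new BigInteger("9223372036854775808");
long asLong = positive.negate().longValueExact(); // == Long.MIN_VALUE, fits only after negation
```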
* Style fixes.
* Move regexp to the right place.
* Cleaner JSON for various input sources and formats.
Add JsonInclude to various properties, to avoid population of default
values in serialized JSON.
Also fixes a bug in OrcInputFormat: it was not writing binaryAsString,
so the property would be lost on serde.
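A minimal sketch of the Jackson pattern described above (class and field names are illustrative, not Druid's exact code):

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;

class ExampleInputFormat
{
  // With NON_NULL, the property is omitted from serialized JSON when unset,
  // instead of always being populated with its default value.
  @JsonProperty
  @JsonInclude(JsonInclude.Include.NON_NULL)
  Boolean binaryAsString;
}
```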
* Additional test cases.
* Expose HTTP Response headers from SqlResource
This change makes the SqlResource expose HTTP response
headers in the same way that the QueryResource exposes them.
Fundamentally, the change is to pipe the QueryResponse
object all the way through to the Resource so that it can
populate response headers. There is also some code
cleanup around DI, as there was a superfluous FactoryFactory
class muddying things up.
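A minimal JAX-RS-style sketch of populating response headers at the resource layer (illustrative; Druid's actual SqlResource/QueryResponse plumbing is more involved):

```java
import java.util.Map;
import javax.ws.rs.core.Response;

class SqlResponseSketch
{
  // The resource receives the query's headers and copies them onto the HTTP
  // response before returning the result body.
  Response buildHttpResponse(Object resultBody, Map<String, String> queryHeaders)
  {
    Response.ResponseBuilder builder = Response.ok(resultBody);
    queryHeaders.forEach(builder::header); // e.g. X-Druid-SQL-Query-Id
    return builder.build();
  }
}
```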
* prometheus-emitter: support pushing metrics to Pushgateway regularly and continuously
* spell check fix
* Optimize variable names and related documentation
* Update docs/development/extensions-contrib/prometheus.md
OK, this makes it more conspicuous
Co-authored-by: Frank Chen <frankchen@apache.org>
* Update doc
* Update docs/development/extensions-contrib/prometheus.md
Co-authored-by: Frank Chen <frankchen@apache.org>
* When PrometheusEmitter is closed, close the scheduler
* Ensure that registeredMetrics is thread-safe.
* Local variable name improvements
* Remove unnecessary whitespace characters
Co-authored-by: Frank Chen <frankchen@apache.org>
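A hedged sketch of the periodic-push-and-close pattern from the prometheus-emitter commits above (names are illustrative, not the extension's actual code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PushgatewayEmitterSketch implements AutoCloseable
{
  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

  void start()
  {
    // Push collected metrics to the Pushgateway regularly and continuously.
    scheduler.scheduleAtFixedRate(this::pushMetrics, 0, 15, TimeUnit.SECONDS);
  }

  private void pushMetrics()
  {
    // ... send the registered metrics to the Pushgateway ...
  }

  @Override
  public void close()
  {
    // When the emitter is closed, close the scheduler so pushes stop.
    scheduler.shutdownNow();
  }
}
```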
* Update tutorial-kafka.md
Added the missing ZooKeeper startup command to the doc, needed before starting Kafka
* Update docs/tutorials/tutorial-kafka.md
Co-authored-by: Frank Chen <frankchen@apache.org>
changes:
* long and double value columns are now written directly, at the same time as writing out the 'intermediary' dictionary id column with unsorted ids
* remove reverse value lookup from GlobalDictionaryIdLookup since it is no longer needed
* fix bug in /status/properties filtering
* Refactor tests to use Jackson for parsing druid.server.hiddenProperties instead of hacky string modifications
* make javadoc more descriptive by using an example
* add a sanity assertion that the raw properties keyset is larger than the filtered properties keyset
* MSQ extension: Fix over-capacity write in ScanQueryFrameProcessor.
Frame processors are meant to write only one output frame per cycle.
The ScanQueryFrameProcessor would write two when reading from a channel
if the input frame cursor cycled and then the output frame filled up
while reading from the next frame.
This patch fixes the bug, and adds a test. It also makes some adjustments
to the processor code in order to make it easier to test.
* Add license header.
* Add interpolation to JsonConfigurator
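A hedged sketch of property interpolation with Apache Commons Text, the library referenced by the commons-text commits below (the actual JsonConfigurator wiring differs):

```java
import org.apache.commons.text.StringSubstitutor;

// createInterpolator() resolves lookups such as ${env:HOME} and ${sys:user.name}.
StringSubstitutor substitutor = StringSubstitutor.createInterpolator();
String resolved = substitutor.replace("${sys:java.version}");
```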
* Fix checkstyle
* Fix tests by removing commons-text override
* Add back commons-text without version
* Remove unused hadoopDir configs
* Move some stuff to hopefully pass coverage
* more consistent expression error messages
* review stuff
* add NamedFunction for Function, ApplyFunction, and ExprMacro to share common stuff
* fixes
* add expression transform name to transformer failure, better parse_json error messaging
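A hedged sketch of the shared-name idea behind NamedFunction (hypothetical shape, not Druid's exact interface):

```java
interface NamedFunction
{
  String name();

  // Shared helper so Function, ApplyFunction, and ExprMacro all produce
  // consistently formatted failure messages.
  default IllegalArgumentException validationFailed(String reason)
  {
    return new IllegalArgumentException(String.format("Function[%s] %s", name(), reason));
  }
}
```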
- Add classes for writing cell values in LZ4 block-compressed format.
Payloads are indexed by element number for efficient random lookup.
- Update SerializablePairLongStringComplexMetricSerde to use block
compression.
- SerializablePairLongStringComplexMetricSerde also uses delta encoding
of the Long by doing 2-pass encoding: it buffers first to find the min/max
and delta-encodes the values as integers if possible.
Entry points for block-compressed storage of byte[] payloads
are the CellWriter and CellReader classes. See
SerializablePairLongStringComplexMetricSerde for how these are used,
along with how to do full column-based storage (delta encoding here),
which includes 2-pass encoding to compute a column header.
Co-authored-by: Clint Wylie <cjwylie@gmail.com>
Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
Co-authored-by: Charles Smith <techdocsmith@gmail.com>
Co-authored-by: brian.le <brian.le@imply.io>
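A hedged sketch of the 2-pass delta encoding idea described above (illustrative only; the real CellWriter/column header logic is more involved):

```java
// Pass 1: find min/max. Pass 2: if the spread fits in an int, store int deltas.
static int[] deltaEncode(long[] values)
{
  long min = Long.MAX_VALUE;
  long max = Long.MIN_VALUE;
  for (long v : values) {
    min = Math.min(min, v);
    max = Math.max(max, v);
  }
  long spread = max - min;
  if (spread < 0 || spread > Integer.MAX_VALUE) {
    return null; // spread overflowed or is too wide; fall back to raw longs
  }
  int[] deltas = new int[values.length];
  for (int i = 0; i < values.length; i++) {
    deltas[i] = (int) (values[i] - min); // decode later as min + delta
  }
  return deltas;
}
```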
Compressed Big Decimal is an extension that provides a mutable
big decimal value, which can be used to accumulate values
without losing precision or reallocating memory. This type helps with
absolute-precision arithmetic on large numbers in applications
where a greater level of accuracy is required, such as financial
applications and currency-based transactions. It helps avoid rounding
issues in which potentially large amounts of money can be lost.
Accumulation requires that the two numbers have the same scale,
but does not require that they are the same size. If the value
being accumulated has a larger underlying array than this value
(the result), then the higher-order bits are dropped, similar to what
happens when adding a long to an int and storing the result in an
int. A compressed big decimal holds its data in an embedded array.
Compressed big decimal is an absolute-precision complex type
based on Java's BigDecimal, and it supports all the functionality
of Java BigDecimal. Java BigDecimal is immutable in order to
avoid heavy garbage collection; compressed big decimal
exists so the value in the accumulator can be mutated in place.
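A tiny illustration of the "higher-order bits are dropped" behavior, using the long-to-int analogy from the text:

```java
long big = (1L << 40) + 123;
int truncated = (int) big; // bits above 32 are dropped, so truncated == 123
```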
* apache#12063 Ease of hiding sensitive properties from /status/properties endpoint
* apache#12063 Ease of hiding sensitive properties from /status/properties endpoint
* apache#12063 Ease of hiding sensitive properties from /status/properties endpoint
Use one property for hiding properties; updated index.md to document hiddenProperties
* apache#12063 Ease of hiding sensitive properties from /status/properties endpoint
Added javadocs
* apache#12063 Ease of hiding sensitive properties from /status/properties endpoint
Add "password", "key", "token", "pwd" as default druid.server.hiddenProperties
Fixed a typo and removed a redundant space
Co-authored-by: zemin <zemin.piao@adyen.com>
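A hedged sketch of substring-based property hiding as described in these commits (hypothetical helper; Druid's actual filtering code differs):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

static Map<String, String> filterHidden(Properties props, List<String> hiddenSubstrings)
{
  // Drop any property whose name contains a hidden substring such as
  // "password", "key", "token", or "pwd".
  Map<String, String> visible = new TreeMap<>();
  for (String name : props.stringPropertyNames()) {
    boolean hidden = hiddenSubstrings.stream()
                                     .anyMatch(h -> name.toLowerCase().contains(h.toLowerCase()));
    if (!hidden) {
      visible.put(name, props.getProperty(name));
    }
  }
  return visible;
}
```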
Two improvements:
- Use a realistic targetRowsPerSegment, so if people copy and paste
the example from the docs, it will generate reasonable segments.
- Spell "countryName" correctly.
The docs say Java 17 support is experimental, and give tips on running
successfully with Java 17.
This patch also removes java.base/jdk.internal.perf and
jdk.management/com.sun.management.internal from the list of required
exports and opens, because they were formerly needed for JvmMonitor,
which was rewritten in #12481 to use MXBeans instead.
* FrameFile: Java 17 compatibility.
DataSketches Memory.map is not Java 17 compatible, and from discussions
with the team, is challenging to make compatible with 17 while also
retaining compatibility with 8 and 11. So, in this patch, we switch away
from Memory.map and instead use the builtin JDK mmap functionality. Since
it only supports maps up to Integer.MAX_VALUE, we also implement windowing
in FrameFile, such that we can still handle large files.
Other changes:
1) Add two new "map" functions to FileUtils, which we use in this patch.
2) Add a footer checksum to the FrameFile format. Individual frames
already have checksums, but the footer was missing one.
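A hedged sketch of the mmap windowing approach described above (illustrative; FrameFile's actual code handles checksums, closing, and random access differently):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Each MappedByteBuffer is limited to Integer.MAX_VALUE bytes, so map the file
// as a sequence of fixed-size windows and address bytes as (window, offset) pairs.
static List<MappedByteBuffer> mapWindows(Path file, int windowSize) throws IOException
{
  try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
    long length = channel.size();
    List<MappedByteBuffer> windows = new ArrayList<>();
    for (long pos = 0; pos < length; pos += windowSize) {
      long size = Math.min(windowSize, length - pos);
      windows.add(channel.map(FileChannel.MapMode.READ_ONLY, pos, size));
    }
    return windows;
  }
}
```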
* Changes for static analysis.
* wip
* Fixes.
* Building druid-it-tools and running it.sh for Travis
* Addressing comments
* Updating druid-it-image pom to point to correct it-tools
* Updating all it-tools references to druid-it-tools
* Adding dist back to it.sh travis
* Trigger Build
* Disabling batchIndex tests and commenting out user-specific code
* Fixing checkstyle and intellij inspection errors
* Replacing tabs with spaces in it.sh
* Enabling old batch index tests with indexer