mirror of https://github.com/apache/druid.git
Commit: 5ce3185b9c
* Fix three bugs with segment publishing.

  1. In AppenderatorImpl: always use a unique path if requested, even if the segment was already pushed. This is important because if we don't do this, it causes the issue mentioned in #6124.
  2. In IndexerSQLMetadataStorageCoordinator: fix a bug that could cause it to return a "not published" result instead of throwing an exception, when there was one metadata update failure followed by some unrelated exception. This is done by resetting the AtomicBoolean that tracks which case we're in each time the callback runs.
  3. In BaseAppenderatorDriver: only kill segments if we get an affirmative "false" publish result. Skip killing if we just got some exception, because we want to avoid killing segments that are in an unknown state.

  Two other changes to clarify the contracts a bit and hopefully prevent future bugs:

  1. Return SegmentPublishResult from TransactionalSegmentPublisher, to make it more similar to announceHistoricalSegments.
  2. Make it explicit, at multiple levels of javadocs, that a "false" publish result must indicate that the publish _definitely_ did not happen. Unknown states must be exceptions. This helps BaseAppenderatorDriver do the right thing.

* Remove javadoc-only import.
* Updates.
* Fix test.
* Fix tests.
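The publish-result contract described in the third fix can be illustrated with a short sketch. This is a minimal, self-contained example of the idea only; the class and method names below (`PublishOutcome`, `publishAndHandle`, `killSegments`) are hypothetical stand-ins, not the actual Druid APIs (`SegmentPublishResult`, `BaseAppenderatorDriver`, etc.), and the real code paths differ.

```java
import java.util.Set;
import java.util.concurrent.Callable;

public class PublishContractSketch {

  /** Hypothetical stand-in for a publish result: "success == false" must mean
      the publish DEFINITELY did not happen; unknown outcomes must be exceptions. */
  static final class PublishOutcome {
    final boolean success;
    PublishOutcome(boolean success) { this.success = success; }
  }

  /**
   * Attempt a transactional publish and decide whether pushed segments may be cleaned up.
   * The rule from the commit message: kill segments only on an affirmative "not published"
   * result, never on an exception, because an exception leaves them in an unknown state.
   */
  static void publishAndHandle(Callable<PublishOutcome> publisher, Set<String> segmentIds) {
    final PublishOutcome outcome;
    try {
      outcome = publisher.call();
    } catch (Exception e) {
      // Unknown state: the metadata store may or may not have committed the transaction.
      // Do NOT kill the pushed segments; surface the error and let a higher layer retry or fail.
      throw new RuntimeException("Publish failed with unknown outcome; not killing segments", e);
    }

    if (outcome.success) {
      System.out.println("Published segments: " + segmentIds);
    } else {
      // Affirmative "false": the publish definitely did not happen (e.g. a metadata
      // precondition failed), so the pushed segments can be safely cleaned up.
      killSegments(segmentIds);
    }
  }

  /** Hypothetical cleanup hook; in Druid this corresponds to removing pushed segment data. */
  static void killSegments(Set<String> segmentIds) {
    System.out.println("Killing unpublished segments: " + segmentIds);
  }
}
```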
Directory listing:

- avro-extensions
- datasketches
- druid-basic-security
- druid-kerberos
- hdfs-storage
- histogram
- kafka-eight
- kafka-extraction-namespace
- kafka-indexing-service
- lookups-cached-global
- lookups-cached-single
- mysql-metadata-storage
- postgresql-metadata-storage
- protobuf-extensions
- s3-extensions
- simple-client-sslcontext
- stats