mirror of https://github.com/apache/druid.git
HdfsDataSegmentPusher: Close tmpIndexFile before copying it. (#5873)
It seems that copy-before-close works OK on HDFS, but it doesn't work on all filesystems. In particular, we observed this not working properly with Google Cloud Storage. And anyway, it's better hygiene to close files before attempting to copy them somewhere else.
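The close-before-copy pattern the commit adopts can be sketched with plain java.nio on a local filesystem. This is a hypothetical analogue for illustration only: Druid's actual code uses Hadoop's FileSystem, FSDataOutputStream, and CompressionUtils.zip, none of which appear below.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the close-before-copy pattern: the temporary file's output
// stream is closed (and therefore fully flushed) before any copy is
// attempted, which is safe on any filesystem.
public class CloseBeforeCopy
{
  public static Path writeThenCopy(Path tmpFile, Path destFile, byte[] data) throws IOException
  {
    // try-with-resources guarantees the stream is closed when the block exits.
    try (OutputStream out = Files.newOutputStream(tmpFile)) {
      out.write(data);
    }
    // Only after the stream is closed do we copy the file elsewhere.
    return Files.copy(tmpFile, destFile, StandardCopyOption.REPLACE_EXISTING);
  }

  public static void main(String[] args) throws IOException
  {
    Path dir = Files.createTempDirectory("close-before-copy");
    Path dest = writeThenCopy(dir.resolve("tmp_index.zip"), dir.resolve("index.zip"),
                              "segment".getBytes());
    System.out.println(Files.readAllBytes(dest).length);
  }
}
```

Copying while the stream is still open risks reading an incompletely flushed file; HDFS happened to tolerate it, but filesystems such as Google Cloud Storage do not.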
This commit is contained in:
parent fe4d678aac
commit 0ae4aba4e2
@@ -125,8 +125,11 @@ public class HdfsDataSegmentPusher implements DataSegmentPusher
     final long size;
     final DataSegment dataSegment;
-    try (FSDataOutputStream out = fs.create(tmpIndexFile)) {
-      size = CompressionUtils.zip(inDir, out);
+    try {
+      try (FSDataOutputStream out = fs.create(tmpIndexFile)) {
+        size = CompressionUtils.zip(inDir, out);
+      }
+
       final String uniquePrefix = useUniquePath ? DataSegmentPusher.generateUniquePath() + "_" : "";
       final Path outIndexFile = new Path(StringUtils.format(
           "%s/%s/%d_%sindex.zip",