This switches the SnappyCodec to use the java-snappy codec, rather than the native one.
To use the codec, snappy-java.jar (from org.xerial.snappy) needs to be on the classpath.
This comes in as an Avro dependency, so it is already on the hadoop-common classpath,
as well as in hadoop-common/lib.
The version used is now managed in the hadoop-project POM; initially 1.1.7.7
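As a quick illustration of what the new dependency provides, here is a minimal sketch that exercises the org.xerial.snappy API directly; it assumes snappy-java.jar is on the classpath as described above.

    import java.io.IOException;
    import org.xerial.snappy.Snappy;

    public class SnappyJavaRoundTrip {
      public static void main(String[] args) throws IOException {
        byte[] input = "hello, snappy".getBytes("UTF-8");
        // Pure snappy-java compression: no libhadoop/native snappy required.
        byte[] compressed = Snappy.compress(input);
        byte[] restored = Snappy.uncompress(compressed);
        System.out.println(new String(restored, "UTF-8"));
      }
    }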
Contributed by DB Tsai and Liang-Chi Hsieh
Change-Id: Id52a404a0005480e68917cd17f0a27b7744aea4e
When a filesystem is closed, the FileSystem log will, at debug level,
log the method calling close/closeAll.
At trace level: the full calling stack.
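A minimal sketch of turning this on programmatically; the logger name org.apache.hadoop.fs.FileSystem is an assumption here, so adjust it (or the equivalent log4j.properties entry) to whatever logger the release actually emits on.

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class EnableFsCloseLogging {
      public static void main(String[] args) {
        // Assumed logger name for the FileSystem close/closeAll diagnostics.
        Logger fsLog = Logger.getLogger("org.apache.hadoop.fs.FileSystem");
        fsLog.setLevel(Level.DEBUG);   // DEBUG: log the method calling close()/closeAll()
        // fsLog.setLevel(Level.TRACE); // TRACE: also log the full calling stack
      }
    }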
Contributed by Karen Coppage.
Change-Id: I1444f065c171fd31d42b497c92ba4517969f67f0
This changes directory tree deletion so that only files are incrementally deleted
from S3Guard after the objects are deleted; the directories are left alone
until metadataStore.deleteSubtree(path) is invoked.
This avoids directory tombstones being added above files/child directories,
which stop the treewalk and delete phase from working.
Also:
* Callback to delete objects splits files and dirs so that
any problems deleting the dirs don't trigger S3Guard updates
* New statistic to measure the number of objects deleted, alongside request count.
* Callback listFilesAndEmptyDirectories renamed listFilesAndDirectoryMarkers
to clarify behavior.
* Test enhancements to replicate the failure and verify the fix
Contributed by Steve Loughran
Change-Id: I0e6ea2c35e487267033b1664228c8837279a35c7
Contributed by Steve Loughran.
* Fixes AbstractContractSeekTest to use readFully
* Doesn't do this to AbstractContractUnbufferTest as it changes the test too much.
Instead it just notes in the error that this may be transient.
The issue is that read(buffer) doesn't guarantee that the buffer is filled, only that it will
read up to a point, and that may just be the amount of data left in the TCP packet.
readFully corrects for this, but using it in the unbuffer test runs the risk that what
is tested for in terms of unbuffering doesn't actually get validated.
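The distinction at the heart of the fix, as a small sketch against the generic stream API (method names here are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataInputStream;

    public class ReadVsReadFully {
      // read() may return after filling only part of the buffer, e.g. just
      // the bytes left in the current TCP packet.
      static int readOnce(FSDataInputStream in, byte[] buffer) throws IOException {
        return in.read(buffer);   // returns how many bytes were actually read
      }

      // readFully() loops until the whole buffer is filled (or throws EOFException),
      // which is what the seek contract test really needs to assert against.
      static void readAll(FSDataInputStream in, byte[] buffer) throws IOException {
        in.readFully(buffer, 0, buffer.length);
      }
    }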
Change-Id: I046eadb69b80ba0aac468b354c82c4d510dc3699
This adds an option to disable "empty directory" marker deletion,
to avoid throttling and other scale problems.
This feature is *not* backwards compatible.
Consult the documentation and use with care.
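A hedged sketch of enabling the option through the configuration; the property name and value below are assumptions, so consult the documentation for the exact key.

    import org.apache.hadoop.conf.Configuration;

    public class KeepDirectoryMarkers {
      public static Configuration markerRetainingConf() {
        Configuration conf = new Configuration();
        // Assumed property name/value: retain directory markers instead of
        // deleting them, trading compatibility with older clients for fewer
        // DELETE requests (and hence less throttling).
        conf.set("fs.s3a.directory.marker.retention", "keep");
        return conf;
      }
    }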
Contributed by Steve Loughran.
Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
* Passing C/C++ standard flags -std is not cross-compiler friendly, as not all compilers support all values.
* Thus, we need to make use of the appropriate flags provided by CMake in order to specify the C/C++ standards.
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
(cherry picked from commit 909f1e82d3)
* HDFS-15436. Default mount table name used by ViewFileSystem should be configurable
* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests
* Address Uma's comments on PR#2100
* Sort lists in test to match without concern to order
* Address comments, fix checkstyle and fix failing tests
* Fix checkstyle
(cherry picked from commit bed0a3a374)
Contributed by Steve Loughran.
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
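For the one case where wildfly.jar is still required, the setup looks roughly like this; the property name and value are quoted from the text above, while the rest of the snippet is illustrative.

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class OpensslChannelMode {
      public static FileSystem openWithOpenssl(URI bucket) throws IOException {
        Configuration conf = new Configuration();
        // Only this setting needs wildfly.jar (and the native openssl libraries)
        // on the classpath; other channel modes work without them.
        conf.set("fs.s3a.ssl.channel.mode", "openssl");
        return FileSystem.get(bucket, conf);
      }
    }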
Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d
Contributed by Steve Loughran.
This moves the new API of HDFS-13616 into an interface which is implemented by
the HDFS RPC filesystem client (not WebHDFS or any other connector).
This new interface, BatchListingOperations, is in hadoop-common,
so applications do not need to be compiled with HDFS on the classpath.
They must cast the FS into the interface.
instanceof can probe the client for having the new interface; the patch
also adds a new path capability to probe for this.
The FileSystem implementation is cut; tests updated as appropriate.
All new interfaces/classes/constants are marked as @unstable.
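A sketch of the probing pattern this enables; BatchListingOperations comes from the text above, while the capability string shown is a placeholder for whichever constant the patch defines.

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BatchListingProbe {
      // Placeholder capability name; the real constant is defined by the patch.
      private static final String BATCH_LISTING = "fs.capability.batch.listing";

      public static boolean supportsBatchListing(FileSystem fs, Path path)
          throws IOException {
        // Path-capability probe; alternatively:
        //   if (fs instanceof BatchListingOperations) { cast and use it }
        // Neither probe needs HDFS classes on the compile-time classpath,
        // because the interface lives in hadoop-common.
        return fs.hasPathCapability(path, BATCH_LISTING);
      }
    }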
Change-Id: I5623c51f2c75804f58f915dd7e60cb2cffdac681
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
connector does not: walking up the entire directory tree to check whether
any parent path is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice, production applications invariably create destination directories
before writing one or more files into them; restricting the check purely to the
mkdirs() call delivers a significant speedup while implicitly including the checks.
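The walk described above, as a hedged sketch against the generic FileSystem API (the helper is illustrative, not the actual S3A code):

    import java.io.FileNotFoundException;
    import java.io.IOException;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.ParentNotDirectoryException;
    import org.apache.hadoop.fs.Path;

    public class ParentProbe {
      // Walk backwards up the parents of dir: a directory means success,
      // a file means failure, a missing entry means keep walking.
      static void verifyNoParentIsFile(FileSystem fs, Path dir) throws IOException {
        Path probe = dir.getParent();
        while (probe != null && !probe.isRoot()) {
          try {
            FileStatus status = fs.getFileStatus(probe);
            if (status.isFile()) {
              throw new ParentNotDirectoryException(probe + " is a file");
            }
            return;                        // found a directory: done
          } catch (FileNotFoundException e) {
            probe = probe.getParent();     // nothing there: keep walking up
          }
        }
      }
    }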
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Followup to HADOOP-16885: Encryption zone file copy failure leaks a temp file
Moving the delete() call broke a mocking test, which slipped through the review process.
Contributed by Steve Loughran.
Change-Id: Ia13faf0f4fffb1c99ddd616d823e4f4d0b7b0cbb
Contributed by Xiaoyu Yao.
Contains HDFS-14892. Close the output stream if createWrappedOutputStream() fails
Copying a file through the FsShell command into an HDFS encryption zone where
the caller lacks permissions leaks a temp ._COPYING file
and potentially leaves a wrapped stream unclosed.
This is a convergence of a fix for S3 meeting an issue in HDFS.
S3: a HEAD against a file can cache a 404;
you must not do any existence checks, including deleteOnExit(),
until the file is written.
Hence: HADOOP-16490, only register files for deletion once the create worked
and the upload is not direct.
HDFS-14892: HDFS doesn't close wrapped streams when IOEs are raised on
create() failures, which means that an entry is retained on the NN;
you need to register a file with deleteOnExit() even if the file wasn't
created.
This patch:
* Moves the deleteOnExit to ensure the created file gets deleted cleanly.
* Fixes HDFS to close the wrapped stream on failures.
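A rough sketch of the shape of the HDFS-side fix: close the underlying stream if wrapping it for the encryption zone fails, so no open-file entry is left behind on the NN. The wrap method below is a stand-in for the real createWrappedOutputStream() path.

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.hadoop.io.IOUtils;

    public class CloseOnWrapFailure {
      static OutputStream wrapSafely(OutputStream underlying) throws IOException {
        try {
          return wrapForEncryptionZone(underlying);  // stand-in for createWrappedOutputStream()
        } catch (IOException e) {
          // If wrapping fails, close the underlying stream so the NN does not
          // keep an entry for a stream the caller never sees.
          IOUtils.closeStream(underlying);
          throw e;
        }
      }

      private static OutputStream wrapForEncryptionZone(OutputStream out) throws IOException {
        return out;  // placeholder: the real code wraps with a crypto output stream
      }
    }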