This adds an option to disable "empty directory" marker deletion,
so as to avoid throttling and other scale problems.
This feature is *not* backwards compatible.
Consult the documentation and use with care.
Contributed by Steve Loughran.
Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
* Passing C/C++ standard flags -std is
not cross-compiler friendly as not all
compilers support all values.
* Thus, we need to make use of the
appropriate flags provided by CMake in
order to specify the C/C++ standards.
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
(cherry picked from commit 909f1e82d3)
* HDFS-15436. Default mount table name used by ViewFileSystem should be configurable
* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests
* Address Uma's comments on PR#2100
* Sort lists in test to match without concern to order
* Address comments, fix checkstyle and fix failing tests
* Fix checkstyle
(cherry picked from commit bed0a3a374)
Contributed by Steve Loughran.
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d
Contributed by Steve Loughran.
This moves the new API of HDFS-13616 into an interface which is implemented by
the HDFS RPC filesystem client (not WebHDFS or any other connector).
This new interface, BatchListingOperations, is in hadoop-common,
so applications do not need to be compiled with HDFS on the classpath.
Applications must cast the FS instance to the interface; an instanceof
check can probe the client for the new interface. The patch
also adds a new path capability to probe for this.
The FileSystem implementation is cut; tests updated as appropriate.
All new interfaces/classes/constants are marked as @unstable.
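A minimal sketch of how an application might probe for the interface; the capability name shown is a placeholder, and the assumption here is that BatchListingOperations lives in org.apache.hadoop.fs:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BatchListingOperations;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BatchListingProbe {
    public static void main(String[] args) throws Exception {
      Path path = new Path("hdfs://cluster/data");
      FileSystem fs = FileSystem.get(path.toUri(), new Configuration());

      // Hypothetical capability name; the patch registers a real one.
      boolean capable = fs.hasPathCapability(path, "fs.capability.batch.listing");

      // The interface is in hadoop-common, so no HDFS classes are needed
      // on the compile-time classpath; only the cast is required.
      if (capable && fs instanceof BatchListingOperations) {
        BatchListingOperations batch = (BatchListingOperations) fs;
        // ... invoke the batched listing operations on 'batch' here.
      }
    }
  }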
Change-Id: I5623c51f2c75804f58f915dd7e60cb2cffdac681
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
connector does not: checking the entire parent directory tree to see whether
any element is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice, production applications invariably create destination directories
before writing one or more files into them; restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Followup to HADOOP-16885: Encryption zone file copy failure leaks a temp file
Moving the delete() call broke a mocking test, which slipped through the review process.
Contributed by Steve Loughran.
Change-Id: Ia13faf0f4fffb1c99ddd616d823e4f4d0b7b0cbb
Contributed by Xiaoyu Yao.
Contains HDFS-14892. Close the output stream if createWrappedOutputStream() fails
Copying a file through the FsShell command into an HDFS encryption zone where
the caller lacks permissions leaks a temp ._COPYING file
and potentially leaves a wrapped stream unclosed.
This is a convergence of a fix for S3 meeting an issue in HDFS.
S3: a HEAD against a file can cache a 404,
so you must not do any existence checks, including deleteOnExit(),
until the file is written.
Hence HADOOP-16490: only register files for deletion once the create worked
and the upload is not direct.
HDFS-14892: HDFS doesn't close wrapped streams when IOEs are raised on
create() failures, which means that an entry is retained on the NN;
so you need to register a file with deleteOnExit() even if the file wasn't
created.
This patch:
* Moves the deleteOnExit to ensure the created file gets deleted cleanly.
* Fixes HDFS to close the wrapped stream on failures.
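A minimal sketch of the close-on-failure pattern the HDFS part of the fix describes; the wrapping method here is a stand-in, not the actual DFS client code:

  import java.io.IOException;
  import java.io.OutputStream;

  import org.apache.hadoop.io.IOUtils;

  final class WrappedStreamExample {
    private WrappedStreamExample() {
    }

    // If wrapping the freshly created stream fails, close the underlying
    // stream so the NameNode entry / temp file is not leaked.
    static OutputStream wrapSafely(OutputStream underlying) throws IOException {
      try {
        return createWrappedOutputStream(underlying);
      } catch (IOException | RuntimeException e) {
        IOUtils.closeStream(underlying);  // best effort; close errors are ignored
        throw e;
      }
    }

    // Stand-in for the encryption-zone wrapping step (e.g. a CryptoOutputStream).
    private static OutputStream createWrappedOutputStream(OutputStream out)
        throws IOException {
      return out;
    }
  }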
Followup to the main openFile().withStatus() patch.
It turns out that this broke the Hive builds, which
was not well appreciated.
This patch lists places to review in the hadoop codebase,
and external projects where changes are likely to cause problems.
Contributed by Steve Loughran
Change-Id: Ifac815c65b74d083cd277764b780ac2b5b0f6b36
Contributed by Steve Loughran.
During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.
But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those POSTs alone can trigger throttling on an already loaded
S3 directory tree, which can trigger backoff and retry with the same
thousand-entry POST, and so recreate the exact same problem.
Fixes
* Page size for delete object requests is set in
fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
can be set to false to disable throttle retry logic in the AWS
client SDK; it is all handled in the S3A client. This
gives more visibility into when operations are being throttled
(a configuration sketch follows this list).
* Bulk delete throttling events are logged to the
org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
often then choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate" can track throttling
load over time.
* DynamoDB metastore throttle resilience issues have also been
identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
flag does not apply to DDB IO precisely because there may still be
lurking issues there and it is safest to rely on the DynamoDB client
SDK.
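A minimal sketch of setting the options above programmatically; the values are illustrative only:

  import org.apache.hadoop.conf.Configuration;

  public class S3ABulkDeleteTuning {
    public static Configuration tune(Configuration conf) {
      // Smaller pages spread the DeleteObjects load and reduce throttling spikes.
      conf.setInt("fs.s3a.bulk.delete.page.size", 100);
      // Leave throttle retries to the S3A client rather than the AWS SDK,
      // so throttling shows up in the S3A logs and metrics.
      conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);
      return conf;
    }
  }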
Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
This is a regression caused by HADOOP-16759.
The test TestHarFileSystem uses introspection to verify that HarFileSystem
does not implement methods for which there is a suitable implementation in
the base FileSystem class. Because of the way it checks this, refactoring
(protected) FileSystem methods in an IDE does not automatically change
the probes in TestHarFileSystem.
The changes in HADOOP-16759 did exactly that, and somehow managed
to get through the build/test process without this being noticed.
This patch fixes that failure.
Caused by and fixed by Steve Loughran.
Change-Id: If60d9c97058242871c02ad1addd424478f84f446
Signed-off-by: Mingliang Liu <liuml07@apache.org>
Contributed by Mustafa Iman.
This adds a new configuration option fs.s3a.connection.request.timeout
to declare the timeout on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported.
Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
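A minimal sketch of setting this option; the ten-minute value is only an example:

  import org.apache.hadoop.conf.Configuration;

  public class S3ARequestTimeoutExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Cap every AWS service call, including uploads and copies, at ten
      // minutes; "0" keeps the default behaviour of no timeout.
      conf.set("fs.s3a.connection.request.timeout", "10m");
    }
  }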
Change-Id: I407745341068b702bf8f401fb96450a9f987c51c
* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file
Works with S3AFileStatus and S3ALocatedFileStatus.
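A minimal sketch of the enhanced builder in use; the builder method name withFileStatus() is an assumption (this entry only calls it withStatus()), and the paths are illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class OpenFileWithStatusExample {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://bucket/data/part-0000");
      FileSystem fs = FileSystem.get(path.toUri(), new Configuration());

      // In practice the status would come from a directory listing;
      // passing it back lets S3A skip the HEAD on open and reuse the
      // etag/version it already knows about.
      FileStatus status = fs.getFileStatus(path);

      try (FSDataInputStream in = fs.openFile(path)
          .withFileStatus(status)   // assumed builder method name
          .build()
          .get()) {
        in.read();
      }
    }
  }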
Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.
For details on how to use this, consult the performance document
in the s3a documentation.
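A minimal sketch of enabling the mode; as noted below, the wildfly OpenSSL bindings must be available for it to work:

  import org.apache.hadoop.conf.Configuration;

  public class S3AOpenSSLChannelExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Experimental: requires the wildfly OpenSSL bindings on the classpath.
      conf.set("fs.s3a.ssl.channel.mode", "openssl");
    }
  }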
This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.
Related issues:
* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final
Contributed by Sahil Takiar
Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
Contributed by David Mollitor.
This adds operations in FileUtil to write text to a file via
either a FileSystem or FileContext instance.
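A hedged sketch of what using such a helper could look like; the method name and signature FileUtil.write(FileSystem, Path, CharSequence, Charset) are an assumption here, not confirmed by this entry:

  import java.nio.charset.StandardCharsets;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.FileUtil;
  import org.apache.hadoop.fs.Path;

  public class WriteTextExample {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.getLocal(new Configuration());
      Path target = new Path("/tmp/example.txt");
      // Assumed helper: write a CharSequence to the path in one call.
      FileUtil.write(fs, target, "hello, world\n", StandardCharsets.UTF_8);
    }
  }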
Change-Id: I5fe8fcf1bdbdbc734e137f922a75a822f2b88410
Contains:
HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.
This patch improves the marking of newly created/imported directory
trees in S3Guard DynamoDB tables as authoritative.
Specific changes:
* Renamed directories are marked as authoritative if the entire
operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write,
there's no overwriting of their authoritative flag.
s3guard import changes:
* new -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative
hadoop s3guard import -authoritative -verbose s3a://bucket/path
When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.
As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).
There is a new s3guard command to audit a s3guard bucket/path's
authoritative state:
hadoop s3guard authoritative -check-config s3a://bucket/path
This is primarily for testing/auditing.
The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).
Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91
This hardens the wasb and abfs output streams' resilience to being invoked
in/after close().
wasb:
Explicitly raise IOEs on operations invoked after close,
rather than implicitly raising NPEs.
This ensures that invocations which catch and swallow IOEs will perform as
expected.
abfs:
When rethrowing an IOException in the close() call, explicitly wrap it
with a new instance of the same subclass.
This is needed to handle failures in try-with-resources clauses, where
any exception in close() is added as a suppressed exception to the one
thrown in the try {} clause
*and you cannot attach the same exception to itself*.
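A minimal sketch of the rewrap idea; the actual abfs code preserves the exception's subclass, whereas this illustration just creates a fresh IOException with the original as its cause:

  import java.io.IOException;

  final class RewrapExample {
    private RewrapExample() {
    }

    // Rethrowing a *new* exception instance from close() lets
    // try-with-resources attach it as a suppressed exception to whatever
    // was thrown in the try block; Java forbids an exception suppressing
    // itself, so rethrowing the original cached instance would fail.
    static IOException rewrap(IOException cached) {
      return new IOException(cached.getMessage(), cached);
    }
  }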
Contributed by Steve Loughran.
Change-Id: Ic44b494ff5da332b47d6c198ceb67b965d34dd1b
MBeanInfoBuilder's get() method DEBUG-logs all the MBeanAttributeInfo attributes that it gathered. This can cause high memory churn that can be easily avoided.
Contributed by Steve Loughran.
This hardens FileSystem instantiation so that if an IOException or RuntimeException is
raised in the invocation of FileSystem.initialize(), a best-effort
attempt is made to close the FS instance; exceptions raised in that
close are swallowed.
The S3AFileSystem is also modified to do its own cleanup if an
IOException is raised during its initialize() process, it being the
FS we know has the "potential" to leak threads, especially in
extension points (e.g. AWS Authenticators) which spawn threads.
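A minimal sketch of the instantiation pattern described above; the helper is illustrative rather than the actual FileSystem.get() internals:

  import java.io.IOException;
  import java.net.URI;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.io.IOUtils;

  final class InitCleanupExample {
    private InitCleanupExample() {
    }

    static FileSystem initialize(FileSystem fs, URI uri, Configuration conf)
        throws IOException {
      try {
        fs.initialize(uri, conf);
        return fs;
      } catch (IOException | RuntimeException e) {
        // Best-effort close so threads/connections are not leaked;
        // any exception raised by the close is swallowed.
        IOUtils.cleanupWithLogger(null, fs);
        throw e;
      }
    }
  }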
Change-Id: Ib84073a606c9d53bf53cbfca4629876a03894f04
* HADOOP-16579 - Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.5
- Add a static initializer for the unit tests using ZooKeeper to enable
the four-letter-words diagnostic telnet commands (this is an interface
that became disabled by default, so to keep the ZooKeeper 3.4.x behavior
we enabled it for the tests; a sketch follows this list).
- Also fix ZKFailoverController to look for relevant fail-over ActiveAttempt
records. The new ZooKeeper seems to respond quicker during the fail-over
tests than the old one, so we made sure to catch all the relevant records
by adding a new parameter to ZKFailoverController.waitForActiveAttempt().
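A minimal sketch of such a static initializer; the system property is ZooKeeper's standard four-letter-word whitelist switch, and whitelisting everything is intended only for tests:

  public class ZooKeeperTestBase {
    static {
      // ZooKeeper 3.5 disables most four-letter-word commands by default;
      // whitelist them all so tests keep the 3.4.x-style diagnostics.
      System.setProperty("zookeeper.4lw.commands.whitelist", "*");
    }
  }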
Co-authored-by: Norbert Kalmár <nkalmar@cloudera.com>
Contributed by Steve Loughran.
Includes
-S3A glob scans don't bother trying to resolve symlinks
-stack traces don't get lost in getFileStatuses() when exceptions are wrapped
-debug level logging of what is up in Globber
-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
-ITestRestrictedReadAccess tests incomplete read access to files.
This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.
Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7
Contributed by Steve Loughran.
This complements the StreamCapabilities interface by allowing applications to probe whether a specific path
on a specific instance of a FileSystem client offers a specific capability.
This is intended to allow applications to determine
* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.
* Whether a specific feature is believed to be available in the remote store.
As well as a common set of capabilities defined in CommonPathCapabilities,
file systems are free to add their own capabilities, prefixed with
fs. + schema + .
The plan is to identify and document more capabilities, and, for file systems which add new features, to always add a declaration of the feature's availability.
Note
* The remote store is not expected to be checked for the feature;
it is more a check of the client API and the client's configuration/knowledge
of the state of the remote system.
* Permissions are not checked.
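A minimal sketch of the probe; the capability name used here is illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PathCapabilityProbe {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://bucket/logs");
      FileSystem fs = FileSystem.get(path.toUri(), new Configuration());

      // Ask the client whether append is believed to be available for this
      // path; the remote store is not contacted and permissions are not checked.
      if (fs.hasPathCapability(path, "fs.capability.paths.append")) {
        // Safe to call fs.append(path) without expecting
        // UnsupportedOperationException.
      }
    }
  }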
Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5
In the existing implementation, the ValueQueue::getAtMost() method will only trigger a refill on a key queue if it has gone empty, instead of triggering a refill when it has gone below the watermark. Revise the test suite to correctly verify this behavior.
Contributed by Sahil Takiar.
This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it so that
it can avoid using those HTTPS algorithms which are underperformant on Java 8.
Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
Contributed by Steve Loughran.
This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.
The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.
Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 Load Balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.
The options and values are:
fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7
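A minimal sketch of tuning these options; the values shown are the defaults listed above:

  import org.apache.hadoop.conf.Configuration;

  public class S3GuardConsistencyRetryExample {
    public static Configuration tune(Configuration conf) {
      // Gap between FileNotFound retries: long enough for a cached 404 in
      // the S3 load balancers to (hopefully) expire.
      conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
      // Number of attempts before giving up.
      conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
      return conf;
    }
  }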
The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
so it is not downgraded to false. Thus, when a rename is unrecoverable, this fact is propagated.
Copy operations without S3Guard lack the confidence that the file exists, so don't retry the same way:
it will fail fast with a different error message. However, because create(path, overwrite=false) no
longer does HEAD path, we can at least be confident that S3A itself is not creating those cached
404 markers.
Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
Contributed by Steve Loughran.
This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.
It also uses S3Guard to list the files to delete, so it finds newly created files even when S3 listings are not consistent.
For paths which the client considers S3Guard to be authoritative for, we also do a recursive LIST of the store and delete those files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.
Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e
Contributed by Steve Loughran and Gabor Bota.
This
* Asks S3Guard to determine the empty directory status.
* Has S3A's root directory rm("/") command to always return false (as abfs does)
* Documents that object stores MAY do this
* Overloads ContractTestUtils.assertDeleted to let assertions declare that the source directory does not need to exist. This stops inconsistencies in directory listings failing a root test.
It avoids a recent regression (HADOOP-16279) where if there was a tombstone above the first element found in a directory listing, the directory would be considered empty, when in fact there were child entries. That could downgrade an rm(path, recursive) to a no-op, while also confusing rename(src, dest), as dest could be mistaken for an empty directory and so permit the copy above it, rather than rejecting it with "destination path exists and is not empty".
Change-Id: I136a3d1a5a48a67e6155d790a40ff558d0d2c108
Contributed by Steve Loughran
Contains
- HADOOP-16397. Hadoop S3Guard Prune command to support a -tombstone option.
- HADOOP-16406. ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
This patch doesn't fix the underlying problem, but it
* changes some tests to clean up better
* does a lot more logging of operations against DDB, if enabled
* adds an entry point to dump the state of the metastore and S3 tables (precursor to fsck)
* adds a purge entry point to help clean up after a test run has got a store into a mess
* adds a -tombstone option to the s3guard prune command to only clear tombstones
The outcome is that tests should pass consistently and if problems occur we have better diagnostics.
Change-Id: I3eca3f5529d7f6fec398c0ff0472919f08f054eb
Contributed by Steve Loughran.
This patch
* changes the default for the staging committer to append, as we get for the classic FileOutputFormat committer
* adds a check for the dest path being a file, not a dir
* adds tests for this
* changes AbstractCommitTerasortIT to not use the simple parser, so that it fails if the file is present
Change-Id: Id53742958ed1cf321ff96c9063505d64f3254f53