* Router to support resolving monitored namenodes with DNS
* Style
* Fix style and test failure
* Add test for NNHAServiceTarget const
* Resolve comments
* Fix test
* Comments and style
* Create a simple function to extract port
* Use LambdaTestUtils.intercept
* Fix javadoc
* Trigger Build
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.
Contributed by Steve Loughran.
Currently, if native zlib is not loaded, GzipCodec only supports BuiltInGzipDecompressor. So, without the Hadoop native codec installed, saving a SequenceFile using GzipCodec throws an exception like "SequenceFile doesn't work with GzipCodec without native-hadoop code!"
As with the other codecs already migrated to prepared packages (lz4, snappy), it would be better to support GzipCodec without the Hadoop native codec installed. Mirroring BuiltInGzipDecompressor, we can use the Java Deflater to implement a BuiltInGzipCompressor, as sketched below.
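A minimal sketch of that approach, using nothing beyond java.util.zip (illustrative only, not the actual BuiltInGzipCompressor code): a Deflater constructed with nowrap=true emits raw DEFLATE data, and the fixed gzip header plus the CRC32/size trailer are written by hand.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;
    import java.util.zip.Deflater;

    /** Illustrative pure-Java gzip compression; not the actual codec class. */
    public class PureJavaGzipSketch {
      // 10-byte gzip header: magic, CM=8 (deflate), flags/mtime/XFL/OS all zero.
      private static final byte[] GZIP_HEADER = {
          0x1f, (byte) 0x8b, Deflater.DEFLATED, 0, 0, 0, 0, 0, 0, 0};

      public static byte[] compress(byte[] data) throws IOException {
        // nowrap=true: raw DEFLATE output, no zlib framing; gzip framing is manual.
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(GZIP_HEADER);
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
          out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        writeLE(out, (int) crc.getValue());  // CRC32 of the uncompressed data
        writeLE(out, data.length);           // ISIZE: input length mod 2^32
        return out.toByteArray();
      }

      private static void writeLE(ByteArrayOutputStream out, int v) {
        for (int i = 0; i < 4; i++) {
          out.write((v >> (8 * i)) & 0xff);
        }
      }
    }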
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read, doing it before the RequestFactory
is created. A configuration sketch follows the list of changes below.
Adds
* A unit test in TestRequestFactory to verify the ACLs are set
on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
s3:PutObjectAcl in the generated role.
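For context, a minimal configuration sketch; the ACL value here is just an example (valid values are the AWS canned ACL names), and the class is illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class S3ACannedAclSketch {
      public static Configuration withDefaultAcl() {
        Configuration conf = new Configuration();
        // Applied to every object written by the S3A connector; with the fix,
        // it is read before the RequestFactory is created, so it takes effect.
        conf.set("fs.s3a.acl.default", "BucketOwnerFullControl");
        return conf;
      }
    }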
Contributed by Steve Loughran
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, making
them much faster against distant/slow stores.
On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.
After every test case, the FileSystem IOStatistics are logged
to provide information about what IO is taking place and
what its performance is.
There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload via the option
"scale.test.distcp.file.size.kb".
Set it to zero and the large file tests are skipped.
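A sketch of how a suite might resolve that option; the helper below is hypothetical, assuming the value arrives either as a -D system property or in the test configuration:

    import org.apache.hadoop.conf.Configuration;

    public class DistcpScaleSizeSketch {
      public static final String KEY = "scale.test.distcp.file.size.kb";

      /** Hypothetical helper: a -D system property wins over the configuration. */
      public static int fileSizeKb(Configuration conf, int defaultKb) {
        String sysProp = System.getProperty(KEY);
        return sysProp != null
            ? Integer.parseInt(sysProp)
            : conf.getInt(KEY, defaultKb);  // 0 => large file tests are skipped
      }
    }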
Contributed by Steve Loughran.
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Implements a high-performance version of the operation for S3
(CopyFromLocalOperation; see the usage sketch after this list)
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems
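A usage sketch of the operation through the public FileSystem API; the bucket and paths are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyFromLocalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem s3a = FileSystem.get(new URI("s3a://example-bucket/"), conf);
        // delSrc=false, overwrite=true; on S3A this now routes through the
        // high-performance CopyFromLocalOperation.
        s3a.copyFromLocalFile(false, true,
            new Path("file:///tmp/local-data.csv"),
            new Path("s3a://example-bucket/data/local-data.csv"));
      }
    }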
Contributed by: Bogdan Stolojan
This (big!) patch adds support for client-side encryption in AWS S3,
with keys managed by AWS-KMS.
Read the documentation in encryption.md very, very carefully before
use and consider it unstable.
S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":
fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>
You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console.
* Filesystem list/get status operations subtract 16 bytes from the length
of all files >= 16 bytes long to compensate for the padding which CSE
adds.
* The SDK always warns that the specific algorithm chosen is
deprecated. That algorithm is critical for ranged
GET requests (i.e. random IO) to work; ignore the warning.
* Unencrypted files CANNOT BE READ.
The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.
Contributed by Mehakmeet Singh
* Rebase trunk
* Fix to use FQDN and update config name
* Fix javac
* Style and trigger build
* Trigger Build after force push
* Trigger Build
* Fix config names
The S3A connector supports
"an auditor", a plugin which is invoked
at the start of every filesystem API call,
and whose issued "audit span" provides a context
for all REST operations against the S3 object store.
The standard auditor sets the HTTP Referrer header
on the requests with information about the API call,
such as process ID, operation name, path,
and even job ID.
If the S3 bucket is configured to log requests, this
information will be preserved there and so can be used
to analyze and troubleshoot storage IO.
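A configuration sketch; the option name follows the S3A auditing documentation and should be treated as an assumption to verify against your Hadoop release:

    import org.apache.hadoop.conf.Configuration;

    public class S3AAuditSketch {
      public static Configuration withAuditing() {
        Configuration conf = new Configuration();
        // Assumed option name: enables the auditor so each FS API call opens
        // an audit span whose details land in the HTTP Referrer header.
        conf.setBoolean("fs.s3a.audit.enabled", true);
        return conf;
      }
    }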
Contributed by Steve Loughran.
When the S3A and ABFS filesystems are closed,
their IOStatistics are logged at debug in the log:
org.apache.hadoop.fs.statistics.IOStatisticsLogging
Set `fs.iostatistics.logging.level` to `info` for the statistics
to be logged at info. (also: `warn` or `error` for even higher
log levels).
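A minimal sketch using the option named above:

    import org.apache.hadoop.conf.Configuration;

    public class IOStatsLoggingSketch {
      public static Configuration withInfoLevelStats() {
        Configuration conf = new Configuration();
        // Statistics are then logged at info, not debug, when the FS closes.
        conf.set("fs.iostatistics.logging.level", "info");
        return conf;
      }
    }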
Contributed by: Mehakmeet Singh
1. Puts NVDIMM at the end of the storage type enum to preserve compatibility (illustrative sketch below).
2. Adds a check to make sure the software layout version is satisfied.
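An illustrative sketch of why appending matters (this is not the real org.apache.hadoop.fs.StorageType definition): enum ordinals stay stable only if new constants go at the end.

    // Illustrative only. Anything persisted or wire-encoded by ordinal stays
    // valid because every pre-existing constant keeps its position.
    public enum StorageTypeSketch {
      RAM_DISK,  // ordinal 0, unchanged
      SSD,       // ordinal 1, unchanged
      DISK,      // ordinal 2, unchanged
      ARCHIVE,   // ordinal 3, unchanged
      NVDIMM     // appended last, taking the next free ordinal
    }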
Co-authored-by: su xu <kevinbrandon@163.com>
Co-authored-by: huangtianhua <huangtianhua223@gmail.com>
Co-authored-by: YaYun-Wang <34060507+YaYun-Wang@users.noreply.github.com>
Signed-off-by: Mingliang Liu <liuml07@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: Vinayakumar B <vinayakumarb@apache.org>
Change-Id: I3c58beef50730827a09b3c968e9ad637baa57d44
This reverts commit 394b9f7a5c.
Ref: HDFS-15995.
Had to revert this commit so we can commit HDFS-15566 (a critical bug preventing rolling upgrade to Hadoop 3.3).
Will rework this fix later.
Follow-up to HADOOP-13327, which changed S3A output stream hsync/hflush calls
to raise an exception.
Adds a new option fs.s3a.downgrade.syncable.exceptions.
When true, calls to Syncable hsync/hflush on S3A output streams will
log once at warn (for the entire process life, not just the stream), then
increment IOStats with the relevant operation counter.
With the downgrade option false (the default):
* IOStats are incremented.
* The UnsupportedOperationException currently raised includes a link to the
JIRA.
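A configuration sketch of the downgrade path:

    import org.apache.hadoop.conf.Configuration;

    public class SyncableDowngradeSketch {
      public static Configuration withDowngrade() {
        Configuration conf = new Configuration();
        // hsync()/hflush() on S3A streams then warn once per process and
        // increment IOStats, instead of raising UnsupportedOperationException.
        conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
        return conf;
      }
    }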
Contributed by Steve Loughran.
The ABFS Filesystem and its input and output streams now implement
the IOStatisticsSource interface and provide IOStatistics on
their interactions with Azure Storage.
This includes the min/max/mean durations of all REST API calls.
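A sketch of pulling the statistics off a stream; the container and path are placeholders, and retrieveIOStatistics is the null-safe extraction helper:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.statistics.IOStatistics;
    import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    public class AbfsStreamStatsSketch {
      public static void main(String[] args) throws Exception {
        Path path = new Path(
            "abfs://container@account.dfs.core.windows.net/data.bin");
        FileSystem fs = path.getFileSystem(new Configuration());
        try (FSDataInputStream in = fs.open(path)) {
          in.read();
          // Extract statistics from any IOStatisticsSource; null if unsupported.
          IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
          System.out.println(IOStatisticsLogging.ioStatisticsToString(stats));
        }
      }
    }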
Contributed by Mehakmeet Singh <mehakmeet.singh@cloudera.com>
Reapply of a change reverted first because it caused YARN failures and
then again because the commit message was incorrectly formatted
(and yet again because of the commit message format).
Signed-off-by: stack <stack@apache.org>
The S3A connector's rename() operation now raises FileNotFoundException if
the source doesn't exist, and a FileAlreadyExistsException if the destination
exists and is unsuitable for the source file/directory.
When renaming to a path which does not exist, the connector no longer checks
for the destination parent directory existing; instead it simply verifies
that there is no file immediately above the destination path.
This is needed to avoid race conditions with delete() and rename()
calls working on adjacent subdirectories.
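A test-style sketch of the tightened contract, using the LambdaTestUtils.intercept helper mentioned earlier; paths are placeholders, and note the base FileSystem contract traditionally returned false here rather than throwing:

    import java.io.FileNotFoundException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.test.LambdaTestUtils;

    public class RenameContractSketch {
      /** Renaming a missing source against S3A should now fail fast. */
      public static void expectMissingSourceToFail(FileSystem fs) throws Exception {
        LambdaTestUtils.intercept(FileNotFoundException.class,
            () -> fs.rename(new Path("/no-such-source"), new Path("/dest")));
      }
    }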
Contributed by Steve Loughran.
Adds an Abortable.abort() interface for streams to enable output streams to be terminated; this
is implemented by the S3A connector's output stream. It allows for commit protocols
to be implemented which commit/abort work by writing to the final destination and
using the abort() call to cancel any write which is not intended to be committed.
Consult the specification document for information about the interface and its use.
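A usage sketch, assuming the stream supports the capability (the probe string is taken from the specification and streams which lack it are expected to reject abort()):

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbortableWriteSketch {
      /** Write speculatively, then abandon: nothing becomes visible at the path. */
      public static void writeThenAbort(FileSystem fs, Path path) throws Exception {
        try (FSDataOutputStream out = fs.create(path, true)) {
          out.write("speculative task output".getBytes());
          if (out.hasCapability("fs.capability.outputstream.abortable")) {
            out.abort();  // discards the pending upload; close() is then a no-op
          }
        }
      }
    }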
Contributed by Jungtaek Lim and Steve Loughran.
This defines what output streams, and especially those which implement
Syncable, are meant to do, and documents where implementations (HDFS; S3)
don't. With tests.
The file:// FileSystem now supports Syncable if an application calls
FileSystem.setWriteChecksum(false) before creating a file, since checksumming
and Syncable.hsync() are incompatible.
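A minimal sketch of the file:// behavior described above (the path is a placeholder):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalHsyncSketch {
      public static void main(String[] args) throws Exception {
        FileSystem local = FileSystem.get(new URI("file:///"), new Configuration());
        // Must be called before create(): checksumming and hsync() don't mix.
        local.setWriteChecksum(false);
        try (FSDataOutputStream out = local.create(new Path("/tmp/wal.log"), true)) {
          out.write("record".getBytes());
          out.hsync();  // flushes through to the local disk
        }
      }
    }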
Contributed by Steve Loughran.
1. Puts NVDIMM at the end of the storage type enum to preserve compatibility.
2. Adds a check to make sure the software layout version is satisfied.
Co-authored-by: su xu <kevinbrandon@163.com>
Co-authored-by: huangtianhua <huangtianhua223@gmail.com>
Signed-off-by: Mingliang Liu <liuml07@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: Vinayakumar B <vinayakumarb@apache.org>
* core-default.xml updated so that fs.s3a.committer.magic.enabled = true
* CommitConstants updated to match
* All tests which previously enabled the magic committer now rely on
default settings. This helps make sure it is enabled.
* Docs cover the switch, mention that it is enabled, and explain why you may
want to disable it.
Note: this doesn't switch to using the committer; it just enables the path
rewriting magic which it depends on. A configuration sketch follows.
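A configuration sketch for anyone who needs the old behavior back:

    import org.apache.hadoop.conf.Configuration;

    public class MagicCommitterSwitchSketch {
      public static Configuration withoutMagicPaths() {
        Configuration conf = new Configuration();
        // Now defaults to true in core-default.xml; disable only if paths
        // containing "__magic" must not get special treatment.
        conf.setBoolean("fs.s3a.committer.magic.enabled", false);
        return conf;
      }
    }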
Contributed by Steve Loughran.
Caused by HADOOP-16830 and HADOOP-17271.
Fixes tests which fail intermittently based on configs; in
the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.
Contributed by Steve Loughran.
Change-Id: I65ba3f44444e59d298df25ac5c8dc5a8781dfb7d
Caused by HADOOP-16380 and HADOOP-17271.
Fixes tests which fail intermittently based on configs; in
the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.
Contributed by Steve Loughran.
This is the API and implementation classes of HADOOP-16830,
which allows callers to query IO object instances
(filesystems, streams, remote iterators, ...) and other classes
for statistics on their I/O usage: operation counts and min/max/mean
durations.
New Packages
org.apache.hadoop.fs.statistics.
Public API, including:
IOStatisticsSource
IOStatistics
IOStatisticsSnapshot (serializable to Java objects and JSON)
+helper classes for logging and integration
BufferedIOStatisticsInputStream
implements IOStatisticsSource and StreamCapabilities
BufferedIOStatisticsOutputStream
implements IOStatisticsSource, Syncable and StreamCapabilities
org.apache.hadoop.fs.statistics.impl
Implementation classes for internal use.
org.apache.hadoop.util.functional
functional programming support for RemoteIterators and
other operations which raise IOEs; all wrapper classes
implement and propagate IOStatisticsSource
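A sketch of the query pattern across the new API, assuming the helper methods in IOStatisticsSupport behave as the package docs describe:

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.statistics.IOStatistics;
    import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    public class IOStatsQuerySketch {
      /** Read a file, then capture a serializable snapshot of the stream's IO. */
      public static IOStatisticsSnapshot statsOfRead(FileSystem fs, Path path)
          throws Exception {
        try (FSDataInputStream in = fs.open(path)) {
          in.read();
          // Null-safe extraction from any IOStatisticsSource.
          IOStatistics live = IOStatisticsSupport.retrieveIOStatistics(in);
          // Snapshots are serializable (Java and JSON) and can be aggregated
          // across streams, making them suitable for job-level reporting.
          return IOStatisticsSupport.snapshotIOStatistics(live);
        }
      }
    }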
Contributed by Steve Loughran.