<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-03-07
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT &#x2013; class org.apache.hadoop.fs.FSDataOutputStreamBuilder</title>
<style type="text/css" media="all">
@import url("../css/maven-base.css");
@import url("../css/maven-theme.css");
@import url("../css/site.css");
</style>
<link rel="stylesheet" href="../css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230307" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2023-03-07
&nbsp;| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
<li class="none">
<a href="../../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
</li>
<li class="none">
<a href="../../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="../images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- ============================================================= -->
<!-- CLASS: FSDataOutputStreamBuilder -->
<!-- ============================================================= -->
<h1>class <code>org.apache.hadoop.fs.FSDataOutputStreamBuilder</code></h1>
<ul>
<li><a href="#Invariants">Invariants</a></li>
<li><a href="#Implementation-agnostic_parameters.">Implementation-agnostic parameters.</a>
<ul>
<li><a href="#FSDataOutputStreamBuilder_create.28.29"> FSDataOutputStreamBuilder create()</a></li>
<li><a href="#FSDataOutputStreamBuilder_append.28.29"> FSDataOutputStreamBuilder append()</a></li>
<li><a href="#FSDataOutputStreamBuilder_overwrite.28boolean_overwrite.29"> FSDataOutputStreamBuilder overwrite(boolean overwrite)</a></li>
<li><a href="#FSDataOutputStreamBuilder_permission.28FsPermission_permission.29"> FSDataOutputStreamBuilder permission(FsPermission permission)</a></li>
<li><a href="#FSDataOutputStreamBuilder_bufferSize.28int_bufSize.29"> FSDataOutputStreamBuilder bufferSize(int bufSize)</a></li>
<li><a href="#FSDataOutputStreamBuilder_replication.28short_replica.29"> FSDataOutputStreamBuilder replication(short replica)</a></li>
<li><a href="#FSDataOutputStreamBuilder_blockSize.28long_size.29"> FSDataOutputStreamBuilder blockSize(long size)</a></li>
<li><a href="#FSDataOutputStreamBuilder_recursive.28.29"> FSDataOutputStreamBuilder recursive()</a></li>
<li><a href="#FSDataOutputStreamBuilder_progress.28Progresable_prog.29"> FSDataOutputStreamBuilder progress(Progresable prog)</a></li>
<li><a href="#FSDataOutputStreamBuilder_checksumOpt.28ChecksumOpt_chksumOpt.29"> FSDataOutputStreamBuilder checksumOpt(ChecksumOpt chksumOpt)</a></li>
<li><a href="#Set_optional_or_mandatory_parameters">Set optional or mandatory parameters</a></li></ul></li>
<li><a href="#HDFS-specific_parameters.">HDFS-specific parameters.</a>
<ul>
<li><a href="#FSDataOutpuStreamBuilder_favoredNodes.28InetSocketAddress.5B.5D_nodes.29">FSDataOutpuStreamBuilder favoredNodes(InetSocketAddress[] nodes)</a></li>
<li><a href="#FSDataOutputStreamBuilder_syncBlock.28.29">FSDataOutputStreamBuilder syncBlock()</a></li>
<li><a href="#FSDataOutputStreamBuilder_lazyPersist.28.29">FSDataOutputStreamBuilder lazyPersist()</a></li>
<li><a href="#FSDataOutputStreamBuilder_newBlock.28.29">FSDataOutputStreamBuilder newBlock()</a></li>
<li><a href="#FSDataOutputStreamBuilder_noLocalWrite.28.29">FSDataOutputStreamBuilder noLocalWrite()</a></li>
<li><a href="#FSDataOutputStreamBuilder_ecPolicyName.28.29">FSDataOutputStreamBuilder ecPolicyName()</a></li>
<li><a href="#FSDataOutputStreamBuilder_replicate.28.29">FSDataOutputStreamBuilder replicate()</a></li></ul></li>
<li><a href="#Builder_interface">Builder interface</a>
<ul>
<li><a href="#FSDataOutputStream_build.28.29"> FSDataOutputStream build()</a></li></ul></li>
<li><a href="#S3A-specific_options"> S3A-specific options</a>
<ul>
<li><a href="#fs.s3a.create.performance">fs.s3a.create.performance</a></li>
<li><a href="#fs.s3a.create.header_User-supplied_header_support">fs.s3a.create.header User-supplied header support</a></li></ul></li></ul>
<p>Builder pattern for <code>FSDataOutputStream</code> and its subclasses. It is used to create a new file or open an existing file on <code>FileSystem</code> for writing.</p><section>
<h2><a name="Invariants"></a>Invariants</h2>
<p>The <code>FSDataOutputStreamBuilder</code> interface neither validates parameters nor modifies the state of <code>FileSystem</code> until <code>build()</code> is invoked.</p></section><section>
<h2><a name="Implementation-agnostic_parameters."></a>Implementation-agnostic parameters.</h2><section>
<h3><a name="FSDataOutputStreamBuilder_create.28.29"></a><a name="Builder.create"></a> <code>FSDataOutputStreamBuilder create()</code></h3>
<p>Specify that the builder is to create a new file on <code>FileSystem</code>, equivalent to <code>CreateFlag#CREATE</code>.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_append.28.29"></a><a name="Builder.append"></a> <code>FSDataOutputStreamBuilder append()</code></h3>
<p>Specify that the builder is to append to an existing file on <code>FileSystem</code>, equivalent to <code>CreateFlag#APPEND</code>.</p></section><section>
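<p>For illustration, a minimal sketch of appending through the builder (assuming <code>fs</code> is a <code>FileSystem</code> that supports append, <code>path</code> refers to an existing file, and <code>extraBytes</code> is a <code>byte[]</code> already in scope):</p>
<div class="source">
<div class="source">
<pre>// Sketch only: appendFile() returns a builder with the APPEND flag set.
try (FSDataOutputStream out = fs.appendFile(path).build()) {
  out.write(extraBytes);
}
</pre></div></div>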
<h3><a name="FSDataOutputStreamBuilder_overwrite.28boolean_overwrite.29"></a><a name="Builder.overwrite"></a> <code>FSDataOutputStreamBuilder overwrite(boolean overwrite)</code></h3>
<p>Specify whether the builder is to overwrite an existing file. If <code>overwrite==true</code> is given, an existing file is truncated, equivalent to <code>CreateFlag#OVERWRITE</code>.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_permission.28FsPermission_permission.29"></a><a name="Builder.permission"></a> <code>FSDataOutputStreamBuilder permission(FsPermission permission)</code></h3>
<p>Set permission for the file.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_bufferSize.28int_bufSize.29"></a><a name="Builder.bufferSize"></a> <code>FSDataOutputStreamBuilder bufferSize(int bufSize)</code></h3>
<p>Set the size of the buffer to be used.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_replication.28short_replica.29"></a><a name="Builder.replication"></a> <code>FSDataOutputStreamBuilder replication(short replica)</code></h3>
<p>Set the replication factor.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_blockSize.28long_size.29"></a><a name="Builder.blockSize"></a> <code>FSDataOutputStreamBuilder blockSize(long size)</code></h3>
<p>Set block size in bytes.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_recursive.28.29"></a><a name="Builder.recursive"></a> <code>FSDataOutputStreamBuilder recursive()</code></h3>
<p>Create parent directories if they do not exist.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_progress.28Progresable_prog.29"></a><a name="Builder.progress"></a> <code>FSDataOutputStreamBuilder progress(Progresable prog)</code></h3>
<p>Set the facility for reporting progress.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_checksumOpt.28ChecksumOpt_chksumOpt.29"></a><a name="Builder.checksumOpt"></a> <code>FSDataOutputStreamBuilder checksumOpt(ChecksumOpt chksumOpt)</code></h3>
<p>Set the checksum options.</p></section><section>
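<p>As an illustrative sketch, the implementation-agnostic methods above can be chained on the builder returned by <code>FileSystem.createFile(path)</code>; <code>fs</code>, <code>path</code> and <code>progressCallback</code> (an <code>org.apache.hadoop.util.Progressable</code>) are assumed to be in scope and the values are arbitrary:</p>
<div class="source">
<div class="source">
<pre>FSDataOutputStream out = fs.createFile(path)
    .permission(new FsPermission((short) 0644))  // file permission
    .bufferSize(4096)                            // I/O buffer size
    .replication((short) 3)                      // replication factor
    .blockSize(128 * 1024 * 1024)                // block size in bytes
    .recursive()                                 // create missing parent directories
    .progress(progressCallback)                  // progress reporting callback
    .build();
</pre></div></div>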
<h3><a name="Set_optional_or_mandatory_parameters"></a>Set optional or mandatory parameters</h3>
<div class="source">
<div class="source">
<pre>FSDataOutputStreamBuilder opt(String key, ...)
FSDataOutputStreamBuilder must(String key, ...)
</pre></div></div>
<p>Set optional or mandatory parameters on the builder. Using <code>opt()</code> or <code>must()</code>, a client can specify FS-specific parameters without inspecting the concrete type of <code>FileSystem</code>.</p>
<div class="source">
<div class="source">
<pre>// Don't
if (fs instanceof FooFileSystem) {
  FooFileSystem foo = (FooFileSystem) fs;
  out = foo.createFile(path)
    .optionA()
    .optionB(&quot;value&quot;)
    .cache()
    .build();
} else if (fs instanceof BarFileSystem) {
  ...
}

// Do
out = fs.createFile(path)
  .permission(perm)
  .bufferSize(bufSize)
  .opt(&quot;foofs:option.a&quot;, true)
  .opt(&quot;foofs:option.b&quot;, &quot;value&quot;)
  .opt(&quot;barfs:cache&quot;, true)
  .must(&quot;foofs:cache&quot;, true)
  .must(&quot;barfs:cache-size&quot;, 256 * 1024 * 1024)
  .build();
</pre></div></div>
<section>
<h4><a name="Implementation_Notes"></a>Implementation Notes</h4>
<p>The concrete <code>FileSystem</code> and/or <code>FSDataOutputStreamBuilder</code> implementation MUST verify that implementation-agnostic parameters (e.g. &quot;syncable&quot;) or implementation-specific parameters (e.g. &quot;foofs:cache&quot;) are supported. <code>FileSystem</code> will satisfy optional parameters (set via <code>opt(key, &#x2026;)</code>) on a best-effort basis. If a mandatory parameter (set via <code>must(key, &#x2026;)</code>) cannot be satisfied by the <code>FileSystem</code>, <code>IllegalArgumentException</code> must be thrown in <code>build()</code>.</p>
<p>Conflicts between parameters set by builder methods (e.g. <code>bufferSize()</code>) and those set via <code>opt()</code>/<code>must()</code> are resolved as follows:</p>
<blockquote>
<p>The last option specified defines the value and its optional/mandatory state.</p>
</blockquote></section></section></section><section>
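<p>A sketch of this contract, using purely hypothetical option keys for illustration (<code>fs</code>, <code>path</code> and <code>out</code> are assumed to be in scope):</p>
<div class="source">
<div class="source">
<pre>try {
  out = fs.createFile(path)
      .opt(&quot;example:unsupported-option&quot;, true)  // best effort: ignored if not understood
      .must(&quot;example:required-option&quot;, true)    // mandatory: must be understood
      .build();
} catch (IllegalArgumentException e) {
  // raised at build() time when a must() key cannot be satisfied
}
</pre></div></div>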
<h2><a name="HDFS-specific_parameters."></a>HDFS-specific parameters.</h2>
<p><code>HdfsDataOutputStreamBuilder extends FSDataOutputStreamBuilder</code> provides additional HDFS-specific parameters to further customize file creation and append behavior.</p><section>
<h3><a name="FSDataOutpuStreamBuilder_favoredNodes.28InetSocketAddress.5B.5D_nodes.29"></a><code>FSDataOutpuStreamBuilder favoredNodes(InetSocketAddress[] nodes)</code></h3>
<p>Set favored DataNodes for new blocks.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_syncBlock.28.29"></a><code>FSDataOutputStreamBuilder syncBlock()</code></h3>
<p>Force closed blocks to be written to the disk device. See <code>CreateFlag#SYNC_BLOCK</code>.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_lazyPersist.28.29"></a><code>FSDataOutputStreamBuilder lazyPersist()</code></h3>
<p>Create the block on transient storage if possible.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_newBlock.28.29"></a><code>FSDataOutputStreamBuilder newBlock()</code></h3>
<p>Append data to a new block instead of the end of the last partial block.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_noLocalWrite.28.29"></a><code>FSDataOutputStreamBuilder noLocalWrite()</code></h3>
<p>Advise that a block replica NOT be written to the local DataNode.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_ecPolicyName.28.29"></a><code>FSDataOutputStreamBuilder ecPolicyName()</code></h3>
<p>Enforce the file to be a striped file with the erasure coding policy <code>policyName</code>, regardless of the replication or erasure coding policy of its parent directory.</p></section><section>
<h3><a name="FSDataOutputStreamBuilder_replicate.28.29"></a><code>FSDataOutputStreamBuilder replicate()</code></h3>
<p>Enforce the file to be a replicated file, regardless of the replication or erasure coding policy of its parent directory.</p></section></section><section>
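<p>A sketch of how these parameters might be combined (assuming <code>dfs</code> is a <code>DistributedFileSystem</code>, whose <code>createFile()</code> returns an <code>HdfsDataOutputStreamBuilder</code>):</p>
<div class="source">
<div class="source">
<pre>FSDataOutputStream out = dfs.createFile(path)
    .replicate()      // plain replication, regardless of the parent directory's EC policy
    .syncBlock()      // flush closed blocks to the disk device
    .noLocalWrite()   // do not place a replica on the local DataNode
    .build();
// ecPolicyName(...) requests a specific erasure coding policy instead of
// replicate(); the two requests are mutually exclusive.
</pre></div></div>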
<h2><a name="Builder_interface"></a>Builder interface</h2><section>
<h3><a name="FSDataOutputStream_build.28.29"></a><a name="Builder.build"></a> <code>FSDataOutputStream build()</code></h3>
<p>Create a new file or append to an existing file on the underlying <code>FileSystem</code>, and return an <code>FSDataOutputStream</code> for writing.</p><section>
<h4><a name="Preconditions"></a>Preconditions</h4>
<p>The following combinations of parameters are not supported:</p>
<div class="source">
<div class="source">
<pre>if APPEND|OVERWRITE: raise HadoopIllegalArgumentException
if CREATE|APPEND|OVERWRITE: raise HadoopIllegalArgumentException
</pre></div></div>
<p><code>FileSystem</code> may reject the request for other reasons and throw an <code>IOException</code>; see <code>FileSystem#create(path, ...)</code> and <code>FileSystem#append()</code>.</p></section><section>
<h4><a name="Postconditions"></a>Postconditions</h4>
<div class="source">
<div class="source">
<pre>FS' where :
   FS'.Files[p] == []
   ancestors(p) is-subset-of FS'.Directories'

result = FSDataOutputStream
</pre></div></div>
<p>The result is an <code>FSDataOutputStream</code> to be used to write data to the filesystem.</p></section></section></section><section>
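<p>A usage sketch (with assumed <code>fs</code> and <code>path</code>): nothing is validated or changed on the filesystem until <code>build()</code> is invoked, at which point the file is created (or opened for append) and the stream is returned:</p>
<div class="source">
<div class="source">
<pre>try (FSDataOutputStream out = fs.createFile(path)
        .recursive()
        .build()) {      // the file only comes into existence here
  out.writeUTF(&quot;example data&quot;);
}
</pre></div></div>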
<h2><a name="S3A-specific_options"></a><a name="s3a"></a> S3A-specific options</h2>
<p>Here are the custom options which the S3A Connector supports.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Name </th>
<th> Type </th>
<th> Meaning </th></tr>
</thead><tbody>
<tr class="b">
<td> <code>fs.s3a.create.performance</code> </td>
<td> <code>boolean</code> </td>
<td> create a file with maximum performance </td></tr>
<tr class="a">
<td> <code>fs.s3a.create.header</code> </td>
<td> <code>string</code> </td>
<td> prefix for user supplied headers </td></tr>
</tbody>
</table><section>
<h3><a name="fs.s3a.create.performance"></a><code>fs.s3a.create.performance</code></h3>
<p>Prioritize file creation performance over safety checks for filesystem consistency.</p>
<p>This:</p>
<ol>
<li>Skips the <code>LIST</code> call which checks that the file is not being created over an existing directory. Risk: a file is created over a directory.</li>
<li>Ignores the overwrite flag.</li>
<li>Never issues a <code>DELETE</code> call to delete parent directory markers.</li>
</ol>
<p>It is possible to probe an S3A Filesystem instance for this capability through the <code>hasPathCapability(path, &quot;fs.s3a.create.performance&quot;)</code> check.</p>
<p>Creating files with this option over existing directories is likely to make S3A filesystem clients behave inconsistently.</p>
<p>Operations optimized for directories (e.g. listing calls) are likely to see the directory tree, not the file; operations optimized for files (<code>getFileStatus()</code>, <code>isFile()</code>) are more likely to see the file. The exact form of the inconsistencies, and which operations/parameters trigger them, are undefined and may change between even minor releases.</p>
<p>Using this option is the equivalent of pressing and holding down the &#x201c;Electronic Stability Control&#x201d; button on a rear-wheel drive car for five seconds: the safety checks are off. Things will be faster if the driver knows what they are doing. If they don&#x2019;t, the fact that they held the button down will be used as evidence at the inquest as proof that they made a conscious decision to choose speed over safety and that the outcome was their own fault.</p>
<p>Accordingly: <i>Use if and only if you are confident that the conditions are met.</i></p></section><section>
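<p>A sketch of a guarded use of the option (with assumed <code>fs</code>, <code>path</code> and <code>out</code>), probing the capability first and then requesting it through the builder&#x2019;s <code>must()</code> mechanism; whether it is honoured depends on the connector version in use:</p>
<div class="source">
<div class="source">
<pre>if (fs.hasPathCapability(path, &quot;fs.s3a.create.performance&quot;)) {
  out = fs.createFile(path)
      .must(&quot;fs.s3a.create.performance&quot;, true)  // skip the safety checks for speed
      .build();
} else {
  out = fs.createFile(path).build();             // fall back to the normal create path
}
</pre></div></div>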
<h3><a name="fs.s3a.create.header_User-supplied_header_support"></a><code>fs.s3a.create.header</code> User-supplied header support</h3>
<p>Options with the prefix <code>fs.s3a.create.header.</code> will be added to the S3 object metadata as &#x201c;user defined metadata&#x201d;. This metadata is visible to all applications. It can also be retrieved through the FileSystem/FileContext <code>listXAttrs()</code> and <code>getXAttrs()</code> API calls with the prefix <code>header.</code></p>
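<p>For example (a sketch only; the header name <code>example-key</code> is arbitrary and the option is passed through the builder&#x2019;s <code>must()</code>/<code>opt()</code> mechanism), a header can be attached at create time and read back later through the XAttr APIs:</p>
<div class="source">
<div class="source">
<pre>// Attach a user-defined header when creating the object.
out = fs.createFile(path)
    .must(&quot;fs.s3a.create.header.example-key&quot;, &quot;example-value&quot;)
    .build();
out.close();

// Later: the header surfaces as an XAttr with the &quot;header.&quot; prefix.
Map&lt;String, byte[]&gt; attrs = fs.getXAttrs(path);
byte[] value = attrs.get(&quot;header.example-key&quot;);
</pre></div></div>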
<p>When an object is renamed, the metadata is propagated to the newly created copy.</p>
<p>It is possible to probe an S3A Filesystem instance for this capability through the <code>hasPathCapability(path, &quot;fs.s3a.create.header&quot;)</code> check.</p></section></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>