<p>The abfs connector has a critical bug fix <a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-18546">HADOOP-18546</a>: <i>ABFS. Disable purging list of in-progress reads in abfs stream close().</i></p>
<p>All users of the abfs connector in Hadoop releases 3.3.2+ MUST either upgrade or disable prefetching by setting <code>fs.azure.readaheadqueue.depth</code> to <code>0</code>.</p>
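<p>As an illustration, the prefetch mitigation can also be applied programmatically; the same option can equally be set in <code>core-site.xml</code>:</p>
<pre><code>import org.apache.hadoop.conf.Configuration;

// Mitigation for HADOOP-18521 on releases without the fix:
// disable ABFS prefetching by setting the readahead queue depth to zero.
Configuration conf = new Configuration();
conf.setInt("fs.azure.readaheadqueue.depth", 0);
</code></pre>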
<p>Consult the parent JIRA <a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-18521">HADOOP-18521</a> <i>ABFS ReadBufferManager buffer sharing across concurrent HTTP requests</i> for root cause analysis, details on what is affected, and mitigations.</p></section><section>
<p><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-18103">HADOOP-18103</a>. <i>High performance vectored read API in Hadoop</i></p>
<p>The <code>PositionedReadable</code> interface now includes an operation for Vectored IO (also known as Scatter/Gather IO):</p>
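<pre><code>void readVectored(List&lt;? extends FileRange&gt; ranges,
    IntFunction&lt;ByteBuffer&gt; allocate) throws IOException
</code></pre>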
<p>All the requested ranges are retrieved into the supplied byte buffers, possibly asynchronously and possibly in parallel, with results potentially arriving out of order.</p>
<ol style="list-style-type: decimal">
<li>The default implementation uses a series of <code>readFully()</code> calls, so delivers performance equivalent to issuing the same reads through the existing API.</li>
<li>The local filesystem uses Java native IO calls for higher performance reads than <code>readFully()</code>.</li>
<li>The S3A filesystem issues parallel HTTP GET requests in different threads.</li>
</ol>
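<p>As a minimal sketch of what a vectored read looks like to an application (the path, offsets and lengths below are illustrative only):</p>
<pre><code>import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadExample {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://example-bucket/data.orc");  // illustrative path
    FileSystem fs = path.getFileSystem(new Configuration());
    // Two ranges of the file: (offset, length) pairs.
    List&lt;FileRange&gt; ranges = Arrays.asList(
        FileRange.createFileRange(0, 4096),
        FileRange.createFileRange(1_048_576, 8192));
    try (FSDataInputStream in = fs.open(path)) {
      // The ranges may be fetched asynchronously and in parallel.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        // Each range exposes a future which completes once its data is ready.
        ByteBuffer data = range.getData().get();
        // ... process the buffer
      }
    }
  }
}
</code></pre>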
<p>Benchmarking of enhanced Apache ORC and Apache Parquet clients through <code>file://</code> and <code>s3a://</code> shows significant improvements in query performance.</p>
<h2><a name="Mapreduce:_Manifest_Committer_for_Azure_ABFS_and_google_GCS"></a>Mapreduce: Manifest Committer for Azure ABFS and Google GCS</h2>
<p>The new <i>Intermediate Manifest Committer</i> uses a manifest file to commit the work of successful task attempts, rather than renaming directories. Job commit is a matter of reading all the manifests, creating the destination directories (parallelized) and renaming the files, again in parallel.</p>
<p>This is both fast and correct on Azure Storage and Google GCS, and should be used there instead of the classic v1/v2 file output committers.</p>
<p>It is also safe to use on HDFS, where it should be faster than the v1 committer. It is, however, optimized for cloud storage, where list and rename operations are significantly slower; on HDFS the benefits may be smaller.</p>
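<p>As a sketch of how a job can opt in through the output committer factory mechanism, assuming the factory class name given in the manifest committer documentation linked below:</p>
<pre><code>import org.apache.hadoop.conf.Configuration;

// Sketch: bind the manifest committer for abfs:// output paths.
// The factory class name is taken from the manifest committer
// documentation; consult it for the authoritative settings.
Configuration conf = new Configuration();
conf.set("mapreduce.outputcommitter.factory.scheme.abfs",
    "org.apache.hadoop.fs.azurebfs.commit.AzureManifestCommitterFactory");
</code></pre>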
<p>More details are available in the <a href="./hadoop-mapreduce-client/hadoop-mapreduce-client-core/manifest_committer.html">manifest committer</a> documentation.</p></section><section>
<p>A number of Datanode configuration options can be changed without having to restart the datanode. This makes it possible to tune deployment configurations without cluster-wide datanode restarts.</p>
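<p>For example, after editing <code>hdfs-site.xml</code> on a datanode's host, the change can be applied and monitored with the <code>dfsadmin</code> reconfiguration commands (hostname and port here are illustrative):</p>
<pre><code>hdfs dfsadmin -reconfig datanode dn1.example.com:9867 start
hdfs dfsadmin -reconfig datanode dn1.example.com:9867 status
</code></pre>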
<p>See <a class="externalLink" href="https://github.com/apache/hadoop/blob/branch-3.3.5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L346-L361">DataNode.java</a> for the list of dynamically reconfigurable attributes.</p></section><section>
<p>A lot of dependencies have been upgraded to address recent CVEs. Many of the CVEs were not actually exploitable through Hadoop, so much of this work is simply due diligence. However, applications which have these libraries on their classpath may be vulnerable, and the upgrades should also reduce the number of false positives reported by security scanners.</p>
<p>We have not been able to upgrade every single dependency to its latest version; some of those changes are fundamentally incompatible. If you have concerns about the state of a specific library, consult the Apache JIRA issue tracker to see whether an issue has been filed, whether discussions have taken place about the library in question, and whether a fix is already in the pipeline. <i>Please don’t file new JIRAs about dependency-X.Y.Z having a CVE without searching for any existing issue first.</i></p>
<p>As an open-source project, contributions in this area are always welcome, especially in testing the active branches, testing applications downstream of those branches, and checking whether updated dependencies trigger regressions.</p>
<h2><a name="Security_Advisory"></a>Security Advisory</h2>
<p>Hadoop HDFS is a distributed filesystem allowing remote callers to read and write data.</p>
<p>Hadoop YARN is a distributed job submission/execution engine allowing remote callers to submit arbitrary work into the cluster.</p>
<p>Unless a Hadoop cluster is deployed with <a href="./hadoop-project-dist/hadoop-common/SecureMode.html">caller authentication with Kerberos</a>, anyone with network access to the servers has unrestricted access to the data and the ability to run whatever code they want in the system.</p>
<p>In production, there are generally three deployment patterns which can, with care, keep data and computing resources private.</p>
<ol style="list-style-type: decimal">
<li>Physical cluster: <i>configure Hadoop security</i>, usually bonded to the enterprise Kerberos/Active Directory systems. Good.</li>
<li>Cloud: transient or persistent single or multiple user/tenant cluster with private VLAN <i>and security</i>. Good. Consider <a class="externalLink" href="https://knox.apache.org/">Apache Knox</a> for managing remote access to the cluster.</li>
<li>Cloud: transient single user/tenant cluster with private VLAN <i>and no security at all</i>. Requires careful network configuration, as this is the sole means of securing the cluster. Consider <a class="externalLink" href="https://knox.apache.org/">Apache Knox</a> for managing remote access to the cluster.</li>
</ol>
<p><i>If you deploy a Hadoop cluster in-cloud without security, and without configuring a VLAN to restrict access to trusted users, you are implicitly sharing your data and computing resources with anyone with network access.</i></p>
<p>If you do deploy an insecure cluster this way, port scanners will inevitably find it and submit crypto-mining jobs. If this happens to you, please do not report this as a CVE or security issue: it is <i>utterly predictable</i>. Secure <i>your cluster</i> if you want it to remain exclusively <i>your cluster</i>.</p>
<p>Finally, if you are using Hadoop as a service deployed/managed by someone else, do determine what security their products offer and make sure it meets your requirements.</p>
<p>The Hadoop documentation includes the information you need to get started using Hadoop. Begin with the <a href="./hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a> which shows you how to set up a single-node Hadoop installation. Then move on to the <a href="./hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a> to learn how to set up a multi-node Hadoop installation.</p>
<p>Before deploying Hadoop in production, read <a href="./hadoop-project-dist/hadoop-common/SecureMode.html">Hadoop in Secure Mode</a>, and follow its instructions to secure your cluster.</p></section>