Merge r1410998 through r1412282 from trunk.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/HDFS-2802@1412297 13f79535-47bb-0310-9956-ffa450edef68
Commit 3dd84fcef7 — Tsz-wo Sze, 2012-11-21 21:08:45 +00:00
38 changed files with 644 additions and 440 deletions


@ -132,6 +132,12 @@ Trunk (Unreleased)
HADOOP-9004. Allow security unit tests to use external KDC. (Stephen Chu
via suresh)
HADOOP-6616. Improve documentation for rack awareness. (Adam Faris via
jghoman)
HADOOP-9075. FileContext#FSLinkResolver should be made static.
(Arpit Agarwal via suresh)
BUG FIXES
HADOOP-8177. MBeans shouldn't try to register when it fails to create MBeanName.
@ -371,6 +377,9 @@ Release 2.0.3-alpha - Unreleased
HADOOP-9035. Generalize setup of LoginContext (daryn via bobby)
HADOOP-9042. Add a test for umask in FileSystemContractBaseTest.
(Colin McCabe via eli)
OPTIMIZATIONS
HADOOP-8866. SampleQuantiles#query is O(N^2) instead of O(N). (Andrew Wang
@ -435,6 +444,9 @@ Release 2.0.3-alpha - Unreleased
HADOOP-6607. Add different variants of non caching HTTP headers. (tucu)
HADOOP-9049. DelegationTokenRenewer needs to be Singleton and FileSystems
should register/deregister to/from. (Karthik Kambatla via tomwhite)
Release 2.0.2-alpha - 2012-09-07
INCOMPATIBLE CHANGES
@ -1137,6 +1149,9 @@ Release 0.23.6 - UNRELEASED
BUG FIXES
HADOOP-9072. Hadoop-Common-0.23-Build Fails to build in Jenkins
(Robert Parker via tgraves)
Release 0.23.5 - UNRELEASED
INCOMPATIBLE CHANGES


@ -1292,23 +1292,139 @@
<section>
<title>Hadoop Rack Awareness</title>
<p>The HDFS and the Map/Reduce components are rack-aware.</p>
<p>The <code>NameNode</code> and the <code>JobTracker</code> obtains the
<code>rack id</code> of the slaves in the cluster by invoking an API
<a href="ext:api/org/apache/hadoop/net/dnstoswitchmapping/resolve
">resolve</a> in an administrator configured
module. The API resolves the slave's DNS name (also IP address) to a
rack id. What module to use can be configured using the configuration
item <code>net.topology.node.switch.mapping.impl</code>. The default
implementation of the same runs a script/command configured using
<code>net.topology.script.file.name</code>. If topology.script.file.name is
not set, the rack id <code>/default-rack</code> is returned for any
passed IP address. The additional configuration in the Map/Reduce
part is <code>mapred.cache.task.levels</code> which determines the number
of levels (in the network topology) of caches. So, for example, if it is
the default value of 2, two levels of caches will be constructed -
one for hosts (host -> task mapping) and another for racks
(rack -> task mapping).
<p>
Both HDFS and Map/Reduce components are rack-aware. HDFS block placement will use rack
awareness for fault tolerance by placing one block replica on a different rack. This provides
data availability in the event of a network switch failure within the cluster. The JobTracker uses rack
awareness to reduce network transfers of HDFS data blocks by attempting to schedule tasks on datanodes with a local
copy of the needed HDFS blocks. If the tasks cannot be scheduled on the datanodes
containing the needed HDFS blocks, then the tasks will be scheduled on the same rack to reduce network transfers where possible.
</p>
<p>The NameNode and the JobTracker obtain the rack id of the cluster slaves by invoking either
an external script or a Java class, as specified by the configuration files. Whether a Java
class or an external script is used for topology, the output must adhere to the Java
<a href="ext:api/org/apache/hadoop/net/dnstoswitchmapping/resolve">DNSToSwitchMapping</a>
interface. The interface expects a one-to-one correspondence to be maintained,
with the topology information in the format '/myrack/myhost', where '/' is the topology
delimiter, 'myrack' is the rack identifier, and 'myhost' is the individual host. Assuming
a single /24 subnet per rack, one could use the format '/192.168.100.0/192.168.100.5' as a
unique rack-host topology mapping.
</p>
<p>
To use a Java class for topology mapping, the class name is specified by the
<code>topology.node.switch.mapping.impl</code> parameter in the configuration file.
An example, NetworkTopology.java, is included with the Hadoop distribution and can be customized
by the Hadoop administrator. If it is not included with your distribution, NetworkTopology.java can also be found in the Hadoop
<a href="http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java?view=markup">
subversion tree</a>. Using a Java class instead of an external script has a slight performance benefit in
that Hadoop does not need to fork an external process when a new slave node registers itself with the JobTracker or NameNode.
As this class is only used during slave node registration, the performance benefit is limited.
</p>
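<p>
As an illustrative aside that is not part of this patch, the sketch below shows the general
shape of such a Java mapping. The class name SubnetRackMapping is made up for the example;
like the Python script further down, it assumes one /24 subnet per rack and derives the rack
id from the first three octets of an IPv4 address. It deliberately uses plain arrays rather
than the real DNSToSwitchMapping signature (which uses java.util.List) to keep the sketch
self-contained; a real plug-in would implement that interface and be named by the topology
mapping parameter described above.
</p>
<source>
<code>
// Illustrative sketch only: one rack path is returned per input address, in order.
public class SubnetRackMapping {
  public String[] resolve(String[] names) {
    String[] racks = new String[names.length];
    int i = 0;
    for (String name : names) {
      if (name.matches("\\d+\\.\\d+\\.\\d+\\.\\d+")) {
        // e.g. 192.168.100.5 becomes /192.168.100.0, one rack per /24 subnet
        int lastDot = name.lastIndexOf('.');
        racks[i] = "/" + name.substring(0, lastDot) + ".0";
      } else {
        // host names or anything unparsable fall back to the default rack
        racks[i] = "/default-rack";
      }
      i++;
    }
    return racks;
  }
}
</code>
</source>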
<p>
If an external script is used, it is specified with the
<code>topology.script.file.name</code> parameter in the configuration files. Unlike the Java
class, the external topology script is not included with the Hadoop distribution and must be provided by the
administrator. Hadoop sends multiple IP addresses in ARGV when forking the topology script. The
number of IP addresses sent to the topology script is controlled by <code>net.topology.script.number.args</code>
and defaults to 100. If <code>net.topology.script.number.args</code> were changed to 1, a topology script would
be forked for each IP submitted by datanodes and/or tasktrackers. Below are example topology scripts.
</p>
<section>
<title>Python example</title>
<source>
<code>
#!/usr/bin/python
# This script makes assumptions about the physical environment.
#  1) each rack is its own layer 3 network with a /24 subnet, which could be typical where each rack has its own
#     switch with uplinks to a central core router.
#
#             +-----------+
#             |core router|
#             +-----------+
#            /             \
#   +-----------+        +-----------+
#   |rack switch|        |rack switch|
#   +-----------+        +-----------+
#   | data node |        | data node |
#   +-----------+        +-----------+
#   | data node |        | data node |
#   +-----------+        +-----------+
#
#  2) the topology script gets a list of IPs as input, calculates the network address, and prints '/network_address' for each.

import netaddr
import sys

sys.argv.pop(0)                              # discard name of topology script from argv list as we just want IP addresses

netmask = '255.255.255.0'                    # set netmask to what's being used in your environment. The example uses a /24

for ip in sys.argv:                          # loop over list of datanode IPs
    address = '{0}/{1}'.format(ip, netmask)  # format address string as 'ip/netmask' so netaddr can parse it
    try:
        network_address = netaddr.IPNetwork(address).network  # calculate the network address
        print "/{0}".format(network_address)                  # print the rack id, e.g. /192.168.100.0
    except:
        print "/rack-unknown"                # print catch-all value if unable to calculate a network address
</code>
</source>
</section>
<section>
<title>Bash example</title>
<source>
<code>
#!/bin/bash
# Here's a bash example to show just how simple these scripts can be.
# Assuming we have a flat network with everything on a single switch, we can fake a rack topology.
# This could occur in a lab environment where we have limited nodes, like 2-8 physical machines on an unmanaged switch.
# This may also apply to multiple virtual machines running on the same physical hardware.
# The number of machines isn't important; what matters is that we are faking a network topology where there isn't one.
#
#       +----------+    +--------+
#       |jobtracker|    |datanode|
#       +----------+    +--------+
#              \          /
#  +--------+  +--------+  +--------+
#  |datanode|--| switch |--|datanode|
#  +--------+  +--------+  +--------+
#              /          \
#       +--------+    +--------+
#       |datanode|    |namenode|
#       +--------+    +--------+
#
# With this network topology, we are treating each host as a rack. This is being done by taking the last octet
# in the datanode's IP and prepending it with the word '/rack-'. The advantage of doing this is that HDFS
# can create its 'off-rack' block copy.
# 1) 'echo $@' will echo all ARGV values to xargs.
# 2) 'xargs' will enforce that we print a single argv value per line.
# 3) 'awk' will split fields on dots and append the last field to the string '/rack-'. If awk
#    fails to split on four dots, it will still print '/rack-' followed by the last field value.

echo $@ | xargs -n 1 | awk -F '.' '{print "/rack-"$NF}'
</code>
</source>
</section>
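<p>
Purely as an illustration of how the parameters described above fit together (this snippet is
not part of the patch, and the script path is a made-up example), the same settings can also
be applied programmatically to a Hadoop Configuration object:
</p>
<source>
<code>
import org.apache.hadoop.conf.Configuration;

public class TopologyConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Point Hadoop at an administrator-provided topology script (hypothetical path).
    conf.set("topology.script.file.name", "/etc/hadoop/topology.sh");
    // Send at most 100 addresses per invocation of the script, the documented default.
    conf.setInt("net.topology.script.number.args", 100);
    System.out.println("Topology script: " + conf.get("topology.script.file.name"));
  }
}
</code>
</source>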
<p>
If <code>topology.script.file.name</code> or <code>topology.node.switch.mapping.impl</code> is
not set, the rack id '/default-rack' is returned for any passed IP address.
While this behavior appears desirable, it can cause issues with HDFS block replication:
the default behavior is to write one replicated block off rack, which is not possible
when there is only a single rack named '/default-rack'.
</p>
<p>
An additional configuration setting is <code>mapred.cache.task.levels</code>, which determines
the number of levels (in the network topology) of caches. So, for example, if it has the
default value of 2, two levels of caches will be constructed - one for hosts
(host -> task mapping) and another for racks (rack -> task mapping), giving us our one-to-one
mapping of '/myrack/myhost'.
</p>
</section>


@ -33,7 +33,7 @@ import org.apache.hadoop.util.Time;
* A daemon thread that waits for the next file system to renew.
*/
@InterfaceAudience.Private
public class DelegationTokenRenewer<T extends FileSystem & DelegationTokenRenewer.Renewable>
public class DelegationTokenRenewer
extends Thread {
/** The renewable interface used by the renewer. */
public interface Renewable {
@ -93,7 +93,7 @@ public class DelegationTokenRenewer<T extends FileSystem & DelegationTokenRenewe
* @param newTime the new time
*/
private void updateRenewalTime() {
renewalTime = RENEW_CYCLE + Time.now();
renewalTime = renewCycle + Time.now();
}
/**
@ -136,32 +136,67 @@ public class DelegationTokenRenewer<T extends FileSystem & DelegationTokenRenewe
/** Wait for 95% of a day between renewals */
private static final int RENEW_CYCLE = 24 * 60 * 60 * 950;
private DelayQueue<RenewAction<T>> queue = new DelayQueue<RenewAction<T>>();
public DelegationTokenRenewer(final Class<T> clazz) {
@InterfaceAudience.Private
protected static int renewCycle = RENEW_CYCLE;
/** Queue to maintain the RenewActions to be processed by the {@link #run()} */
private volatile DelayQueue<RenewAction<?>> queue = new DelayQueue<RenewAction<?>>();
/**
* Create the singleton instance. However, the thread can be started lazily in
* {@link #addRenewAction(FileSystem)}
*/
private static DelegationTokenRenewer INSTANCE = null;
private DelegationTokenRenewer(final Class<? extends FileSystem> clazz) {
super(clazz.getSimpleName() + "-" + DelegationTokenRenewer.class.getSimpleName());
setDaemon(true);
}
/** Add a renew action to the queue. */
public void addRenewAction(final T fs) {
queue.add(new RenewAction<T>(fs));
public static synchronized DelegationTokenRenewer getInstance() {
if (INSTANCE == null) {
INSTANCE = new DelegationTokenRenewer(FileSystem.class);
}
return INSTANCE;
}
/** Add a renew action to the queue. */
public synchronized <T extends FileSystem & Renewable> void addRenewAction(final T fs) {
queue.add(new RenewAction<T>(fs));
if (!isAlive()) {
start();
}
}
/** Remove the associated renew action from the queue */
public synchronized <T extends FileSystem & Renewable> void removeRenewAction(
final T fs) {
for (RenewAction<?> action : queue) {
if (action.weakFs.get() == fs) {
queue.remove(action);
return;
}
}
}
@SuppressWarnings("static-access")
@Override
public void run() {
for(;;) {
RenewAction<T> action = null;
RenewAction<?> action = null;
try {
synchronized (this) {
action = queue.take();
if (action.renew()) {
action.updateRenewalTime();
queue.add(action);
}
}
} catch (InterruptedException ie) {
return;
} catch (Exception ie) {
T.LOG.warn("Failed to renew token, action=" + action, ie);
action.weakFs.get().LOG.warn("Failed to renew token, action=" + action,
ie);
}
}
}
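As context for the singleton change above, here is a minimal sketch of how a token-renewing
file system is expected to use the new register/deregister API, mirroring the HftpFileSystem
and WebHdfsFileSystem changes later in this commit. MyTokenFileSystem is a hypothetical class
name used only for illustration, not part of the patch.

import java.io.IOException;
import org.apache.hadoop.fs.DelegationTokenRenewer;
import org.apache.hadoop.fs.FileSystem;

public abstract class MyTokenFileSystem extends FileSystem
    implements DelegationTokenRenewer.Renewable {

  private DelegationTokenRenewer dtRenewer = null;

  // Register this instance with the shared renewer; the daemon thread starts lazily.
  protected synchronized void addRenewAction(final MyTokenFileSystem fs) {
    if (dtRenewer == null) {
      dtRenewer = DelegationTokenRenewer.getInstance();
    }
    dtRenewer.addRenewAction(fs);
  }

  @Override
  public void close() throws IOException {
    super.close();
    if (dtRenewer != null) {
      dtRenewer.removeRenewAction(this); // deregister when the file system is closed
    }
  }
}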


@ -1119,7 +1119,7 @@ public final class FileContext {
* @param target The symlink's absolute target
* @return Fully qualified version of the target.
*/
private Path qualifySymlinkTarget(final AbstractFileSystem pathFS,
private static Path qualifySymlinkTarget(final AbstractFileSystem pathFS,
Path pathWithLink, Path target) {
// NB: makeQualified uses the target's scheme and authority, if
// specified, and the scheme and authority of pathFS, if not.
@ -2321,7 +2321,7 @@ public final class FileContext {
* Class used to perform an operation on and resolve symlinks in a
* path. The operation may potentially span multiple file systems.
*/
protected abstract class FSLinkResolver<T> {
protected static abstract class FSLinkResolver<T> {
// The maximum number of symbolic link components in a path
private static final int MAX_PATH_LINKS = 32;


@ -23,11 +23,13 @@ import java.io.IOException;
import junit.framework.TestCase;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
/**
* <p>
@ -43,7 +45,7 @@ import org.apache.hadoop.fs.Path;
* </p>
*/
public abstract class FileSystemContractBaseTest extends TestCase {
protected final static String TEST_UMASK = "062";
protected FileSystem fs;
protected byte[] data = new byte[getBlockSize() * 2]; // two blocks of data
{
@ -152,6 +154,25 @@ public abstract class FileSystemContractBaseTest extends TestCase {
}
public void testMkdirsWithUmask() throws Exception {
if (fs.getScheme().equals("s3") || fs.getScheme().equals("s3n")) {
// skip permission tests for S3FileSystem until HDFS-1333 is fixed.
return;
}
Configuration conf = fs.getConf();
String oldUmask = conf.get(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY);
try {
conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, TEST_UMASK);
final Path dir = new Path("/test/newDir");
assertTrue(fs.mkdirs(dir, new FsPermission((short)0777)));
FileStatus status = fs.getFileStatus(dir);
assertTrue(status.isDirectory());
assertEquals((short)0715, status.getPermission().toShort());
} finally {
conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, oldUmask);
}
}
public void testGetFileStatusThrowsExceptionForNonExistentFile()
throws Exception {
try {


@ -0,0 +1,159 @@
package org.apache.hadoop.fs;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;
import org.apache.hadoop.util.Progressable;
import org.junit.Before;
import org.junit.Test;
public class TestDelegationTokenRenewer {
private static final int RENEW_CYCLE = 1000;
private static final int MAX_RENEWALS = 100;
@SuppressWarnings("rawtypes")
static class TestToken extends Token {
public volatile int renewCount = 0;
@Override
public long renew(Configuration conf) {
if (renewCount == MAX_RENEWALS) {
Thread.currentThread().interrupt();
} else {
renewCount++;
}
return renewCount;
}
}
static class TestFileSystem extends FileSystem implements
DelegationTokenRenewer.Renewable {
private Configuration mockConf = mock(Configuration.class);;
private TestToken testToken = new TestToken();
@Override
public Configuration getConf() {
return mockConf;
}
@Override
public Token<?> getRenewToken() {
return testToken;
}
@Override
public URI getUri() {
return null;
}
@Override
public FSDataInputStream open(Path f, int bufferSize) throws IOException {
return null;
}
@Override
public FSDataOutputStream create(Path f, FsPermission permission,
boolean overwrite, int bufferSize, short replication, long blockSize,
Progressable progress) throws IOException {
return null;
}
@Override
public FSDataOutputStream append(Path f, int bufferSize,
Progressable progress) throws IOException {
return null;
}
@Override
public boolean rename(Path src, Path dst) throws IOException {
return false;
}
@Override
public boolean delete(Path f, boolean recursive) throws IOException {
return false;
}
@Override
public FileStatus[] listStatus(Path f) throws FileNotFoundException,
IOException {
return null;
}
@Override
public void setWorkingDirectory(Path new_dir) {
}
@Override
public Path getWorkingDirectory() {
return null;
}
@Override
public boolean mkdirs(Path f, FsPermission permission) throws IOException {
return false;
}
@Override
public FileStatus getFileStatus(Path f) throws IOException {
return null;
}
@Override
public <T extends TokenIdentifier> void setDelegationToken(Token<T> token) {
return;
}
}
private DelegationTokenRenewer renewer;
@Before
public void setup() {
DelegationTokenRenewer.renewCycle = RENEW_CYCLE;
renewer = DelegationTokenRenewer.getInstance();
}
@Test
public void testAddRenewAction() throws IOException, InterruptedException {
TestFileSystem tfs = new TestFileSystem();
renewer.addRenewAction(tfs);
for (int i = 0; i < 10; i++) {
Thread.sleep(RENEW_CYCLE);
if (tfs.testToken.renewCount > 0) {
return;
}
}
assertTrue("Token not renewed even after 10 seconds",
(tfs.testToken.renewCount > 0));
}
@Test
public void testRemoveRenewAction() throws IOException, InterruptedException {
TestFileSystem tfs = new TestFileSystem();
renewer.addRenewAction(tfs);
for (int i = 0; i < 10; i++) {
Thread.sleep(RENEW_CYCLE);
if (tfs.testToken.renewCount > 0) {
renewer.removeRenewAction(tfs);
break;
}
}
assertTrue("Token not renewed even once",
(tfs.testToken.renewCount > 0));
assertTrue("Token not removed",
(tfs.testToken.renewCount < MAX_RENEWALS));
}
}


@ -164,6 +164,9 @@ Trunk (Unreleased)
HDFS-4206. Change the fields in INode and its subclasses to private.
(szetszwo)
HDFS-4215. Remove locking from addToParent(..) since it is used in image
loading, and add INode.isFile(). (szetszwo)
OPTIMIZATIONS
BUG FIXES
@ -617,6 +620,13 @@ Release 2.0.3-alpha - Unreleased
HDFS-4171. WebHDFS and HttpFs should accept only valid Unix user
names. (tucu)
HDFS-4178. Shell scripts should not close stderr (Andy Isaacson via daryn)
HDFS-4179. BackupNode: allow reads, fix checkpointing, safeMode. (shv)
HDFS-4216. Do not ignore QuotaExceededException when adding symlinks.
(szetszwo)
Release 2.0.2-alpha - 2012-09-07
INCOMPATIBLE CHANGES


@ -74,7 +74,7 @@ fi
#---------------------------------------------------------
# secondary namenodes (if any)
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>&-)
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
if [ -n "$SECONDARY_NAMENODES" ]; then
echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"


@ -50,7 +50,7 @@ fi
#---------------------------------------------------------
# secondary namenodes (if any)
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>&-)
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
if [ -n "$SECONDARY_NAMENODES" ]; then
echo "Stopping secondary namenodes [$SECONDARY_NAMENODES]"


@ -82,12 +82,8 @@ import org.xml.sax.helpers.XMLReaderFactory;
@InterfaceStability.Evolving
public class HftpFileSystem extends FileSystem
implements DelegationTokenRenewer.Renewable {
private static final DelegationTokenRenewer<HftpFileSystem> dtRenewer
= new DelegationTokenRenewer<HftpFileSystem>(HftpFileSystem.class);
static {
HttpURLConnection.setFollowRedirects(true);
dtRenewer.start();
}
public static final Text TOKEN_KIND = new Text("HFTP delegation");
@ -106,6 +102,16 @@ public class HftpFileSystem extends FileSystem
private static final HftpDelegationTokenSelector hftpTokenSelector =
new HftpDelegationTokenSelector();
private DelegationTokenRenewer dtRenewer = null;
private synchronized void addRenewAction(final HftpFileSystem hftpFs) {
if (dtRenewer == null) {
dtRenewer = DelegationTokenRenewer.getInstance();
}
dtRenewer.addRenewAction(hftpFs);
}
public static final SimpleDateFormat getDateFormat() {
final SimpleDateFormat df = new SimpleDateFormat(HFTP_DATE_FORMAT);
df.setTimeZone(TimeZone.getTimeZone(HFTP_TIMEZONE));
@ -202,7 +208,7 @@ public class HftpFileSystem extends FileSystem
if (token != null) {
setDelegationToken(token);
if (createdToken) {
dtRenewer.addRenewAction(this);
addRenewAction(this);
LOG.debug("Created new DT for " + token.getService());
} else {
LOG.debug("Found existing DT for " + token.getService());
@ -395,6 +401,14 @@ public class HftpFileSystem extends FileSystem
return new FSDataInputStream(new RangeHeaderInputStream(u));
}
@Override
public void close() throws IOException {
super.close();
if (dtRenewer != null) {
dtRenewer.removeRenewAction(this); // blocks
}
}
/** Class to parse and store a listing reply from the server. */
class LsParser extends DefaultHandler {


@ -69,6 +69,8 @@ public class BackupNode extends NameNode {
private static final String BN_HTTP_ADDRESS_NAME_KEY = DFSConfigKeys.DFS_NAMENODE_BACKUP_HTTP_ADDRESS_KEY;
private static final String BN_HTTP_ADDRESS_DEFAULT = DFSConfigKeys.DFS_NAMENODE_BACKUP_HTTP_ADDRESS_DEFAULT;
private static final String BN_SERVICE_RPC_ADDRESS_KEY = DFSConfigKeys.DFS_NAMENODE_BACKUP_SERVICE_RPC_ADDRESS_KEY;
private static final float BN_SAFEMODE_THRESHOLD_PCT_DEFAULT = 1.5f;
private static final int BN_SAFEMODE_EXTENSION_DEFAULT = Integer.MAX_VALUE;
/** Name-node proxy */
NamenodeProtocol namenode;
@ -127,6 +129,10 @@ public class BackupNode extends NameNode {
@Override // NameNode
protected void loadNamesystem(Configuration conf) throws IOException {
conf.setFloat(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
BN_SAFEMODE_THRESHOLD_PCT_DEFAULT);
conf.setInt(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY,
BN_SAFEMODE_EXTENSION_DEFAULT);
BackupImage bnImage = new BackupImage(conf);
this.namesystem = new FSNamesystem(conf, bnImage);
bnImage.setNamesystem(namesystem);
@ -423,9 +429,9 @@ public class BackupNode extends NameNode {
return;
}
if (OperationCategory.JOURNAL != op &&
!(OperationCategory.READ == op && allowStaleStandbyReads)) {
!(OperationCategory.READ == op && !isRole(NamenodeRole.CHECKPOINT))) {
String msg = "Operation category " + op
+ " is not supported at the BackupNode";
+ " is not supported at " + getRole();
throw new StandbyException(msg);
}
}


@ -206,6 +206,7 @@ class Checkpointer extends Daemon {
RemoteEditLogManifest manifest =
getRemoteNamenodeProxy().getEditLogManifest(bnImage.getLastAppliedTxId() + 1);
boolean needReloadImage = false;
if (!manifest.getLogs().isEmpty()) {
RemoteEditLog firstRemoteLog = manifest.getLogs().get(0);
// we don't have enough logs to roll forward using only logs. Need
@ -218,13 +219,10 @@ class Checkpointer extends Daemon {
bnStorage, true);
bnImage.saveDigestAndRenameCheckpointImage(
sig.mostRecentCheckpointTxId, downloadedHash);
lastApplied = sig.mostRecentCheckpointTxId;
LOG.info("Loading image with txid " + sig.mostRecentCheckpointTxId);
File file = bnStorage.findImageFile(sig.mostRecentCheckpointTxId);
bnImage.reloadFromImageFile(file, backupNode.getNamesystem());
needReloadImage = true;
}
lastApplied = bnImage.getLastAppliedTxId();
if (firstRemoteLog.getStartTxId() > lastApplied + 1) {
throw new IOException("No logs to roll forward from " + lastApplied);
}
@ -235,6 +233,11 @@ class Checkpointer extends Daemon {
backupNode.nnHttpAddress, log, bnStorage);
}
if(needReloadImage) {
LOG.info("Loading image with txid " + sig.mostRecentCheckpointTxId);
File file = bnStorage.findImageFile(sig.mostRecentCheckpointTxId);
bnImage.reloadFromImageFile(file, backupNode.getNamesystem());
}
rollForwardByApplyingLogs(manifest, bnImage, backupNode.getNamesystem());
}
@ -243,8 +246,9 @@ class Checkpointer extends Daemon {
backupNode.namesystem.writeLock();
try {
backupNode.namesystem.dir.setReady();
if(backupNode.namesystem.getBlocksTotal() > 0) {
backupNode.namesystem.setBlockTotal();
}
bnImage.saveFSImageInAllDirs(backupNode.getNamesystem(), txid);
bnStorage.writeAll();
} finally {
@ -284,9 +288,9 @@ class Checkpointer extends Daemon {
List<EditLogInputStream> editsStreams = Lists.newArrayList();
for (RemoteEditLog log : manifest.getLogs()) {
if (log.getEndTxId() > dstImage.getLastAppliedTxId()) {
File f = dstStorage.findFinalizedEditsFile(
log.getStartTxId(), log.getEndTxId());
if (log.getStartTxId() > dstImage.getLastAppliedTxId()) {
editsStreams.add(new EditLogFileInputStream(f, log.getStartTxId(),
log.getEndTxId(), true));
}


@ -77,6 +77,12 @@ import com.google.common.base.Preconditions;
*
*************************************************/
public class FSDirectory implements Closeable {
private static INodeDirectoryWithQuota createRoot(FSNamesystem namesystem) {
final INodeDirectoryWithQuota r = new INodeDirectoryWithQuota(
INodeDirectory.ROOT_NAME,
namesystem.createFsOwnerPermissions(new FsPermission((short)0755)));
return INodeDirectorySnapshottable.newInstance(r, 0);
}
INodeDirectoryWithQuota rootDir;
FSImage fsImage;
@ -125,16 +131,7 @@ public class FSDirectory implements Closeable {
FSDirectory(FSImage fsImage, FSNamesystem ns, Configuration conf) {
this.dirLock = new ReentrantReadWriteLock(true); // fair
this.cond = dirLock.writeLock().newCondition();
rootDir = createRoot(ns);
this.namesystem = ns;
int threshold = conf.getInt(
DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY,
DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_DEFAULT);
NameNode.LOG.info("Caching file names occuring more than " + threshold
+ " times");
this.nameCache = new NameCache<ByteArray>(threshold);
reset();
this.fsImage = fsImage;
int configuredLimit = conf.getInt(
DFSConfigKeys.DFS_LIST_LIMIT, DFSConfigKeys.DFS_LIST_LIMIT_DEFAULT);
@ -148,6 +145,14 @@ public class FSDirectory implements Closeable {
this.maxDirItems = conf.getInt(
DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY,
DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_DEFAULT);
int threshold = conf.getInt(
DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY,
DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_DEFAULT);
NameNode.LOG.info("Caching file names occuring more than " + threshold
+ " times");
nameCache = new NameCache<ByteArray>(threshold);
namesystem = ns;
}
private FSNamesystem getFSNamesystem() {
@ -309,35 +314,6 @@ public class FSDirectory implements Closeable {
return newNode;
}
INodeDirectory addToParent(INodeDirectory parentINode,
INode newNode, boolean propagateModTime) {
// NOTE: This does not update space counts for parents
INodeDirectory newParent = null;
writeLock();
try {
try {
newParent = rootDir.addToParent(newNode, parentINode,
propagateModTime);
cacheName(newNode);
} catch (FileNotFoundException e) {
return null;
}
if(newParent == null)
return null;
if(!newNode.isDirectory() && !newNode.isSymlink()) {
// Add file->block mapping
INodeFile newF = (INodeFile)newNode;
BlockInfo[] blocks = newF.getBlocks();
for (int i = 0; i < blocks.length; i++) {
newF.setBlock(i, getBlockManager().addBlockCollection(blocks[i], newF));
}
}
} finally {
writeUnlock();
}
return newParent;
}
/**
* Add a block to the file. Returns a reference to the added block.
*/
@ -845,11 +821,7 @@ public class FSDirectory implements Closeable {
final INodesInPath inodesInPath = rootDir.getMutableINodesInPath(src, true);
final INode[] inodes = inodesInPath.getINodes();
INode inode = inodes[inodes.length - 1];
if (inode == null) {
if (inode == null || !inode.isFile()) {
return null;
}
assert !inode.isSymlink();
if (inode.isDirectory()) {
return null;
} }
INodeFile fileNode = (INodeFile)inode;
@ -868,22 +840,15 @@ public class FSDirectory implements Closeable {
}
/**
* Get the blocksize of a file
* @param filename the filename
* @return the number of bytes
* @param path the file path
* @return the block size of the file.
*/
long getPreferredBlockSize(String filename) throws UnresolvedLinkException,
long getPreferredBlockSize(String path) throws UnresolvedLinkException,
FileNotFoundException, IOException {
readLock();
try {
INode inode = rootDir.getNode(filename, false);
if (inode == null) {
return INodeFile.valueOf(rootDir.getNode(path, false), path
).getPreferredBlockSize();
throw new FileNotFoundException("File does not exist: " + filename);
}
if (inode.isDirectory() || inode.isSymlink()) {
throw new IOException("Getting block size of non-file: "+ filename);
}
return ((INodeFile)inode).getPreferredBlockSize();
} finally {
readUnlock();
}
@ -897,9 +862,7 @@ public class FSDirectory implements Closeable {
if (inode == null) {
return false;
}
return inode.isDirectory() || inode.isSymlink()
? true
: ((INodeFile)inode).getBlocks() != null;
return !inode.isFile() || ((INodeFile)inode).getBlocks() != null;
} finally {
readUnlock();
}
@ -1336,14 +1299,8 @@ public class FSDirectory implements Closeable {
waitForReady();
readLock();
try {
INode targetNode = rootDir.getNode(src, false);
if (targetNode == null)
return null;
if (targetNode.isDirectory())
return null;
if (targetNode.isSymlink())
return null;
return ((INodeFile)targetNode).getBlocks();
final INode i = rootDir.getNode(src, false);
return i != null && i.isFile()? ((INodeFile)i).getBlocks(): null;
} finally {
readUnlock();
}
@ -2151,11 +2108,7 @@ public class FSDirectory implements Closeable {
writeLock();
try {
setReady(false);
final INodeDirectoryWithQuota r = new INodeDirectoryWithQuota(
INodeDirectory.ROOT_NAME,
getFSNamesystem().createFsOwnerPermissions(new FsPermission((short)0755)),
Long.MAX_VALUE, UNKNOWN_DISK_SPACE);
rootDir = INodeDirectorySnapshottable.newInstance(r, 0);
rootDir = createRoot(getFSNamesystem());
nameCache.reset();
} finally {
writeUnlock();
@ -2250,7 +2203,7 @@ public class FSDirectory implements Closeable {
INodeSymlink addSymlink(String path, String target,
PermissionStatus dirPerms, boolean createParent)
throws UnresolvedLinkException, FileAlreadyExistsException,
QuotaExceededException, IOException {
QuotaExceededException, SnapshotAccessControlException {
waitForReady();
final long modTime = now();
@ -2264,7 +2217,7 @@ public class FSDirectory implements Closeable {
INodeSymlink newNode = null;
writeLock();
try {
newNode = unprotectedSymlink(path, target, modTime, modTime,
newNode = unprotectedAddSymlink(path, target, modTime, modTime,
new PermissionStatus(userName, null, FsPermission.getDefault()));
} finally {
writeUnlock();
@ -2284,23 +2237,12 @@ public class FSDirectory implements Closeable {
/**
* Add the specified path into the namespace. Invoked from edit log processing.
*/
INodeSymlink unprotectedSymlink(String path, String target, long modTime,
INodeSymlink unprotectedAddSymlink(String path, String target, long mtime,
long atime, PermissionStatus perm)
throws UnresolvedLinkException {
throws UnresolvedLinkException, QuotaExceededException {
assert hasWriteLock();
INodeSymlink newNode = new INodeSymlink(target, modTime, atime, perm);
try {
newNode = addNode(path, newNode, UNKNOWN_DISK_SPACE);
} catch (UnresolvedLinkException e) {
/* All UnresolvedLinkExceptions should have been resolved by now, but we
* should re-throw them in case that changes so they are not swallowed
* by catching IOException below.
*/
throw e;
} catch (IOException e) {
return null;
}
return newNode;
final INodeSymlink symlink = new INodeSymlink(target, mtime, atime, perm);
return addNode(path, symlink, UNKNOWN_DISK_SPACE);
}
/**
@ -2309,7 +2251,7 @@ public class FSDirectory implements Closeable {
*/
void cacheName(INode inode) {
// Name is cached only for files
if (inode.isDirectory() || inode.isSymlink()) {
if (!inode.isFile()) {
return;
}
ByteArray name = new ByteArray(inode.getLocalNameBytes());


@ -426,7 +426,7 @@ public class FSEditLogLoader {
}
case OP_SYMLINK: {
SymlinkOp symlinkOp = (SymlinkOp)op;
fsDir.unprotectedSymlink(symlinkOp.path, symlinkOp.value,
fsDir.unprotectedAddSymlink(symlinkOp.path, symlinkOp.value,
symlinkOp.mtime, symlinkOp.atime,
symlinkOp.permissionStatus);
break;


@ -38,7 +38,6 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.LayoutVersion;
import org.apache.hadoop.hdfs.protocol.LayoutVersion.Feature;
import org.apache.hadoop.hdfs.server.common.GenerationStamp;
import org.apache.hadoop.hdfs.server.common.InconsistentFSStateException;
import org.apache.hadoop.hdfs.server.common.Storage;
import org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable;
@ -1020,6 +1019,7 @@ public class FSImage implements Closeable {
NamenodeCommand startCheckpoint(NamenodeRegistration bnReg, // backup node
NamenodeRegistration nnReg) // active name-node
throws IOException {
LOG.info("Start checkpoint at txid " + getEditLog().getLastWrittenTxId());
String msg = null;
// Verify that checkpoint is allowed
if(bnReg.getNamespaceID() != storage.getNamespaceID())
@ -1059,6 +1059,7 @@ public class FSImage implements Closeable {
* @throws IOException if the checkpoint fields are inconsistent
*/
void endCheckpoint(CheckpointSignature sig) throws IOException {
LOG.info("End checkpoint at txid " + getEditLog().getLastWrittenTxId());
sig.validateStorageInfo(this);
}


@ -42,6 +42,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.LayoutVersion;
import org.apache.hadoop.hdfs.protocol.LayoutVersion.Feature;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
import org.apache.hadoop.hdfs.server.common.InconsistentFSStateException;
import org.apache.hadoop.hdfs.util.ReadOnlyList;
import org.apache.hadoop.io.MD5Hash;
@ -202,7 +203,7 @@ class FSImageFormat {
fsDir.rootDir.setQuota(nsQuota, dsQuota);
}
fsDir.rootDir.setModificationTime(root.getModificationTime());
fsDir.rootDir.setPermissionStatus(root.getPermissionStatus());
fsDir.rootDir.clonePermissionStatus(root);
}
/**
@ -258,7 +259,7 @@ class FSImageFormat {
// add to parent
newNode.setLocalName(localName);
namesystem.dir.addToParent(parent, newNode, false);
addToParent(parent, newNode);
}
return numChildren;
}
@ -293,7 +294,30 @@ class FSImageFormat {
// add new inode
newNode.setLocalName(pathComponents[pathComponents.length-1]);
parentINode = fsDir.addToParent(parentINode, newNode, false);
addToParent(parentINode, newNode);
}
}
/**
* Add the child node to parent and, if child is a file, update block map.
* This method is only used for image loading so that synchronization,
* modification time update and space count update are not needed.
*/
void addToParent(INodeDirectory parent, INode child) {
// NOTE: This does not update space counts for parents
if (parent.addChild(child, false) == null) {
return;
}
namesystem.dir.cacheName(child);
if (child.isFile()) {
// Add file->block mapping
final INodeFile file = (INodeFile)child;
final BlockInfo[] blocks = file.getBlocks();
final BlockManager bm = namesystem.getBlockManager();
for (int i = 0; i < blocks.length; i++) {
file.setBlock(i, bm.addBlockCollection(blocks[i], file));
}
}
}


@ -4048,7 +4048,8 @@ public class FSNamesystem implements Namesystem, FSClusterStats,
// of the number of total blocks in the system.
this.shouldIncrementallyTrackBlocks = true;
}
if(blockSafe < 0)
this.blockSafe = 0;
checkMode();
}


@ -90,17 +90,17 @@ public abstract class INode implements Comparable<byte[]> {
return (record & ~MASK) | (bits << OFFSET);
}
/** Set the {@link PermissionStatus} */
/** Encode the {@link PermissionStatus} to a long. */
static long toLong(PermissionStatus ps) {
long permission = 0L;
final int user = SerialNumberManager.INSTANCE.getUserSerialNumber(
ps.getUserName());
permission = PermissionStatusFormat.USER.combine(user, permission);
permission = USER.combine(user, permission);
final int group = SerialNumberManager.INSTANCE.getGroupSerialNumber(
ps.getGroupName());
permission = PermissionStatusFormat.GROUP.combine(group, permission);
permission = GROUP.combine(group, permission);
final int mode = ps.getPermission().toShort();
permission = PermissionStatusFormat.MODE.combine(mode, permission);
permission = MODE.combine(mode, permission);
return permission;
}
}
@ -114,8 +114,9 @@ public abstract class INode implements Comparable<byte[]> {
*/
private byte[] name = null;
/**
* Permission encoded using PermissionStatusFormat.
* Codes other than {@link #updatePermissionStatus(PermissionStatusFormat, long)}.
* Permission encoded using {@link PermissionStatusFormat}.
* Codes other than {@link #clonePermissionStatus(INode)}
* and {@link #updatePermissionStatus(PermissionStatusFormat, long)}
* should not modify it.
*/
private long permission = 0L;
@ -159,11 +160,9 @@ public abstract class INode implements Comparable<byte[]> {
return name.length == 0;
}
/** Set the {@link PermissionStatus} */
protected void setPermissionStatus(PermissionStatus ps) {
setUser(ps.getUserName());
setGroup(ps.getGroupName());
setPermission(ps.getPermission());
/** Clone the {@link PermissionStatus}. */
void clonePermissionStatus(INode that) {
this.permission = that.permission;
}
/** Get the {@link PermissionStatus} */
public PermissionStatus getPermissionStatus() {
@ -205,6 +204,13 @@ public abstract class INode implements Comparable<byte[]> {
updatePermissionStatus(PermissionStatusFormat.MODE, permission.toShort());
}
/**
* Check whether it's a file.
*/
public boolean isFile() {
return false;
}
/**
* Check whether it's a directory
*/


@ -395,23 +395,6 @@ public class INodeDirectory extends INode {
return addToParent(pathComponents, newNode, true) == null? null: newNode;
}
/**
* Add new inode to the parent if specified.
* Optimized version of addNode() if parent is not null.
*
* @return parent INode if new inode is inserted
* or null if it already exists.
* @throws FileNotFoundException if parent does not exist or
* is not a directory.
*/
INodeDirectory addToParent(INode newNode, INodeDirectory parent,
boolean propagateModTime) throws FileNotFoundException {
// insert into the parent children list
if(parent.addChild(newNode, propagateModTime) == null)
return null;
return parent;
}
INodeDirectory getParent(byte[][] pathComponents
) throws FileNotFoundException, UnresolvedLinkException {
if (pathComponents.length < 2) // add root


@ -19,6 +19,7 @@ package org.apache.hadoop.hdfs.server.namenode;
import org.apache.hadoop.fs.permission.PermissionStatus;
import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.NSQuotaExceededException;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
@ -26,9 +27,13 @@ import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
* Directory INode class that has a quota restriction
*/
public class INodeDirectoryWithQuota extends INodeDirectory {
private long nsQuota; /// NameSpace quota
/** Name space quota */
private long nsQuota = Long.MAX_VALUE;
/** Name space count */
private long nsCount = 1L;
private long dsQuota; /// disk space quota
/** Disk space quota */
private long dsQuota = HdfsConstants.QUOTA_RESET;
/** Disk space count */
private long diskspace = 0L;
/** Convert an existing directory inode to one with the given quota
@ -57,11 +62,8 @@ public class INodeDirectoryWithQuota extends INodeDirectory {
}
/** constructor with no quota verification */
INodeDirectoryWithQuota(String name, PermissionStatus permissions,
long nsQuota, long dsQuota) {
INodeDirectoryWithQuota(String name, PermissionStatus permissions) {
super(name, permissions);
this.nsQuota = nsQuota;
this.dsQuota = dsQuota;
}
/** Get this directory's namespace quota


@ -100,6 +100,12 @@ public class INodeFile extends INode implements BlockCollection {
this.setLocalName(f.getLocalNameBytes());
}
/** @return true unconditionally. */
@Override
public final boolean isFile() {
return true;
}
/**
* Set the {@link FsPermission} of this {@link INodeFile}.
* Since this is a file,


@ -124,15 +124,14 @@ public class WebHdfsFileSystem extends FileSystem
public static final WebHdfsDelegationTokenSelector DT_SELECTOR
= new WebHdfsDelegationTokenSelector();
private static DelegationTokenRenewer<WebHdfsFileSystem> DT_RENEWER = null;
private static synchronized void addRenewAction(final WebHdfsFileSystem webhdfs) {
if (DT_RENEWER == null) {
DT_RENEWER = new DelegationTokenRenewer<WebHdfsFileSystem>(WebHdfsFileSystem.class);
DT_RENEWER.start();
private DelegationTokenRenewer dtRenewer = null;
private synchronized void addRenewAction(final WebHdfsFileSystem webhdfs) {
if (dtRenewer == null) {
dtRenewer = DelegationTokenRenewer.getInstance();
}
DT_RENEWER.addRenewAction(webhdfs);
dtRenewer.addRenewAction(webhdfs);
}
/** Is WebHDFS enabled in conf? */
@ -766,6 +765,14 @@ public class WebHdfsFileSystem extends FileSystem
new OffsetUrlOpener(url), new OffsetUrlOpener(null)));
}
@Override
public void close() throws IOException {
super.close();
if (dtRenewer != null) {
dtRenewer.removeRenewAction(this); // blocks
}
}
class OffsetUrlOpener extends ByteRangeInputStream.URLOpener {
OffsetUrlOpener(final URL url) {
super(url);


@ -28,9 +28,11 @@ import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsTestUtil;
@ -51,6 +53,7 @@ public class TestFcHdfsSymlink extends FileContextSymlinkBaseTest {
private static MiniDFSCluster cluster;
private static WebHdfsFileSystem webhdfs;
private static DistributedFileSystem dfs;
@Override
@ -89,6 +92,7 @@ public class TestFcHdfsSymlink extends FileContextSymlinkBaseTest {
cluster = new MiniDFSCluster.Builder(conf).build();
fc = FileContext.getFileContext(cluster.getURI(0));
webhdfs = WebHdfsTestUtil.getWebHdfsFileSystem(conf);
dfs = cluster.getFileSystem();
}
@AfterClass
@ -317,4 +321,27 @@ public class TestFcHdfsSymlink extends FileContextSymlinkBaseTest {
assertEquals(2, fc.getFileStatus(link).getReplication());
assertEquals(2, fc.getFileStatus(file).getReplication());
}
@Test
/** Test createSymlink(..) with quota. */
public void testQuota() throws IOException {
final Path dir = new Path(testBaseDir1());
dfs.setQuota(dir, 3, HdfsConstants.QUOTA_DONT_SET);
final Path file = new Path(dir, "file");
createAndWriteFile(file);
//creating the first link should succeed
final Path link1 = new Path(dir, "link1");
fc.createSymlink(file, link1, false);
try {
//creating the second link should fail with QuotaExceededException.
final Path link2 = new Path(dir, "link2");
fc.createSymlink(file, link2, false);
fail("Created symlink despite quota violation");
} catch(QuotaExceededException qee) {
//expected
}
}
}


@ -21,6 +21,7 @@ package org.apache.hadoop.hdfs;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.FileSystemContractBaseTest;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
@ -33,6 +34,8 @@ public class TestHDFSFileSystemContract extends FileSystemContractBaseTest {
@Override
protected void setUp() throws Exception {
Configuration conf = new HdfsConfiguration();
conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
FileSystemContractBaseTest.TEST_UMASK);
cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
fs = cluster.getFileSystem();
defaultWorkingDirectory = "/user/" +


@ -40,6 +40,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.HAUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
import org.apache.hadoop.hdfs.server.namenode.FileJournalManager.EditLogFile;
@ -99,7 +100,10 @@ public class TestBackupNode {
c.set(DFSConfigKeys.DFS_NAMENODE_BACKUP_ADDRESS_KEY,
"127.0.0.1:0");
return (BackupNode)NameNode.createNameNode(new String[]{startupOpt.getName()}, c);
BackupNode bn = (BackupNode)NameNode.createNameNode(
new String[]{startupOpt.getName()}, c);
assertTrue(bn.getRole() + " must be in SafeMode.", bn.isInSafeMode());
return bn;
}
void waitCheckpointDone(MiniDFSCluster cluster, long txid) {
@ -358,11 +362,22 @@ public class TestBackupNode {
DFSTestUtil.createFile(bnFS, file3, fileSize, fileSize, blockSize,
replication, seed);
} catch (IOException eio) {
LOG.info("Write to BN failed as expected: ", eio);
LOG.info("Write to " + backup.getRole() + " failed as expected: ", eio);
canWrite = false;
}
assertFalse("Write to BackupNode must be prohibited.", canWrite);
// Reads are allowed for BackupNode, but not for CheckpointNode
boolean canRead = true;
try {
bnFS.exists(file2);
} catch (IOException eio) {
LOG.info("Read from " + backup.getRole() + " failed: ", eio);
canRead = false;
}
assertEquals("Reads to BackupNode are allowed, but not CheckpointNode.",
canRead, backup.isRole(NamenodeRole.BACKUP));
DFSTestUtil.createFile(fileSys, file3, fileSize, fileSize, blockSize, DFSTestUtil.createFile(fileSys, file3, fileSize, fileSize, blockSize,
replication, seed); replication, seed);

View File

@ -73,7 +73,7 @@ public class TestFsLimits {
fileAsURI(new File(MiniDFSCluster.getBaseDirectory(), fileAsURI(new File(MiniDFSCluster.getBaseDirectory(),
"namenode")).toString()); "namenode")).toString());
rootInode = new INodeDirectoryWithQuota(INodeDirectory.ROOT_NAME, perms, 0L, 0L); rootInode = new INodeDirectoryWithQuota(INodeDirectory.ROOT_NAME, perms);
inodes = new INode[]{ rootInode, null }; inodes = new INode[]{ rootInode, null };
fs = null; fs = null;
fsIsReady = true; fsIsReady = true;

View File

@ -89,11 +89,6 @@
<phase>test-compile</phase> <phase>test-compile</phase>
</execution> </execution>
</executions> </executions>
<configuration>
<excludes>
<exclude>mrapp-generated-classpath</exclude>
</excludes>
</configuration>
</plugin> </plugin>
<plugin> <plugin>
<artifactId>maven-antrun-plugin</artifactId> <artifactId>maven-antrun-plugin</artifactId>

View File

@ -601,7 +601,7 @@ public class JobHistoryEventHandler extends AbstractService
setSummarySlotSeconds(summary, context.getJob(jobId).getAllCounters()); setSummarySlotSeconds(summary, context.getJob(jobId).getAllCounters());
break; break;
default: default:
throw new YarnException("Invalid event type"); break;
} }
} }

View File

@ -18,13 +18,8 @@
package org.apache.hadoop.mapreduce.v2.util; package org.apache.hadoop.mapreduce.v2.util;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException; import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI; import java.net.URI;
import java.net.URL;
import java.util.Arrays; import java.util.Arrays;
import java.util.HashMap; import java.util.HashMap;
import java.util.List; import java.util.List;
@ -134,40 +129,10 @@ public class MRApps extends Apps {
private static void setMRFrameworkClasspath( private static void setMRFrameworkClasspath(
Map<String, String> environment, Configuration conf) throws IOException { Map<String, String> environment, Configuration conf) throws IOException {
InputStream classpathFileStream = null; // Propagate the system classpath when using the mini cluster
BufferedReader reader = null; if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
try {
// Get yarn mapreduce-app classpath from generated classpath
// Works if compile time env is same as runtime. Mainly tests.
ClassLoader thisClassLoader =
Thread.currentThread().getContextClassLoader();
String mrAppGeneratedClasspathFile = "mrapp-generated-classpath";
classpathFileStream =
thisClassLoader.getResourceAsStream(mrAppGeneratedClasspathFile);
// Put the file itself on classpath for tasks.
URL classpathResource = thisClassLoader
.getResource(mrAppGeneratedClasspathFile);
if (classpathResource != null) {
String classpathElement = classpathResource.getFile();
if (classpathElement.contains("!")) {
classpathElement = classpathElement.substring(0,
classpathElement.indexOf("!"));
} else {
classpathElement = new File(classpathElement).getParent();
}
Apps.addToEnvironment(environment, Environment.CLASSPATH.name(), Apps.addToEnvironment(environment, Environment.CLASSPATH.name(),
classpathElement); System.getProperty("java.class.path"));
}
if (classpathFileStream != null) {
reader = new BufferedReader(new InputStreamReader(classpathFileStream,
Charsets.UTF_8));
String cp = reader.readLine();
if (cp != null) {
Apps.addToEnvironment(environment, Environment.CLASSPATH.name(),
cp.trim());
}
} }
// Add standard Hadoop classes // Add standard Hadoop classes
@ -183,14 +148,6 @@ public class MRApps extends Apps {
Apps.addToEnvironment(environment, Environment.CLASSPATH.name(), c Apps.addToEnvironment(environment, Environment.CLASSPATH.name(), c
.trim()); .trim());
} }
} finally {
if (classpathFileStream != null) {
classpathFileStream.close();
}
if (reader != null) {
reader.close();
}
}
// TODO: Remove duplicates. // TODO: Remove duplicates.
} }
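The rewritten setMRFrameworkClasspath drops the generated classpath file entirely: when the configuration marks the run as a mini YARN cluster, the launching JVM's own classpath is forwarded to the task containers. A compact sketch of just that new branch, assuming the same environment map, Configuration, and helper classes that appear in the hunk above:

// Sketch of the simplified logic; environment and conf are supplied by the caller.
if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
  // Tests run the framework from the local JVM, so its classpath is exactly
  // what the task containers need to see.
  Apps.addToEnvironment(environment, Environment.CLASSPATH.name(),
      System.getProperty("java.class.path"));
}
// The standard Hadoop classpath entries are appended afterwards, as before.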

View File

@ -31,6 +31,7 @@ import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapreduce.MRConfig; import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.security.ssl.KeyStoreTestUtil; import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.junit.After; import org.junit.After;
import org.junit.AfterClass; import org.junit.AfterClass;
import org.junit.Before; import org.junit.Before;
@ -53,6 +54,8 @@ public class TestEncryptedShuffle {
System.getProperty("test.build.dir", "target/test-dir") + "/" + System.getProperty("test.build.dir", "target/test-dir") + "/" +
TestEncryptedShuffle.class.getSimpleName(); TestEncryptedShuffle.class.getSimpleName();
private String classpathDir;
@BeforeClass @BeforeClass
public static void setUp() throws Exception { public static void setUp() throws Exception {
File base = new File(BASEDIR); File base = new File(BASEDIR);
@ -62,27 +65,12 @@ public class TestEncryptedShuffle {
@Before @Before
public void createCustomYarnClasspath() throws Exception { public void createCustomYarnClasspath() throws Exception {
String classpathDir = classpathDir = KeyStoreTestUtil.getClasspathDir(TestEncryptedShuffle.class);
KeyStoreTestUtil.getClasspathDir(TestEncryptedShuffle.class);
URL url = Thread.currentThread().getContextClassLoader().
getResource("mrapp-generated-classpath");
File f = new File(url.getPath());
BufferedReader reader = new BufferedReader(new FileReader(f));
String cp = reader.readLine();
cp = cp + ":" + classpathDir;
f = new File(classpathDir, "mrapp-generated-classpath");
Writer writer = new FileWriter(f);
writer.write(cp);
writer.close();
new File(classpathDir, "core-site.xml").delete(); new File(classpathDir, "core-site.xml").delete();
} }
@After @After
public void cleanUpMiniClusterSpecialConfig() throws Exception { public void cleanUpMiniClusterSpecialConfig() throws Exception {
String classpathDir =
KeyStoreTestUtil.getClasspathDir(TestEncryptedShuffle.class);
new File(classpathDir, "mrapp-generated-classpath").delete();
new File(classpathDir, "core-site.xml").delete(); new File(classpathDir, "core-site.xml").delete();
String keystoresDir = new File(BASEDIR).getAbsolutePath(); String keystoresDir = new File(BASEDIR).getAbsolutePath();
KeyStoreTestUtil.cleanupSSLConfig(keystoresDir, classpathDir); KeyStoreTestUtil.cleanupSSLConfig(keystoresDir, classpathDir);
@ -98,6 +86,9 @@ public class TestEncryptedShuffle {
conf.set("dfs.block.access.token.enable", "false"); conf.set("dfs.block.access.token.enable", "false");
conf.set("dfs.permissions", "true"); conf.set("dfs.permissions", "true");
conf.set("hadoop.security.authentication", "simple"); conf.set("hadoop.security.authentication", "simple");
String cp = conf.get(YarnConfiguration.YARN_APPLICATION_CLASSPATH) +
File.pathSeparator + classpathDir;
conf.set(YarnConfiguration.YARN_APPLICATION_CLASSPATH, cp);
dfsCluster = new MiniDFSCluster(conf, 1, true, null); dfsCluster = new MiniDFSCluster(conf, 1, true, null);
FileSystem fileSystem = dfsCluster.getFileSystem(); FileSystem fileSystem = dfsCluster.getFileSystem();
fileSystem.mkdirs(new Path("/tmp")); fileSystem.mkdirs(new Path("/tmp"));
@ -113,8 +104,6 @@ public class TestEncryptedShuffle {
mrCluster = MiniMRClientClusterFactory.create(this.getClass(), 1, conf); mrCluster = MiniMRClientClusterFactory.create(this.getClass(), 1, conf);
// so the minicluster conf is avail to the containers. // so the minicluster conf is avail to the containers.
String classpathDir =
KeyStoreTestUtil.getClasspathDir(TestEncryptedShuffle.class);
Writer writer = new FileWriter(classpathDir + "/core-site.xml"); Writer writer = new FileWriter(classpathDir + "/core-site.xml");
mrCluster.getConfig().writeXml(writer); mrCluster.getConfig().writeXml(writer);
writer.close(); writer.close();
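The test now exposes its resource directory to containers by extending the YARN application classpath in the job configuration, rather than rewriting a generated classpath file. A minimal sketch of that configuration step, with a made-up directory in place of the value the test obtains from KeyStoreTestUtil.getClasspathDir:

// Sketch only: append an extra directory to the application classpath.
YarnConfiguration conf = new YarnConfiguration();
String extraDir = "/path/to/test/resources";                        // hypothetical
String base = conf.get(YarnConfiguration.YARN_APPLICATION_CLASSPATH, "");
conf.set(YarnConfiguration.YARN_APPLICATION_CLASSPATH,
    base.isEmpty() ? extraDir : base + File.pathSeparator + extraDir);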

View File

@ -703,11 +703,6 @@
<groupId>org.apache.maven.plugins</groupId> <groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId> <artifactId>maven-jar-plugin</artifactId>
<version>2.3.1</version> <version>2.3.1</version>
<configuration>
<excludes>
<exclude>mrapp-generated-classpath</exclude>
</excludes>
</configuration>
</plugin> </plugin>
<plugin> <plugin>
<groupId>org.apache.maven.plugins</groupId> <groupId>org.apache.maven.plugins</groupId>
@ -802,21 +797,6 @@
</execution> </execution>
</executions> </executions>
</plugin> </plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>build-classpath</id>
<phase>generate-sources</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<outputFile>target/classes/mrapp-generated-classpath</outputFile>
</configuration>
</execution>
</executions>
</plugin>
<plugin> <plugin>
<groupId>org.apache.maven.plugins</groupId> <groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId> <artifactId>maven-surefire-plugin</artifactId>

View File

@ -71,6 +71,8 @@ Release 2.0.3-alpha - Unreleased
YARN-183. Clean up fair scheduler code. (Sandy Ryza via tomwhite) YARN-183. Clean up fair scheduler code. (Sandy Ryza via tomwhite)
YARN-129. Simplify classpath construction for mini YARN tests. (tomwhite)
OPTIMIZATIONS OPTIMIZATIONS
BUG FIXES BUG FIXES
@ -245,6 +247,8 @@ Release 0.23.5 - UNRELEASED
YARN-212. NM state machine ignores an APPLICATION_CONTAINER_FINISHED event YARN-212. NM state machine ignores an APPLICATION_CONTAINER_FINISHED event
when it shouldn't (Nathan Roberts via jlowe) when it shouldn't (Nathan Roberts via jlowe)
YARN-219. NM should aggregate logs when application finishes. (bobby)
Release 0.23.4 Release 0.23.4
INCOMPATIBLE CHANGES INCOMPATIBLE CHANGES

View File

@ -90,23 +90,6 @@
</archive> </archive>
</configuration> </configuration>
</plugin> </plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>build-classpath</id>
<phase>generate-sources</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<!-- needed to run the unit test for DS to generate the required classpath
that is required in the env of the launch container in the mini yarn cluster -->
<outputFile>target/classes/yarn-apps-ds-generated-classpath</outputFile>
</configuration>
</execution>
</executions>
</plugin>
<plugin> <plugin>
<groupId>org.apache.maven.plugins</groupId> <groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId> <artifactId>maven-surefire-plugin</artifactId>

View File

@ -494,9 +494,10 @@ public class Client extends YarnClientImpl {
classPathEnv.append(":./log4j.properties"); classPathEnv.append(":./log4j.properties");
// add the runtime classpath needed for tests to work // add the runtime classpath needed for tests to work
String testRuntimeClassPath = Client.getTestRuntimeClasspath(); if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
classPathEnv.append(':'); classPathEnv.append(':');
classPathEnv.append(testRuntimeClassPath); classPathEnv.append(System.getProperty("java.class.path"));
}
env.put("CLASSPATH", classPathEnv.toString()); env.put("CLASSPATH", classPathEnv.toString());
@ -663,50 +664,4 @@ public class Client extends YarnClientImpl {
super.killApplication(appId); super.killApplication(appId);
} }
private static String getTestRuntimeClasspath() {
InputStream classpathFileStream = null;
BufferedReader reader = null;
String envClassPath = "";
LOG.info("Trying to generate classpath for app master from current thread's classpath");
try {
// Create classpath from generated classpath
// Check maven ppom.xml for generated classpath info
// Works if compile time env is same as runtime. Mainly tests.
ClassLoader thisClassLoader =
Thread.currentThread().getContextClassLoader();
String generatedClasspathFile = "yarn-apps-ds-generated-classpath";
classpathFileStream =
thisClassLoader.getResourceAsStream(generatedClasspathFile);
if (classpathFileStream == null) {
LOG.info("Could not classpath resource from class loader");
return envClassPath;
}
LOG.info("Readable bytes from stream=" + classpathFileStream.available());
reader = new BufferedReader(new InputStreamReader(classpathFileStream));
String cp = reader.readLine();
if (cp != null) {
envClassPath += cp.trim() + ":";
}
// Put the file itself on classpath for tasks.
envClassPath += thisClassLoader.getResource(generatedClasspathFile).getFile();
} catch (IOException e) {
LOG.info("Could not find the necessary resource to generate class path for tests. Error=" + e.getMessage());
}
try {
if (classpathFileStream != null) {
classpathFileStream.close();
}
if (reader != null) {
reader.close();
}
} catch (IOException e) {
LOG.info("Failed to close class path file stream or reader. Error=" + e.getMessage());
}
return envClassPath;
}
} }

View File

@ -86,23 +86,6 @@
<build> <build>
<plugins> <plugins>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>build-classpath</id>
<phase>generate-sources</phase>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<!-- needed to run the unit test for DS to generate the required classpath
that is required in the env of the launch container in the mini yarn cluster -->
<outputFile>target/classes/yarn-apps-am-generated-classpath</outputFile>
</configuration>
</execution>
</executions>
</plugin>
<plugin> <plugin>
<groupId>org.apache.maven.plugins</groupId> <groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId> <artifactId>maven-surefire-plugin</artifactId>

View File

@ -18,12 +18,9 @@
package org.apache.hadoop.yarn.applications.unmanagedamlauncher; package org.apache.hadoop.yarn.applications.unmanagedamlauncher;
import java.io.BufferedReader;
import java.io.File; import java.io.File;
import java.io.FileOutputStream; import java.io.FileOutputStream;
import java.io.IOException; import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream; import java.io.OutputStream;
import java.net.URL; import java.net.URL;
@ -80,51 +77,17 @@ public class TestUnmanagedAMLauncher {
} }
private static String getTestRuntimeClasspath() { private static String getTestRuntimeClasspath() {
InputStream classpathFileStream = null;
BufferedReader reader = null;
String envClassPath = "";
LOG.info("Trying to generate classpath for app master from current thread's classpath"); LOG.info("Trying to generate classpath for app master from current thread's classpath");
try { String envClassPath = "";
String cp = System.getProperty("java.class.path");
// Create classpath from generated classpath
// Check maven pom.xml for generated classpath info
// Works if compile time env is same as runtime. Mainly tests.
ClassLoader thisClassLoader = Thread.currentThread()
.getContextClassLoader();
String generatedClasspathFile = "yarn-apps-am-generated-classpath";
classpathFileStream = thisClassLoader
.getResourceAsStream(generatedClasspathFile);
if (classpathFileStream == null) {
LOG.info("Could not classpath resource from class loader");
return envClassPath;
}
LOG.info("Readable bytes from stream=" + classpathFileStream.available());
reader = new BufferedReader(new InputStreamReader(classpathFileStream));
String cp = reader.readLine();
if (cp != null) { if (cp != null) {
envClassPath += cp.trim() + File.pathSeparator; envClassPath += cp.trim() + File.pathSeparator;
} }
// yarn-site.xml at this location contains proper config for mini cluster // yarn-site.xml at this location contains proper config for mini cluster
ClassLoader thisClassLoader = Thread.currentThread()
.getContextClassLoader();
URL url = thisClassLoader.getResource("yarn-site.xml"); URL url = thisClassLoader.getResource("yarn-site.xml");
envClassPath += new File(url.getFile()).getParent(); envClassPath += new File(url.getFile()).getParent();
} catch (IOException e) {
LOG.info("Could not find the necessary resource to generate class path for tests. Error="
+ e.getMessage());
}
try {
if (classpathFileStream != null) {
classpathFileStream.close();
}
if (reader != null) {
reader.close();
}
} catch (IOException e) {
LOG.info("Failed to close class path file stream or reader. Error="
+ e.getMessage());
}
return envClassPath; return envClassPath;
} }

View File

@ -149,18 +149,15 @@ public class AppLogAggregatorImpl implements AppLogAggregator {
ContainerId containerId; ContainerId containerId;
while (!this.appFinishing.get()) { while (!this.appFinishing.get()) {
synchronized(this) {
try { try {
containerId = this.pendingContainers.poll(); wait(THREAD_SLEEP_TIME);
if (containerId == null) {
Thread.sleep(THREAD_SLEEP_TIME);
} else {
uploadLogsForContainer(containerId);
}
} catch (InterruptedException e) { } catch (InterruptedException e) {
LOG.warn("PendingContainers queue is interrupted"); LOG.warn("PendingContainers queue is interrupted");
this.appFinishing.set(true); this.appFinishing.set(true);
} }
} }
}
// Application is finished. Finish pending-containers // Application is finished. Finish pending-containers
while ((containerId = this.pendingContainers.poll()) != null) { while ((containerId = this.pendingContainers.poll()) != null) {
@ -251,8 +248,9 @@ public class AppLogAggregatorImpl implements AppLogAggregator {
} }
@Override @Override
public void finishLogAggregation() { public synchronized void finishLogAggregation() {
LOG.info("Application just finished : " + this.applicationId); LOG.info("Application just finished : " + this.applicationId);
this.appFinishing.set(true); this.appFinishing.set(true);
this.notifyAll();
} }
} }
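The aggregator change replaces the poll-and-sleep loop with monitor-based waiting: the aggregation thread now blocks on the object's monitor until the application is flagged as finishing, and finishLogAggregation, made synchronized, sets the flag and wakes the waiter. A stripped-down sketch of that hand-off; the field and method names follow the patch, everything else is illustrative:

import java.util.concurrent.atomic.AtomicBoolean;

class AggregatorSketch {
  private final AtomicBoolean appFinishing = new AtomicBoolean(false);
  private static final long THREAD_SLEEP_TIME = 1000;    // illustrative timeout, ms

  void run() {
    while (!appFinishing.get()) {
      synchronized (this) {
        try {
          wait(THREAD_SLEEP_TIME);          // woken early by finishLogAggregation()
        } catch (InterruptedException e) {
          appFinishing.set(true);           // treat interruption as "finish now"
        }
      }
    }
    // The real class then drains and uploads any remaining container logs.
  }

  synchronized void finishLogAggregation() {
    appFinishing.set(true);
    notifyAll();                            // release the waiting aggregation thread
  }
}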

View File

@ -517,7 +517,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs
<groupId>com.atlassian.maven.plugins</groupId> <groupId>com.atlassian.maven.plugins</groupId>
<artifactId>maven-clover2-plugin</artifactId> <artifactId>maven-clover2-plugin</artifactId>
<configuration> <configuration>
<includesAllSourceRoots>true</includesAllSourceRoots> <includesAllSourceRoots>false</includesAllSourceRoots>
<includesTestSourceRoots>true</includesTestSourceRoots> <includesTestSourceRoots>true</includesTestSourceRoots>
<licenseLocation>${cloverLicenseLocation}</licenseLocation> <licenseLocation>${cloverLicenseLocation}</licenseLocation>
<cloverDatabase>${cloverDatabase}</cloverDatabase> <cloverDatabase>${cloverDatabase}</cloverDatabase>