Compare commits

...

77 Commits

Author SHA1 Message Date
Inigo Goiri 02597b60cd HDFS-14545. RBF: Router should support GetUserMappingsProtocol. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Akira Ajisaka 8a9281afdc HDFS-14550. RBF: Failed to get statistics from NameNodes before 2.9.0. Contributed by He Xiaoqiao. 2019-06-23 09:27:04 +05:30
Ayush Saxena e2a900b217 HDFS-13404. Addendum: RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed by Takanobu Asanuma. 2019-06-23 09:27:04 +05:30
Ayush Saxena 812256baad HDFS-14526. RBF: Update the document of RBF related metrics. Contributed by Takanobu Asanuma. 2019-06-23 09:27:04 +05:30
Ayush Saxena 1579136fa7 HDFS-14508. RBF: Clean-up and refactor UI components. Contributed by Takanobu Asanuma. 2019-06-23 09:27:04 +05:30
Ayush Saxena 0c21e81e02 HDFS-13480. RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Ayush Saxena f544121239 HDFS-13955. RBF: Support secure Namenode in NamenodeHeartbeatService. Contributed by CR Hota. 2019-06-23 09:27:04 +05:30
Inigo Goiri 90f4887dcc HDFS-14475. RBF: Expose router security enabled status on the UI. Contributed by CR Hota. 2019-06-23 09:27:04 +05:30
Ayush Saxena 8edfb8a4ea HDFS-13787. RBF: Add Snapshot related ClientProtocol APIs. Contributed by Inigo Goiri. 2019-06-23 09:27:04 +05:30
Ayush Saxena 6cf674ca12 HDFS-14516. RBF: Create hdfs-rbf-site.xml for RBF specific properties. Contributed by Takanobu Asanuma. 2019-06-23 09:27:04 +05:30
Ayush Saxena 395312b821 HDFS-13909. RBF: Add Cache pools and directives related ClientProtocol APIs. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Ayush Saxena 68d4df4d7d HDFS-13255. RBF: Fail when try to remove mount point paths. Contributed by Akira Ajisaka. 2019-06-23 09:27:04 +05:30
Ayush Saxena 0512084d44 HDFS-14440. RBF: Optimize the file write process in case of multiple destinations. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Brahma Reddy Battula ec1b79e150 HDFS-13995. RBF: Security documentation. Contributed by CR Hota. 2019-06-23 09:27:04 +05:30
Giovanni Matteo Fumarola dc32bf0e4f HDFS-14447. RBF: Router should support RefreshUserMappingsProtocol. Contributed by Shen Yinjie. 2019-06-23 09:27:04 +05:30
Giovanni Matteo Fumarola a53d678ad8 HDFS-14490. RBF: Remove unnecessary quota checks. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Ayush Saxena 4afe588ba2 HDFS-14210. RBF: ACL commands should work over all the destinations. Contributed by Ayush Saxena. 2019-06-23 09:27:04 +05:30
Giovanni Matteo Fumarola 326ec1744b HDFS-14426. RBF: Add delegation token total count as one of the federation metrics. Contributed by Fengnan Li. 2019-06-23 09:27:04 +05:30
Ayush Saxena 97b672d440 HDFS-14454. RBF: getContentSummary() should allow non-existing folders. Contributed by Inigo Goiri. 2019-06-23 09:27:04 +05:30
Ayush Saxena b4e852eabe HDFS-14457. RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'. Contributed by luhuachao. 2019-06-23 09:27:04 +05:30
Brahma Reddy Battula 506d073482 HDFS-13972. RBF: Support for Delegation Token (WebHDFS). Contributed by CR Hota. 2019-06-23 09:27:04 +05:30
Ayush Saxena e7e48a4e96 HDFS-14422. RBF: Router shouldn't allow READ operations in safe mode. Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Ayush Saxena 5cd42d4019 HDFS-14369. RBF: Fix trailing / for webhdfs. Contributed by Akira Ajisaka. 2019-06-23 09:27:03 +05:30
Ayush Saxena 1fc385745b HDFS-13853. RBF: RouterAdmin update cmd is overwriting the entry not updating the existing. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Ayush Saxena 86a3cd5324 HDFS-14316. RBF: Support unavailable subclusters for mount points with multiple destinations. Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Ayush Saxena 5664b3e6d4 HDFS-14388. RBF: Prevent loading metric system when disabled. Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Ayush Saxena 5cb7a4de34 HDFS-14351. RBF: Optimize configuration item resolving for monitor namenode. Contributed by He Xiaoqiao and Inigo Goiri. 2019-06-23 09:27:03 +05:30
Giovanni Matteo Fumarola 6c686253e9 HDFS-14343. RBF: Fix renaming folders spread across multiple subclusters. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Giovanni Matteo Fumarola c99f62fdad HDFS-14334. RBF: Use human readable format for long numbers in the Router UI. Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Inigo Goiri d79685af04 HDFS-14335. RBF: Fix heartbeat typos in the Router. Contributed by CR Hota. 2019-06-23 09:27:03 +05:30
Inigo Goiri 55b499dac4 HDFS-14331. RBF: IOE While Removing Mount Entry. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Inigo Goiri 0e97ed1d79 HDFS-14329. RBF: Add maintenance nodes to federation metrics. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Inigo Goiri 7400a0a6de HDFS-14259. RBF: Fix safemode message for Router. Contributed by Ranith Sadar. 2019-06-23 09:27:03 +05:30
Inigo Goiri 0f43b36e15 HDFS-14322. RBF: Security manager should not load if security is disabled. Contributed by CR Hota. 2019-06-23 09:27:03 +05:30
Brahma Reddy Battula 8b31975280 HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed by CR Hota. 2019-06-23 09:27:03 +05:30
Inigo Goiri a701c13df2 HDFS-14307. RBF: Update tests to use internal Whitebox instead of Mockito. Contributed by CR Hota. 2019-06-23 09:27:03 +05:30
Giovanni Matteo Fumarola ef1aaa7a50 HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Giovanni Matteo Fumarola 9c46012fbb HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri. 2019-06-23 09:27:03 +05:30
Inigo Goiri a2c8633275 HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Brahma Reddy Battula d8d6c9d324 HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota. 2019-06-23 09:27:03 +05:30
Inigo Goiri bdacc8c831 HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui. 2019-06-23 09:27:03 +05:30
Giovanni Matteo Fumarola 5757a020c5 HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Surendra Singh Lilhore b4eb949d33 HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar. 2019-06-23 09:27:03 +05:30
Takanobu Asanuma c1345bc588 HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails. 2019-06-23 09:27:03 +05:30
Inigo Goiri 4c4e8df68c HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Brahma Reddy Battula 0c47bac33d HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena. 2019-06-23 09:27:03 +05:30
Brahma Reddy Battula d9c09ed990 HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma. 2019-06-23 09:27:03 +05:30
Yiqun Lin 6b5f63c25b HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan. 2019-06-23 09:27:03 +05:30
Inigo Goiri b9e0b02a6c HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan. 2019-06-23 09:27:02 +05:30
Vinayakumar B b20c5fa841 HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena. 2019-06-23 09:27:02 +05:30
Surendra Singh Lilhore 58c9bc1eca HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar. 2019-06-23 09:27:02 +05:30
Surendra Singh Lilhore b990ba5f59 HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar. 2019-06-23 09:27:02 +05:30
Yiqun Lin 85f2d54cc3 HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri. 2019-06-23 09:27:02 +05:30
Inigo Goiri 5fcfc3c306 HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng. 2019-06-23 09:27:02 +05:30
Surendra Singh Lilhore 3bb5752276 HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar. 2019-06-23 09:27:02 +05:30
Yiqun Lin ea3e7b8288 HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma. 2019-06-23 09:27:02 +05:30
Inigo Goiri 53791b97a3 HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui. 2019-06-23 09:27:02 +05:30
Inigo Goiri a75d1fcf8d HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri. 2019-06-23 09:27:02 +05:30
Yiqun Lin cd73cb8d00 HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad. 2019-06-23 09:27:02 +05:30
Takanobu Asanuma 7d8cc5d12c HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable. 2019-06-23 09:27:02 +05:30
Yiqun Lin a505876bda HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar. 2019-06-23 09:27:02 +05:30
Takanobu Asanuma 71cec6aa47 HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena. 2019-06-23 09:27:02 +05:30
Yiqun Lin dbe01391f2 HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui. 2019-06-23 09:27:02 +05:30
Yiqun Lin 88fd500c50 Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."
This reverts commit 7c0d6f65fde12ead91ed7c706521ad1d3dc995f8.
2019-06-23 09:27:02 +05:30
Yiqun Lin acafc7aee0 HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui. 2019-06-23 09:27:02 +05:30
Surendra Singh Lilhore d49ebf36d4 HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena. 2019-06-23 09:27:02 +05:30
Brahma Reddy Battula 5b24f0f44c HDFS-14089. RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService. Contributed by Ranith Sardar. 2019-06-23 09:27:02 +05:30
Brahma Reddy Battula 5df703fff6 HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar. 2019-06-23 09:27:02 +05:30
Yiqun Lin 650b0f5dfc HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri. 2019-06-23 09:27:02 +05:30
Inigo Goiri 0635de615e HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota. 2019-06-23 09:27:02 +05:30
Inigo Goiri 7d2a1f890c HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng. 2019-06-23 09:27:02 +05:30
Brahma Reddy Battula 84b33ee328 HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri. 2019-06-23 09:27:02 +05:30
Brahma Reddy Battula da154a687e HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri. 2019-06-23 09:27:02 +05:30
Inigo Goiri d2d214817d HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota. 2019-06-23 09:27:02 +05:30
Brahma Reddy Battula ab38e37523 HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng. 2019-06-23 09:27:02 +05:30
Yiqun Lin 7ac5e769fb HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka. 2019-06-23 09:27:02 +05:30
Vinayakumar B 8dfd2e5644 HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena. 2019-06-23 09:27:01 +05:30
146 changed files with 12855 additions and 1337 deletions

View File

@@ -109,6 +109,16 @@
active and stand-by states of namenode.</description>
</property>
<property>
<name>security.router.admin.protocol.acl</name>
<value>*</value>
<description>ACL for RouterAdmin Protocol. The ACL is a comma-separated
list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.
</description>
</property>
<property>
<name>security.zkfc.protocol.acl</name>
<value>*</value>

View File

@@ -218,6 +218,8 @@ public class CommonConfigurationKeys extends CommonConfigurationKeysPublic {
SECURITY_CLIENT_PROTOCOL_ACL = "security.client.protocol.acl";
public static final String SECURITY_CLIENT_DATANODE_PROTOCOL_ACL =
"security.client.datanode.protocol.acl";
public static final String SECURITY_ROUTER_ADMIN_PROTOCOL_ACL =
"security.router.admin.protocol.acl";
public static final String
SECURITY_DATANODE_PROTOCOL_ACL = "security.datanode.protocol.acl";
public static final String

View File

@@ -478,6 +478,40 @@ contains tags such as Hostname as additional information along with metrics.
| `FileIoErrorRateNumOps` | The number of file io error operations within an interval time of metric |
| `FileIoErrorRateAvgTime` | It measures the mean time in milliseconds from the start of an operation to hitting a failure |
RBFMetrics
----------------
RBFMetrics shows the metrics which are the aggregated values of sub-clusters' information in the Router-based federation.
| Name | Description |
|:---- |:---- |
| `NumFiles` | Current number of files and directories |
| `NumBlocks` | Current number of allocated blocks |
| `NumOfBlocksPendingReplication` | Current number of blocks pending to be replicated |
| `NumOfBlocksUnderReplicated` | Current number of blocks under replicated |
| `NumOfBlocksPendingDeletion` | Current number of blocks pending deletion |
| `ProvidedSpace` | The total remote storage capacity mounted in the federated cluster |
| `NumInMaintenanceLiveDataNodes` | Number of live Datanodes which are in maintenance state |
| `NumInMaintenanceDeadDataNodes` | Number of dead Datanodes which are in maintenance state |
| `NumEnteringMaintenanceDataNodes` | Number of Datanodes that are entering the maintenance state |
| `TotalCapacity` | Current raw capacity of DataNodes in bytes |
| `UsedCapacity` | Current used capacity across all DataNodes in bytes |
| `RemainingCapacity` | Current remaining capacity in bytes |
| `NumOfMissingBlocks` | Current number of missing blocks |
| `NumLiveNodes` | Number of datanodes which are currently live |
| `NumDeadNodes` | Number of datanodes which are currently dead |
| `NumStaleNodes` | Current number of DataNodes marked stale due to delayed heartbeat |
| `NumDecomLiveNodes` | Number of datanodes which have been decommissioned and are now live |
| `NumDecomDeadNodes` | Number of datanodes which have been decommissioned and are now dead |
| `NumDecommissioningNodes` | Number of datanodes in decommissioning state |
| `Namenodes` | Current information about all the namenodes |
| `Nameservices` | Current information for each registered nameservice |
| `MountTable` | The mount table for the federated filesystem |
| `Routers` | Current information about all routers |
| `NumNameservices` | Number of nameservices |
| `NumNamenodes` | Number of namenodes |
| `NumExpiredNamenodes` | Number of expired namenodes |
| `NodeUsage` | Max, Median, Min and Standard Deviation of DataNodes usage |
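As an aside (not part of the documented change itself), these aggregated values are exposed through the Router's JMX interface. A minimal sketch of reading a couple of them from inside the Router JVM follows; it assumes the FederationState bean is registered under the conventional Hadoop object name `Hadoop:service=Router,name=FederationState`, per the `MBeans.register("Router", "FederationState", ...)` call later in this change set.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/** Sketch: read a couple of the aggregated RBF values over JMX. */
public class ReadRBFMetricsSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: the Router registered the FederationState bean under the
    // usual Hadoop naming convention, which yields this ObjectName.
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName federationState =
        new ObjectName("Hadoop:service=Router,name=FederationState");

    // Attribute names match the table above / the FederationMBean getters.
    Object numLiveNodes = mbs.getAttribute(federationState, "NumLiveNodes");
    Object numFiles = mbs.getAttribute(federationState, "NumFiles");
    System.out.println("NumLiveNodes=" + numLiveNodes + ", NumFiles=" + numFiles);
  }
}
```

Outside the Router process, the same attributes are typically reachable through a JMX connector or as JSON from the Router web UI's /jmx servlet.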
RouterRPCMetrics
----------------
RouterRPCMetrics shows the statistics of the Router component in Router-based federation.

View File

@@ -133,6 +133,12 @@ public abstract class AbstractContractAppendTest extends AbstractFSContractTestB
assertPathExists("original file does not exist", target);
byte[] dataset = dataset(256, 'a', 'z');
FSDataOutputStream outputStream = getFileSystem().append(target);
if (isSupported(CREATE_VISIBILITY_DELAYED)) {
// Some filesystems, such as WebHDFS, do not guarantee sequential
// consistency, so a delay is needed. Since the lease cannot be checked
// here (it is handled in the client-side package), simply add a sleep.
Thread.sleep(100);
}
outputStream.write(dataset);
Path renamed = new Path(testPath, "renamed");
rename(target, renamed);

View File

@@ -17,6 +17,10 @@
*/
package org.apache.hadoop.hdfs.protocol;
import java.util.Collection;
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
@@ -103,4 +107,71 @@ public final class ECBlockGroupStats {
statsBuilder.append("]");
return statsBuilder.toString();
}
@Override
public int hashCode() {
return new HashCodeBuilder()
.append(lowRedundancyBlockGroups)
.append(corruptBlockGroups)
.append(missingBlockGroups)
.append(bytesInFutureBlockGroups)
.append(pendingDeletionBlocks)
.append(highestPriorityLowRedundancyBlocks)
.toHashCode();
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ECBlockGroupStats other = (ECBlockGroupStats)o;
return new EqualsBuilder()
.append(lowRedundancyBlockGroups, other.lowRedundancyBlockGroups)
.append(corruptBlockGroups, other.corruptBlockGroups)
.append(missingBlockGroups, other.missingBlockGroups)
.append(bytesInFutureBlockGroups, other.bytesInFutureBlockGroups)
.append(pendingDeletionBlocks, other.pendingDeletionBlocks)
.append(highestPriorityLowRedundancyBlocks,
other.highestPriorityLowRedundancyBlocks)
.isEquals();
}
/**
* Merge the multiple ECBlockGroupStats.
* @param stats Collection of stats to merge.
* @return A new ECBlockGroupStats merging all the input ones
*/
public static ECBlockGroupStats merge(Collection<ECBlockGroupStats> stats) {
long lowRedundancyBlockGroups = 0;
long corruptBlockGroups = 0;
long missingBlockGroups = 0;
long bytesInFutureBlockGroups = 0;
long pendingDeletionBlocks = 0;
long highestPriorityLowRedundancyBlocks = 0;
boolean hasHighestPriorityLowRedundancyBlocks = false;
for (ECBlockGroupStats stat : stats) {
lowRedundancyBlockGroups += stat.getLowRedundancyBlockGroups();
corruptBlockGroups += stat.getCorruptBlockGroups();
missingBlockGroups += stat.getMissingBlockGroups();
bytesInFutureBlockGroups += stat.getBytesInFutureBlockGroups();
pendingDeletionBlocks += stat.getPendingDeletionBlocks();
if (stat.hasHighestPriorityLowRedundancyBlocks()) {
hasHighestPriorityLowRedundancyBlocks = true;
highestPriorityLowRedundancyBlocks +=
stat.getHighestPriorityLowRedundancyBlocks();
}
}
if (hasHighestPriorityLowRedundancyBlocks) {
return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks,
highestPriorityLowRedundancyBlocks);
}
return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks);
}
}
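To illustrate the new merge helper, here is a minimal usage sketch (not part of the patch). It relies only on the five-argument constructor and the getters visible above, and assumes that constructor is public, as its use elsewhere in HDFS suggests.

```java
import java.util.Arrays;
import org.apache.hadoop.hdfs.protocol.ECBlockGroupStats;

/** Sketch: combine per-subcluster EC block group stats into one view. */
public class MergeEcStatsSketch {
  public static void main(String[] args) {
    // Stats as two subclusters might report them:
    // (lowRedundancy, corrupt, missing, bytesInFuture, pendingDeletion).
    ECBlockGroupStats ns0 = new ECBlockGroupStats(1, 0, 2, 0, 3);
    ECBlockGroupStats ns1 = new ECBlockGroupStats(4, 1, 0, 0, 2);

    // Field-by-field sum of the individual stats.
    ECBlockGroupStats merged = ECBlockGroupStats.merge(Arrays.asList(ns0, ns1));
    System.out.println(merged);  // uses the existing toString()
  }
}
```

This is the shape of the aggregation the Router performs when it fans getECBlockGroupStats() out to multiple subclusters and folds the results back together.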

View File

@@ -92,6 +92,11 @@ public final class HdfsConstants {
*/
public static final String CLIENT_NAMENODE_PROTOCOL_NAME =
"org.apache.hadoop.hdfs.protocol.ClientProtocol";
/**
* Router admin Protocol Names.
*/
public static final String ROUTER_ADMIN_PROTOCOL_NAME =
"org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol";
// Timeouts for communicating with DataNode for streaming writes/reads
public static final int READ_TIMEOUT = 60 * 1000;

View File

@@ -34,6 +34,16 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
</properties>
<dependencies>
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcprov-jdk15on</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-minikdc</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
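The bcprov-jdk15on and hadoop-minikdc test dependencies above support the Kerberos-related tests (HDFS-12284 and follow-ups). A rough sketch of standing up a test KDC with MiniKdc is shown below; the work directory, keytab path and principal name are illustrative only, not the ones the RBF tests actually use.

```java
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

/** Sketch: start an embedded KDC for a Kerberos-enabled test. */
public class RouterSecurityTestSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative paths; real tests derive these from the test setup.
    File workDir = new File("target/test-kdc");
    workDir.mkdirs();

    Properties kdcConf = MiniKdc.createConf();
    MiniKdc kdc = new MiniKdc(kdcConf, workDir);
    kdc.start();

    // Create a keytab for a hypothetical router principal.
    File keytab = new File(workDir, "router.keytab");
    kdc.createPrincipal(keytab, "router/localhost");
    System.out.println("KDC realm: " + kdc.getRealm());

    kdc.stop();
  }
}
```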

View File

@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
</configuration>
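The new, initially empty hdfs-rbf-site.xml (HDFS-14516) holds RBF-specific overrides. A sketch of the loading mechanism follows, assuming the Router registers the file as a default Configuration resource the same way hdfs-site.xml is handled; the exact wiring lives in the RBF code, not in this file.

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch: make hdfs-rbf-site.xml visible to a Configuration instance. */
public class RbfSiteLoadingSketch {
  public static void main(String[] args) {
    // Assumption: RBF registers its default/site files as default resources,
    // mirroring how hdfs-default.xml / hdfs-site.xml are picked up.
    Configuration.addDefaultResource("hdfs-rbf-default.xml");
    Configuration.addDefaultResource("hdfs-rbf-site.xml");

    Configuration conf = new Configuration();
    // Any RBF property set in hdfs-rbf-site.xml on the classpath is now visible.
    System.out.println(conf.get("dfs.federation.router.rpc-address", "<unset>"));
  }
}
```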

View File

@@ -0,0 +1,34 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.protocolPB;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
import org.apache.hadoop.hdfs.server.federation.router.NameserviceManager;
import org.apache.hadoop.hdfs.server.federation.router.RouterStateManager;
import org.apache.hadoop.ipc.GenericRefreshProtocol;
/**
* Protocol used by routeradmin to communicate with statestore.
*/
@InterfaceAudience.Private
@InterfaceStability.Stable
public interface RouterAdminProtocol extends MountTableManager,
RouterStateManager, NameserviceManager, GenericRefreshProtocol {
}

View File

@@ -19,10 +19,10 @@ package org.apache.hadoop.hdfs.protocolPB;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
import org.apache.hadoop.ipc.ProtocolInfo;
import org.apache.hadoop.security.KerberosInfo;
import org.apache.hadoop.security.token.TokenInfo;
@@ -35,9 +35,9 @@ import org.apache.hadoop.security.token.TokenInfo;
@InterfaceAudience.Private
@InterfaceStability.Stable
@KerberosInfo(
serverPrincipal = DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY)
serverPrincipal = RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY)
@TokenInfo(DelegationTokenSelector.class)
@ProtocolInfo(protocolName = HdfsConstants.CLIENT_NAMENODE_PROTOCOL_NAME,
@ProtocolInfo(protocolName = HdfsConstants.ROUTER_ADMIN_PROTOCOL_NAME,
protocolVersion = 1)
public interface RouterAdminProtocolPB extends
RouterAdminProtocolService.BlockingInterface {

View File

@@ -31,12 +31,16 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -52,12 +56,16 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -72,12 +80,16 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafe
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.UpdateMountTableEntryRequestPBImpl;
@@ -275,4 +287,37 @@ public class RouterAdminProtocolServerSideTranslatorPB implements
throw new ServiceException(e);
}
}
@Override
public RefreshMountTableEntriesResponseProto refreshMountTableEntries(
RpcController controller, RefreshMountTableEntriesRequestProto request)
throws ServiceException {
try {
RefreshMountTableEntriesRequest req =
new RefreshMountTableEntriesRequestPBImpl(request);
RefreshMountTableEntriesResponse response =
server.refreshMountTableEntries(req);
RefreshMountTableEntriesResponsePBImpl responsePB =
(RefreshMountTableEntriesResponsePBImpl) response;
return responsePB.getProto();
} catch (IOException e) {
throw new ServiceException(e);
}
}
@Override
public GetDestinationResponseProto getDestination(
RpcController controller, GetDestinationRequestProto request)
throws ServiceException {
try {
GetDestinationRequest req =
new GetDestinationRequestPBImpl(request);
GetDestinationResponse response = server.getDestination(req);
GetDestinationResponsePBImpl responsePB =
(GetDestinationResponsePBImpl)response;
return responsePB.getProto();
} catch (IOException e) {
throw new ServiceException(e);
}
}
}

View File

@@ -32,12 +32,16 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -55,12 +59,16 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -73,10 +81,14 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnableNam
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnableNameserviceResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryRequestPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryResponsePBImpl;
import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.UpdateMountTableEntryRequestPBImpl;
@@ -267,4 +279,34 @@ public class RouterAdminProtocolTranslatorPB
throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
}
}
@Override
public RefreshMountTableEntriesResponse refreshMountTableEntries(
RefreshMountTableEntriesRequest request) throws IOException {
RefreshMountTableEntriesRequestPBImpl requestPB =
(RefreshMountTableEntriesRequestPBImpl) request;
RefreshMountTableEntriesRequestProto proto = requestPB.getProto();
try {
RefreshMountTableEntriesResponseProto response =
rpcProxy.refreshMountTableEntries(null, proto);
return new RefreshMountTableEntriesResponsePBImpl(response);
} catch (ServiceException e) {
throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
}
}
@Override
public GetDestinationResponse getDestination(
GetDestinationRequest request) throws IOException {
GetDestinationRequestPBImpl requestPB =
(GetDestinationRequestPBImpl) request;
GetDestinationRequestProto proto = requestPB.getProto();
try {
GetDestinationResponseProto response =
rpcProxy.getDestination(null, proto);
return new GetDestinationResponsePBImpl(response);
} catch (ServiceException e) {
throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
}
}
}

View File

@@ -0,0 +1,52 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.protocolPB;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.HDFSPolicyProvider;
import org.apache.hadoop.security.authorize.Service;
/**
* {@link HDFSPolicyProvider} for RBF protocols.
*/
@InterfaceAudience.Private
public class RouterPolicyProvider extends HDFSPolicyProvider {
private static final Service[] RBF_SERVICES = new Service[] {
new Service(CommonConfigurationKeys.SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
RouterAdminProtocol.class) };
private final Service[] services;
public RouterPolicyProvider() {
List<Service> list = new ArrayList<>();
list.addAll(Arrays.asList(super.getServices()));
list.addAll(Arrays.asList(RBF_SERVICES));
services = list.toArray(new Service[list.size()]);
}
@Override
public Service[] getServices() {
return Arrays.copyOf(services, services.length);
}
}
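A small sketch (not part of the change) of what the provider above exposes: every inherited HDFS service ACL plus the new security.router.admin.protocol.acl entry guarding RouterAdminProtocol.

```java
import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
import org.apache.hadoop.security.authorize.Service;

/** Sketch: list the ACL keys and protocols the router policy provider serves. */
public class ListRouterPoliciesSketch {
  public static void main(String[] args) {
    RouterPolicyProvider provider = new RouterPolicyProvider();
    // Prints every ACL key and the protocol it guards; because RBF_SERVICES is
    // appended after the inherited HDFS services, the last entry should be
    // security.router.admin.protocol.acl -> RouterAdminProtocol.
    for (Service service : provider.getServices()) {
      System.out.println(service.getServiceKey() + " -> "
          + service.getProtocol().getName());
    }
  }
}
```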

View File

@@ -106,6 +106,12 @@ public interface FederationMBean {
*/
int getNumDeadNodes();
/**
* Get the number of stale datanodes.
* @return Number of stale datanodes.
*/
int getNumStaleNodes();
/**
* Get the number of decommissioning datanodes.
* @return Number of decommissioning datanodes.
@@ -124,6 +130,24 @@
*/
int getNumDecomDeadNodes();
/**
* Get the number of live datanodes which are under maintenance.
* @return Number of live datanodes which are under maintenance.
*/
int getNumInMaintenanceLiveDataNodes();
/**
* Get the number of dead datanodes which are under maintenance.
* @return Number of dead datanodes which are under maintenance.
*/
int getNumInMaintenanceDeadDataNodes();
/**
* Get the number of datanodes which are entering maintenance.
* @return Number of datanodes which are entering maintenance.
*/
int getNumEnteringMaintenanceDataNodes();
/**
* Get Max, Median, Min and Standard Deviation of DataNodes usage.
* @return the DataNode usage information, as a JSON string.
@@ -169,55 +193,87 @@ public interface FederationMBean {
/**
* When the router started.
* @return Date as a string the router started.
* @deprecated Use {@link RouterMBean#getRouterStarted()} instead.
*/
@Deprecated
String getRouterStarted();
/**
* Get the version of the router.
* @return Version of the router.
* @deprecated Use {@link RouterMBean#getVersion()} instead.
*/
@Deprecated
String getVersion();
/**
* Get the compilation date of the router.
* @return Compilation date of the router.
* @deprecated Use {@link RouterMBean#getCompiledDate()} instead.
*/
@Deprecated
String getCompiledDate();
/**
* Get the compilation info of the router.
* @return Compilation info of the router.
* @deprecated Use {@link RouterMBean#getCompileInfo()} instead.
*/
@Deprecated
String getCompileInfo();
/**
* Get the host and port of the router.
* @return Host and port of the router.
* @deprecated Use {@link RouterMBean#getHostAndPort()} instead.
*/
@Deprecated
String getHostAndPort();
/**
* Get the identifier of the router.
* @return Identifier of the router.
* @deprecated Use {@link RouterMBean#getRouterId()} instead.
*/
@Deprecated
String getRouterId();
/**
* Get the host and port of the router.
* @return Host and port of the router.
* Gets the cluster ids of the namenodes.
* @return the cluster ids of the namenodes.
* @deprecated Use {@link RouterMBean#getClusterId()} instead.
*/
String getClusterId();
/**
* Get the host and port of the router.
* @return Host and port of the router.
* Gets the block pool ids of the namenodes.
* @return the block pool ids of the namenodes.
* @deprecated Use {@link RouterMBean#getBlockPoolId()} instead.
*/
@Deprecated
String getBlockPoolId();
/**
* Get the current state of the router.
*
* @return String label for the current router state.
* @deprecated Use {@link RouterMBean#getRouterStatus()} instead.
*/
@Deprecated
String getRouterStatus();
/**
* Get the current number of delegation tokens in memory.
* @return number of DTs
* @deprecated Use {@link RouterMBean#getCurrentTokensCount()} instead.
*/
@Deprecated
long getCurrentTokensCount();
/**
* Get the security status of the router.
* @return Security status.
* @deprecated Use {@link RouterMBean#isSecurityEnabled()} instead.
*/
@Deprecated
boolean isSecurityEnabled();
}

View File

@@ -46,6 +46,8 @@ public interface FederationRPCMBean {
long getProxyOpRetries();
long getProxyOpNoNamenodes();
long getRouterFailureStateStoreOps();
long getRouterFailureReadOnlyOps();

View File

@@ -60,6 +60,8 @@ public class FederationRPCMetrics implements FederationRPCMBean {
private MutableCounterLong proxyOpNotImplemented;
@Metric("Number of operation retries")
private MutableCounterLong proxyOpRetries;
@Metric("Number of operations to hit no namenodes available")
private MutableCounterLong proxyOpNoNamenodes;
@Metric("Failed requests due to State Store unavailable")
private MutableCounterLong routerFailureStateStore;
@@ -138,6 +140,15 @@ public class FederationRPCMetrics implements FederationRPCMBean {
return proxyOpRetries.value();
}
public void incrProxyOpNoNamenodes() {
proxyOpNoNamenodes.incr();
}
@Override
public long getProxyOpNoNamenodes() {
return proxyOpNoNamenodes.value();
}
public void incrRouterFailureStateStore() {
routerFailureStateStore.incr();
}

View File

@@ -129,7 +129,7 @@ public class FederationRPCPerformanceMonitor implements RouterRpcMonitor {
public long proxyOp() {
PROXY_TIME.set(monotonicNow());
long processingTime = getProcessingTime();
if (processingTime >= 0) {
if (metrics != null && processingTime >= 0) {
metrics.addProcessingTime(processingTime);
}
return Thread.currentThread().getId();
@@ -139,7 +139,7 @@
public void proxyOpComplete(boolean success) {
if (success) {
long proxyTime = getProxyTime();
if (proxyTime >= 0) {
if (metrics != null && proxyTime >= 0) {
metrics.addProxyTime(proxyTime);
}
}
@@ -147,47 +147,72 @@
@Override
public void proxyOpFailureStandby() {
metrics.incrProxyOpFailureStandby();
if (metrics != null) {
metrics.incrProxyOpFailureStandby();
}
}
@Override
public void proxyOpFailureCommunicate() {
metrics.incrProxyOpFailureCommunicate();
if (metrics != null) {
metrics.incrProxyOpFailureCommunicate();
}
}
@Override
public void proxyOpFailureClientOverloaded() {
metrics.incrProxyOpFailureClientOverloaded();
if (metrics != null) {
metrics.incrProxyOpFailureClientOverloaded();
}
}
@Override
public void proxyOpNotImplemented() {
metrics.incrProxyOpNotImplemented();
if (metrics != null) {
metrics.incrProxyOpNotImplemented();
}
}
@Override
public void proxyOpRetries() {
metrics.incrProxyOpRetries();
if (metrics != null) {
metrics.incrProxyOpRetries();
}
}
@Override
public void proxyOpNoNamenodes() {
if (metrics != null) {
metrics.incrProxyOpNoNamenodes();
}
}
@Override
public void routerFailureStateStore() {
metrics.incrRouterFailureStateStore();
if (metrics != null) {
metrics.incrRouterFailureStateStore();
}
}
@Override
public void routerFailureSafemode() {
metrics.incrRouterFailureSafemode();
if (metrics != null) {
metrics.incrRouterFailureSafemode();
}
}
@Override
public void routerFailureReadOnly() {
metrics.incrRouterFailureReadOnly();
if (metrics != null) {
metrics.incrRouterFailureReadOnly();
}
}
@Override
public void routerFailureLocked() {
metrics.incrRouterFailureLocked();
if (metrics != null) {
metrics.incrRouterFailureLocked();
}
}

View File

@@ -74,21 +74,6 @@ public class NamenodeBeanMetrics
private static final Logger LOG =
LoggerFactory.getLogger(NamenodeBeanMetrics.class);
/** Prevent holding the page from loading too long. */
private static final String DN_REPORT_TIME_OUT =
RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
/** We only wait for 1 second. */
private static final long DN_REPORT_TIME_OUT_DEFAULT =
TimeUnit.SECONDS.toMillis(1);
/** Time to cache the DN information. */
public static final String DN_REPORT_CACHE_EXPIRE =
RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
/** We cache the DN information for 10 seconds by default. */
public static final long DN_REPORT_CACHE_EXPIRE_DEFAULT =
TimeUnit.SECONDS.toMillis(10);
/** Instance of the Router being monitored. */
private final Router router;
@@ -148,10 +133,11 @@ public class NamenodeBeanMetrics
// Initialize the cache for the DN reports
Configuration conf = router.getConfig();
this.dnReportTimeOut = conf.getTimeDuration(
DN_REPORT_TIME_OUT, DN_REPORT_TIME_OUT_DEFAULT, TimeUnit.MILLISECONDS);
RBFConfigKeys.DN_REPORT_TIME_OUT,
RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
long dnCacheExpire = conf.getTimeDuration(
DN_REPORT_CACHE_EXPIRE,
DN_REPORT_CACHE_EXPIRE_DEFAULT, TimeUnit.MILLISECONDS);
RBFConfigKeys.DN_REPORT_CACHE_EXPIRE,
RBFConfigKeys.DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS);
this.dnCache = CacheBuilder.newBuilder()
.expireAfterWrite(dnCacheExpire, TimeUnit.MILLISECONDS)
.build(
@@ -182,8 +168,12 @@ public class NamenodeBeanMetrics
}
}
private FederationMetrics getFederationMetrics() {
return this.router.getMetrics();
private RBFMetrics getRBFMetrics() throws IOException {
RBFMetrics metrics = getRouter().getMetrics();
if (metrics == null) {
throw new IOException("Federated metrics is not initialized");
}
return metrics;
}
/////////////////////////////////////////////////////////
@@ -202,28 +192,52 @@ public class NamenodeBeanMetrics
@Override
public long getUsed() {
return getFederationMetrics().getUsedCapacity();
try {
return getRBFMetrics().getUsedCapacity();
} catch (IOException e) {
LOG.debug("Failed to get the used capacity", e.getMessage());
}
return 0;
}
@Override
public long getFree() {
return getFederationMetrics().getRemainingCapacity();
try {
return getRBFMetrics().getRemainingCapacity();
} catch (IOException e) {
LOG.debug("Failed to get remaining capacity", e.getMessage());
}
return 0;
}
@Override
public long getTotal() {
return getFederationMetrics().getTotalCapacity();
try {
return getRBFMetrics().getTotalCapacity();
} catch (IOException e) {
LOG.debug("Failed to Get total capacity", e.getMessage());
}
return 0;
}
@Override
public long getProvidedCapacity() {
return getFederationMetrics().getProvidedSpace();
try {
return getRBFMetrics().getProvidedSpace();
} catch (IOException e) {
LOG.debug("Failed to get provided capacity", e.getMessage());
}
return 0;
}
@Override
public String getSafemode() {
// We assume that the global federated view is never in safe mode
return "";
try {
return getRBFMetrics().getSafemode();
} catch (IOException e) {
return "Failed to get safemode status. Please check router"
+ "log for more detail.";
}
}
@Override
@@ -275,39 +289,79 @@ public class NamenodeBeanMetrics
@Override
public long getTotalBlocks() {
return getFederationMetrics().getNumBlocks();
try {
return getRBFMetrics().getNumBlocks();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks", e.getMessage());
}
return 0;
}
@Override
public long getNumberOfMissingBlocks() {
return getFederationMetrics().getNumOfMissingBlocks();
try {
return getRBFMetrics().getNumOfMissingBlocks();
} catch (IOException e) {
LOG.debug("Failed to get number of missing blocks", e.getMessage());
}
return 0;
}
@Override
@Deprecated
public long getPendingReplicationBlocks() {
return getFederationMetrics().getNumOfBlocksPendingReplication();
try {
return getRBFMetrics().getNumOfBlocksPendingReplication();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks pending replica",
e.getMessage());
}
return 0;
}
@Override
public long getPendingReconstructionBlocks() {
return getFederationMetrics().getNumOfBlocksPendingReplication();
try {
return getRBFMetrics().getNumOfBlocksPendingReplication();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks pending replica",
e.getMessage());
}
return 0;
}
@Override
@Deprecated
public long getUnderReplicatedBlocks() {
return getFederationMetrics().getNumOfBlocksUnderReplicated();
try {
return getRBFMetrics().getNumOfBlocksUnderReplicated();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks under replicated",
e.getMessage());
}
return 0;
}
@Override
public long getLowRedundancyBlocks() {
return getFederationMetrics().getNumOfBlocksUnderReplicated();
try {
return getRBFMetrics().getNumOfBlocksUnderReplicated();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks under replicated",
e.getMessage());
}
return 0;
}
@Override
public long getPendingDeletionBlocks() {
return getFederationMetrics().getNumOfBlocksPendingDeletion();
try {
return getRBFMetrics().getNumOfBlocksPendingDeletion();
} catch (IOException e) {
LOG.debug("Failed to get number of blocks pending deletion",
e.getMessage());
}
return 0;
}
@Override
@@ -485,7 +539,12 @@ public class NamenodeBeanMetrics
@Override
public long getNNStartedTimeInMillis() {
return this.router.getStartTime();
try {
return getRouter().getStartTime();
} catch (IOException e) {
LOG.debug("Failed to get the router startup time", e.getMessage());
}
return 0;
}
@Override
@@ -541,7 +600,12 @@ public class NamenodeBeanMetrics
@Override
public long getFilesTotal() {
return getFederationMetrics().getNumFiles();
try {
return getRBFMetrics().getNumFiles();
} catch (IOException e) {
LOG.debug("Failed to get number of files", e.getMessage());
}
return 0;
}
@Override
@@ -551,46 +615,97 @@ public class NamenodeBeanMetrics
@Override
public int getNumLiveDataNodes() {
return this.router.getMetrics().getNumLiveNodes();
try {
return getRBFMetrics().getNumLiveNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of live nodes", e.getMessage());
}
return 0;
}
@Override
public int getNumDeadDataNodes() {
return this.router.getMetrics().getNumDeadNodes();
try {
return getRBFMetrics().getNumDeadNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of dead nodes", e.getMessage());
}
return 0;
}
@Override
public int getNumStaleDataNodes() {
return -1;
try {
return getRBFMetrics().getNumStaleNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of stale nodes", e.getMessage());
}
return 0;
}
@Override
public int getNumDecomLiveDataNodes() {
return this.router.getMetrics().getNumDecomLiveNodes();
try {
return getRBFMetrics().getNumDecomLiveNodes();
} catch (IOException e) {
LOG.debug("Failed to get the number of live decommissioned datanodes",
e.getMessage());
}
return 0;
}
@Override
public int getNumDecomDeadDataNodes() {
return this.router.getMetrics().getNumDecomDeadNodes();
try {
return getRBFMetrics().getNumDecomDeadNodes();
} catch (IOException e) {
LOG.debug("Failed to get the number of dead decommissioned datanodes",
e.getMessage());
}
return 0;
}
@Override
public int getNumDecommissioningDataNodes() {
return this.router.getMetrics().getNumDecommissioningNodes();
try {
return getRBFMetrics().getNumDecommissioningNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of decommissioning nodes",
e.getMessage());
}
return 0;
}
@Override
public int getNumInMaintenanceLiveDataNodes() {
try {
return getRBFMetrics().getNumInMaintenanceLiveDataNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of live in maintenance nodes",
e.getMessage());
}
return 0;
}
@Override
public int getNumInMaintenanceDeadDataNodes() {
try {
return getRBFMetrics().getNumInMaintenanceDeadDataNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of dead in maintenance nodes",
e.getMessage());
}
return 0;
}
@Override
public int getNumEnteringMaintenanceDataNodes() {
try {
return getRBFMetrics().getNumEnteringMaintenanceDataNodes();
} catch (IOException e) {
LOG.debug("Failed to get number of entering maintenance nodes",
e.getMessage());
}
return 0;
}
@@ -669,6 +784,12 @@ public class NamenodeBeanMetrics
@Override
public boolean isSecurityEnabled() {
try {
return getRBFMetrics().isSecurityEnabled();
} catch (IOException e) {
LOG.debug("Failed to get security status.",
e.getMessage());
}
return false;
}
@@ -716,4 +837,11 @@ public class NamenodeBeanMetrics
public String getVerifyECWithTopologyResult() {
return null;
}
private Router getRouter() throws IOException {
if (this.router == null) {
throw new IOException("Router is not initialized");
}
return this.router;
}
}

View File

@@ -0,0 +1,56 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.metrics;
/**
* Implementation of the State Store metrics which does not do anything.
* This is used when the metrics are disabled (e.g., tests).
*/
public class NullStateStoreMetrics extends StateStoreMetrics {
public void addRead(long latency) {}
public long getReadOps() {
return -1;
}
public double getReadAvg() {
return -1;
}
public void addWrite(long latency) {}
public long getWriteOps() {
return -1;
}
public double getWriteAvg() {
return -1;
}
public void addFailure(long latency) { }
public long getFailureOps() {
return -1;
}
public double getFailureAvg() {
return -1;
}
public void addRemove(long latency) {}
public long getRemoveOps() {
return -1;
}
public double getRemoveAvg() {
return -1;
}
public void setCacheSize(String name, int size) {}
public void reset() {}
public void shutdown() {}
}
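NullStateStoreMetrics is the null object used when the metric system is disabled (HDFS-14388), so State Store callers never need to null-check their metrics handle. A rough sketch of the selection logic follows; the configuration key and the StateStoreMetrics.create factory are assumptions based on the existing State Store code, not shown in this diff.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.metrics.NullStateStoreMetrics;
import org.apache.hadoop.hdfs.server.federation.metrics.StateStoreMetrics;

/** Sketch: pick real metrics or the no-op implementation at startup. */
public class StateStoreMetricsSelectionSketch {

  // Assumed key name; the real constant lives in the RBF configuration keys.
  private static boolean isMetricsEnabled(Configuration conf) {
    return conf.getBoolean("dfs.federation.router.metrics.enable", true);
  }

  static StateStoreMetrics pickMetrics(Configuration conf) {
    // With metrics disabled, every call becomes a no-op and the getters
    // return -1, so callers can use the handle unconditionally.
    return isMetricsEnabled(conf)
        ? StateStoreMetrics.create(conf)
        : new NullStateStoreMetrics();
  }
}
```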

View File

@@ -47,14 +47,18 @@ import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import javax.management.StandardMBean;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
import org.apache.hadoop.hdfs.server.federation.router.Router;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
import org.apache.hadoop.hdfs.server.federation.router.RouterServiceState;
import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
@@ -73,7 +77,9 @@ import org.apache.hadoop.hdfs.server.federation.store.records.MembershipStats;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
import org.apache.hadoop.hdfs.server.federation.store.records.StateStoreVersion;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.util.MBeans;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.util.VersionInfo;
import org.codehaus.jettison.json.JSONObject;
@@ -86,23 +92,25 @@ import com.google.common.annotations.VisibleForTesting;
/**
* Implementation of the Router metrics collector.
*/
public class FederationMetrics implements FederationMBean {
@Metrics(name="RBFActivity", about="RBF metrics", context="dfs")
public class RBFMetrics implements RouterMBean, FederationMBean {
private static final Logger LOG =
LoggerFactory.getLogger(FederationMetrics.class);
LoggerFactory.getLogger(RBFMetrics.class);
/** Format for a date. */
private static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss";
/** Prevent holding the page from loading for too long. */
private static final long TIME_OUT = TimeUnit.SECONDS.toMillis(1);
private final long timeOut;
/** Router interface. */
private final Router router;
/** FederationState JMX bean. */
private ObjectName beanName;
private ObjectName routerBeanName;
private ObjectName federationBeanName;
/** Resolve the namenode for each namespace. */
private final ActiveNamenodeResolver namenodeResolver;
@ -117,17 +125,26 @@ public class FederationMetrics implements FederationMBean {
private RouterStore routerStore;
public FederationMetrics(Router router) throws IOException {
public RBFMetrics(Router router) throws IOException {
this.router = router;
try {
StandardMBean bean = new StandardMBean(this, FederationMBean.class);
this.beanName = MBeans.register("Router", "FederationState", bean);
LOG.info("Registered Router MBean: {}", this.beanName);
StandardMBean bean = new StandardMBean(this, RouterMBean.class);
this.routerBeanName = MBeans.register("Router", "Router", bean);
LOG.info("Registered Router MBean: {}", this.routerBeanName);
} catch (NotCompliantMBeanException e) {
throw new RuntimeException("Bad Router MBean setup", e);
}
try {
StandardMBean bean = new StandardMBean(this, FederationMBean.class);
this.federationBeanName = MBeans.register("Router", "FederationState",
bean);
LOG.info("Registered FederationState MBean: {}", this.federationBeanName);
} catch (NotCompliantMBeanException e) {
throw new RuntimeException("Bad FederationState MBean setup", e);
}
// Resolve namenode for each nameservice
this.namenodeResolver = this.router.getNamenodeResolver();
@ -143,14 +160,23 @@ public class FederationMetrics implements FederationMBean {
this.routerStore = stateStore.getRegisteredRecordStore(
RouterStore.class);
}
// Initialize the cache for the DN reports
Configuration conf = router.getConfig();
this.timeOut = conf.getTimeDuration(RBFConfigKeys.DN_REPORT_TIME_OUT,
RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
}
/**
* Unregister the JMX beans.
*/
public void close() {
if (this.beanName != null) {
MBeans.unregister(beanName);
if (this.routerBeanName != null) {
MBeans.unregister(routerBeanName);
}
if (this.federationBeanName != null) {
MBeans.unregister(federationBeanName);
}
}
@ -262,6 +288,7 @@ public class FederationMetrics implements FederationMBean {
innerInfo.put("order", "");
}
innerInfo.put("readonly", entry.isReadOnly());
innerInfo.put("faulttolerant", entry.isFaultTolerant());
info.add(Collections.unmodifiableMap(innerInfo));
}
} catch (IOException e) {
@ -405,6 +432,12 @@ public class FederationMetrics implements FederationMBean {
return getNameserviceAggregatedInt(MembershipStats::getNumOfDeadDatanodes);
}
@Override
public int getNumStaleNodes() {
return getNameserviceAggregatedInt(
MembershipStats::getNumOfStaleDatanodes);
}
@Override
public int getNumDecommissioningNodes() {
return getNameserviceAggregatedInt(
@ -423,6 +456,24 @@ public class FederationMetrics implements FederationMBean {
MembershipStats::getNumOfDecomDeadDatanodes);
}
@Override
public int getNumInMaintenanceLiveDataNodes() {
return getNameserviceAggregatedInt(
MembershipStats::getNumOfInMaintenanceLiveDataNodes);
}
@Override
public int getNumInMaintenanceDeadDataNodes() {
return getNameserviceAggregatedInt(
MembershipStats::getNumOfInMaintenanceDeadDataNodes);
}
@Override
public int getNumEnteringMaintenanceDataNodes() {
return getNameserviceAggregatedInt(
MembershipStats::getNumOfEnteringMaintenanceDataNodes);
}
@Override // NameNodeMXBean
public String getNodeUsage() {
float median = 0;
@ -434,7 +485,7 @@ public class FederationMetrics implements FederationMBean {
try {
RouterRpcServer rpcServer = this.router.getRpcServer();
DatanodeInfo[] live = rpcServer.getDatanodeReport(
DatanodeReportType.LIVE, false, TIME_OUT);
DatanodeReportType.LIVE, false, timeOut);
if (live.length > 0) {
float totalDfsUsed = 0;
@ -568,7 +619,45 @@ public class FederationMetrics implements FederationMBean {
@Override
public String getRouterStatus() {
return "RUNNING";
return this.router.getRouterState().toString();
}
@Override
public long getCurrentTokensCount() {
RouterSecurityManager mgr =
this.router.getRpcServer().getRouterSecurityManager();
if (mgr != null && mgr.getSecretManager() != null) {
return mgr.getSecretManager().getCurrentTokensSize();
}
return -1;
}
@Override
public boolean isSecurityEnabled() {
return UserGroupInformation.isSecurityEnabled();
}
@Override
public String getSafemode() {
if (this.router.isRouterState(RouterServiceState.SAFEMODE)) {
return "Safe mode is ON. " + this.getSafeModeTip();
} else {
return "";
}
}
private String getSafeModeTip() {
String cmd = "Use \"hdfs dfsrouteradmin -safemode leave\" "
+ "to turn safe mode off.";
if (this.router.isRouterState(RouterServiceState.INITIALIZING)
|| this.router.isRouterState(RouterServiceState.UNINITIALIZED)) {
return "Router is in" + this.router.getRouterState()
+ "mode, the router will immediately return to "
+ "normal mode after some time. " + cmd;
} else if (this.router.isRouterState(RouterServiceState.SAFEMODE)) {
return "It was turned on manually. " + cmd;
}
return "";
}
/**


@ -0,0 +1,104 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.metrics;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
/**
* JMX interface for the router specific metrics.
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
public interface RouterMBean {
/**
* Get when the router was started.
* @return Date, as a string, when the router was started.
*/
String getRouterStarted();
/**
* Get the version of the router.
* @return Version of the router.
*/
String getVersion();
/**
* Get the compilation date of the router.
* @return Compilation date of the router.
*/
String getCompiledDate();
/**
* Get the compilation info of the router.
* @return Compilation info of the router.
*/
String getCompileInfo();
/**
* Get the host and port of the router.
* @return Host and port of the router.
*/
String getHostAndPort();
/**
* Get the identifier of the router.
* @return Identifier of the router.
*/
String getRouterId();
/**
* Get the current state of the router.
*
* @return String label for the current router state.
*/
String getRouterStatus();
/**
* Gets the cluster ids of the namenodes.
* @return the cluster ids of the namenodes.
*/
String getClusterId();
/**
* Gets the block pool ids of the namenodes.
* @return the block pool ids of the namenodes.
*/
String getBlockPoolId();
/**
* Get the current number of delegation tokens in memory.
* @return number of DTs
*/
long getCurrentTokensCount();
/**
* Gets the safemode status.
*
* @return the safemode status.
*/
String getSafemode();
/**
* Gets if security is enabled.
*
* @return true, if security is enabled.
*/
boolean isSecurityEnabled();
}
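
The attributes above become visible over JMX once RBFMetrics registers the bean. A minimal sketch (not part of the patch) of reading them from inside the Router process follows; the object name "Hadoop:service=Router,name=Router" is assumed from the MBeans.register("Router", "Router", ...) call in RBFMetrics.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RouterMBeanReader {
  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // Assumed object name, derived from MBeans.register("Router", "Router", bean).
    ObjectName router = new ObjectName("Hadoop:service=Router,name=Router");
    // StandardMBean attribute names are the getter names without "get"/"is".
    System.out.println("Status:   " + server.getAttribute(router, "RouterStatus"));
    System.out.println("Safemode: " + server.getAttribute(router, "Safemode"));
    System.out.println("Tokens:   " + server.getAttribute(router, "CurrentTokensCount"));
    System.out.println("Secure:   " + server.getAttribute(router, "SecurityEnabled"));
  }
}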


@ -39,7 +39,7 @@ import com.google.common.annotations.VisibleForTesting;
*/
@Metrics(name = "StateStoreActivity", about = "Router metrics",
context = "dfs")
public final class StateStoreMetrics implements StateStoreMBean {
public class StateStoreMetrics implements StateStoreMBean {
private final MetricsRegistry registry = new MetricsRegistry("router");
@ -54,6 +54,8 @@ public final class StateStoreMetrics implements StateStoreMBean {
private Map<String, MutableGaugeInt> cacheSizes;
protected StateStoreMetrics() {}
private StateStoreMetrics(Configuration conf) {
registry.tag(SessionId, "RouterSession");
registry.tag(ProcessName, "Router");


@ -61,8 +61,10 @@ public interface FileSubclusterResolver {
* cache.
*
* @param path Path to get the mount points under.
* @return List of mount points present at this path or zero-length list if
* none are found.
* @return List of mount points present at this path. Return zero-length
* list if the path is a mount point but there are no mount points
* under the path. Return null if the path is not a mount point
* and there are no mount points under the path.
* @throws IOException Throws exception if the data is not available.
*/
List<String> getMountPoints(String path) throws IOException;


@ -280,8 +280,15 @@ public class MembershipNamenodeResolver
report.getNumDecommissioningDatanodes());
stats.setNumOfActiveDatanodes(report.getNumLiveDatanodes());
stats.setNumOfDeadDatanodes(report.getNumDeadDatanodes());
stats.setNumOfStaleDatanodes(report.getNumStaleDatanodes());
stats.setNumOfDecomActiveDatanodes(report.getNumDecomLiveDatanodes());
stats.setNumOfDecomDeadDatanodes(report.getNumDecomDeadDatanodes());
stats.setNumOfInMaintenanceLiveDataNodes(
report.getNumInMaintenanceLiveDataNodes());
stats.setNumOfInMaintenanceDeadDataNodes(
report.getNumInMaintenanceDeadDataNodes());
stats.setNumOfEnteringMaintenanceDataNodes(
report.getNumEnteringMaintenanceDataNodes());
record.setStats(stats);
}


@ -21,8 +21,12 @@ import java.io.IOException;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@ -77,4 +81,28 @@ public interface MountTableManager {
*/
GetMountTableEntriesResponse getMountTableEntries(
GetMountTableEntriesRequest request) throws IOException;
/**
* Refresh the mount table entry cache from the state store. The cache is
* updated periodically, but with this API it can be refreshed immediately.
* This API is primarily meant to be called by the admin server: when mount
* table entries change, the admin server calls it to refresh the mount table
* cache of all the routers.
*
* @param request Fully populated request object.
* @return True if the mount table entries were refreshed without any error.
* @throws IOException Throws exception if the data store is not initialized.
*/
RefreshMountTableEntriesResponse refreshMountTableEntries(
RefreshMountTableEntriesRequest request) throws IOException;
/**
* Get the destination subcluster (namespace) of a file/directory.
*
* @param request Fully populated request object including the file to check.
* @return The response including the subcluster where the input file is.
* @throws IOException Throws exception if the data store is not initialized.
*/
GetDestinationResponse getDestination(
GetDestinationRequest request) throws IOException;
}
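
A hedged usage sketch of the two new operations, assuming a Router admin server on router0:8111 (the default admin port). RefreshMountTableEntriesRequest.newInstance() appears later in MountTableRefresherThread; the GetDestinationRequest.newInstance(path) factory and getDestinations() accessor are assumptions.

import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
import org.apache.hadoop.hdfs.server.federation.router.RouterClient;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.net.NetUtils;

public class MountTableManagerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Example admin address; replace with the real router admin endpoint.
    InetSocketAddress adminAddr = NetUtils.createSocketAddr("router0:8111");
    RouterClient client = new RouterClient(adminAddr, conf);
    try {
      MountTableManager mountTable = client.getMountTableManager();
      // Ask which subcluster(s) actually hold a path.
      GetDestinationResponse dest = mountTable.getDestination(
          GetDestinationRequest.newInstance("/tmp/file1"));
      System.out.println("Destinations: " + dest.getDestinations());
      // Force an immediate refresh of the mount table cache on this router.
      mountTable.refreshMountTableEntries(
          RefreshMountTableEntriesRequest.newInstance());
    } finally {
      client.close();
    }
  }
}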


@ -17,8 +17,6 @@
*/
package org.apache.hadoop.hdfs.server.federation.resolver;
import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMESERVICE_ID;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT;
@ -50,8 +48,6 @@ import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
import org.apache.hadoop.hdfs.server.federation.router.Router;
import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
@ -61,6 +57,7 @@ import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableExcep
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -90,6 +87,8 @@ public class MountTableResolver
/** If the tree has been initialized. */
private boolean init = false;
/** If the mount table is manually disabled. */
private boolean disabled = false;
/** Path -> Remote HDFS location. */
private final TreeMap<String, MountTable> tree = new TreeMap<>();
/** Path -> Remote location. */
@ -163,33 +162,22 @@ public class MountTableResolver
* @param conf Configuration for this resolver.
*/
private void initDefaultNameService(Configuration conf) {
this.defaultNameService = conf.get(
DFS_ROUTER_DEFAULT_NAMESERVICE,
DFSUtil.getNamenodeNameServiceId(conf));
this.defaultNSEnable = conf.getBoolean(
DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE,
DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT);
if (defaultNameService == null) {
LOG.warn(
"{} and {} is not set. Fallback to {} as the default name service.",
DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
Collection<String> nsIds = DFSUtilClient.getNameServiceIds(conf);
if (nsIds.isEmpty()) {
this.defaultNameService = "";
} else {
this.defaultNameService = nsIds.iterator().next();
}
if (!this.defaultNSEnable) {
LOG.warn("Default name service is disabled.");
return;
}
this.defaultNameService = conf.get(DFS_ROUTER_DEFAULT_NAMESERVICE, "");
if (this.defaultNameService.equals("")) {
this.defaultNSEnable = false;
LOG.warn("Default name service is not set.");
} else {
String enable = this.defaultNSEnable ? "enabled" : "disabled";
LOG.info("Default name service: {}, {} to read or write",
this.defaultNameService, enable);
LOG.info("Default name service: {}, enabled to read or write",
this.defaultNameService);
}
}
@ -405,7 +393,14 @@ public class MountTableResolver
};
return this.locationCache.get(path, meh);
} catch (ExecutionException e) {
throw new IOException(e);
Throwable cause = e.getCause();
final IOException ioe;
if (cause instanceof IOException) {
ioe = (IOException) cause;
} else {
ioe = new IOException(cause);
}
throw ioe;
} finally {
readLock.unlock();
}
@ -414,12 +409,13 @@ public class MountTableResolver
/**
* Build the path location to insert into the cache atomically. It must hold
* the read lock.
* @param path Path to check/insert.
* @param str Path to check/insert.
* @return New remote location.
* @throws IOException If it cannot find the location.
*/
public PathLocation lookupLocation(final String path) throws IOException {
public PathLocation lookupLocation(final String str) throws IOException {
PathLocation ret = null;
final String path = RouterAdmin.normalizeFileSystemPath(str);
MountTable entry = findDeepest(path);
if (entry != null) {
ret = buildLocation(path, entry);
@ -447,12 +443,13 @@ public class MountTableResolver
*/
public MountTable getMountPoint(final String path) throws IOException {
verifyMountTable();
return findDeepest(path);
return findDeepest(RouterAdmin.normalizeFileSystemPath(path));
}
@Override
public List<String> getMountPoints(final String path) throws IOException {
public List<String> getMountPoints(final String str) throws IOException {
verifyMountTable();
final String path = RouterAdmin.normalizeFileSystemPath(str);
Set<String> children = new TreeSet<>();
readLock.lock();
@ -508,8 +505,7 @@ public class MountTableResolver
*/
public List<MountTable> getMounts(final String path) throws IOException {
verifyMountTable();
return getTreeValues(path, false);
return getTreeValues(RouterAdmin.normalizeFileSystemPath(path), false);
}
/**
@ -517,7 +513,7 @@ public class MountTableResolver
* @throws StateStoreUnavailableException If it cannot connect to the store.
*/
private void verifyMountTable() throws StateStoreUnavailableException {
if (!this.init) {
if (!this.init || disabled) {
throw new StateStoreUnavailableException("Mount Table not initialized");
}
}
@ -539,21 +535,28 @@ public class MountTableResolver
* @param entry Mount table entry.
* @return PathLocation containing the namespace, local path.
*/
private static PathLocation buildLocation(
final String path, final MountTable entry) {
private PathLocation buildLocation(
final String path, final MountTable entry) throws IOException {
String srcPath = entry.getSourcePath();
if (!path.startsWith(srcPath)) {
LOG.error("Cannot build location, {} not a child of {}", path, srcPath);
return null;
}
List<RemoteLocation> dests = entry.getDestinations();
if (getClass() == MountTableResolver.class && dests.size() > 1) {
throw new IOException("Cannnot build location, "
+ getClass().getSimpleName()
+ " should not resolve multiple destinations for " + path);
}
String remainingPath = path.substring(srcPath.length());
if (remainingPath.startsWith(Path.SEPARATOR)) {
remainingPath = remainingPath.substring(1);
}
List<RemoteLocation> locations = new LinkedList<>();
for (RemoteLocation oneDst : entry.getDestinations()) {
for (RemoteLocation oneDst : dests) {
String nsId = oneDst.getNameserviceId();
String dest = oneDst.getDest();
String newPath = dest;
@ -660,4 +663,9 @@ public class MountTableResolver
public void setDefaultNSEnable(boolean defaultNSRWEnable) {
this.defaultNSEnable = defaultNSRWEnable;
}
@VisibleForTesting
public void setDisabled(boolean disable) {
this.disabled = disable;
}
}
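
A small test-style sketch of the path normalization introduced above; it assumes the addEntry() helper marks the resolver as initialized (as the unit tests do) and that normalization strips trailing separators before the lookup.

import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;

public class TrailingSlashSketch {
  public static void main(String[] args) throws Exception {
    MountTableResolver resolver = new MountTableResolver(new Configuration());
    resolver.addEntry(MountTable.newInstance("/user",
        Collections.singletonMap("ns0", "/user")));
    // With the normalization, a trailing separator no longer changes the result.
    PathLocation loc = resolver.getDestinationForPath("/user/file1");
    List<String> children = resolver.getMountPoints("/user/");  // same as "/user"
    System.out.println(loc + " children=" + children);
  }
}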


@ -42,6 +42,7 @@ public class NamenodeStatusReport {
/** Datanodes stats. */
private int liveDatanodes = -1;
private int deadDatanodes = -1;
private int staleDatanodes = -1;
/** Decommissioning datanodes. */
private int decomDatanodes = -1;
/** Live decommissioned datanodes. */
@ -49,6 +50,15 @@ public class NamenodeStatusReport {
/** Dead decommissioned datanodes. */
private int deadDecomDatanodes = -1;
/** Live in maintenance datanodes. */
private int inMaintenanceLiveDataNodes = -1;
/** Dead in maintenance datanodes. */
private int inMaintenanceDeadDataNodes = -1;
/** Entering maintenance datanodes. */
private int enteringMaintenanceDataNodes = -1;
/** Space stats. */
private long availableSpace = -1;
private long numOfFiles = -1;
@ -223,17 +233,27 @@ public class NamenodeStatusReport {
*
* @param numLive Number of live nodes.
* @param numDead Number of dead nodes.
* @param numStale Number of stale nodes.
* @param numDecom Number of decommissioning nodes.
* @param numLiveDecom Number of decommissioned live nodes.
* @param numDeadDecom Number of decommissioned dead nodes.
* @param numInMaintenanceLive Number of in maintenance live nodes.
* @param numInMaintenanceDead Number of in maintenance dead nodes.
* @param numEnteringMaintenance Number of entering maintenance nodes.
*/
public void setDatanodeInfo(int numLive, int numDead, int numDecom,
int numLiveDecom, int numDeadDecom) {
public void setDatanodeInfo(int numLive, int numDead, int numStale,
int numDecom, int numLiveDecom, int numDeadDecom,
int numInMaintenanceLive, int numInMaintenanceDead,
int numEnteringMaintenance) {
this.liveDatanodes = numLive;
this.deadDatanodes = numDead;
this.staleDatanodes = numStale;
this.decomDatanodes = numDecom;
this.liveDecomDatanodes = numLiveDecom;
this.deadDecomDatanodes = numDeadDecom;
this.inMaintenanceLiveDataNodes = numInMaintenanceLive;
this.inMaintenanceDeadDataNodes = numInMaintenanceDead;
this.enteringMaintenanceDataNodes = numEnteringMaintenance;
this.statsValid = true;
}
@ -247,7 +267,7 @@ public class NamenodeStatusReport {
}
/**
* Get the number of dead blocks.
* Get the number of dead nodes.
*
* @return The number of dead nodes.
*/
@ -255,6 +275,15 @@ public class NamenodeStatusReport {
return this.deadDatanodes;
}
/**
* Get the number of stale nodes.
*
* @return The number of stale nodes.
*/
public int getNumStaleDatanodes() {
return this.staleDatanodes;
}
/**
* Get the number of decommissioning nodes.
*
@ -282,6 +311,33 @@ public class NamenodeStatusReport {
return this.deadDecomDatanodes;
}
/**
* Get the number of live in maintenance nodes.
*
* @return The number of live in maintenance nodes.
*/
public int getNumInMaintenanceLiveDataNodes() {
return this.inMaintenanceLiveDataNodes;
}
/**
* Get the number of dead in maintenance nodes.
*
* @return The number of dead in maintenance nodes.
*/
public int getNumInMaintenanceDeadDataNodes() {
return this.inMaintenanceDeadDataNodes;
}
/**
* Get the number of entering maintenance nodes.
*
* @return The number of entering maintenance nodes.
*/
public int getNumEnteringMaintenanceDataNodes() {
return this.enteringMaintenanceDataNodes;
}
/**
* Set the filesystem information.
*


@ -17,6 +17,8 @@
*/
package org.apache.hadoop.hdfs.server.federation.resolver.order;
import java.util.EnumSet;
/**
* Order of the destinations when we have multiple of them. When the resolver
* of files to subclusters (FileSubclusterResolver) has multiple destinations,
@ -27,5 +29,11 @@ public enum DestinationOrder {
LOCAL, // Local first
RANDOM, // Random order
HASH_ALL, // Follow consistent hashing
SPACE // Available space based order
SPACE; // Available space based order
/** Approaches that write folders in all subclusters. */
public static final EnumSet<DestinationOrder> FOLDER_ALL = EnumSet.of(
HASH_ALL,
RANDOM,
SPACE);
}
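
FOLDER_ALL groups the orders that spread a directory across every subcluster. An illustrative check (not from the patch):

import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;

public class FolderAllSketch {
  public static void main(String[] args) {
    DestinationOrder order = DestinationOrder.HASH_ALL;
    // Orders in FOLDER_ALL create folders in all subclusters, so a caller
    // would issue a mkdir to every destination rather than a single one.
    if (DestinationOrder.FOLDER_ALL.contains(order)) {
      System.out.println(order + " writes folders in all subclusters");
    }
  }
}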


@ -49,10 +49,6 @@ public class ConnectionManager {
private static final Logger LOG =
LoggerFactory.getLogger(ConnectionManager.class);
/** Minimum amount of active connections: 50%. */
protected static final float MIN_ACTIVE_RATIO = 0.5f;
/** Configuration for the connection manager, pool and sockets. */
private final Configuration conf;
@ -60,6 +56,8 @@ public class ConnectionManager {
private final int minSize = 1;
/** Max number of connections per user + nn. */
private final int maxSize;
/** Min ratio of active connections per user + nn. */
private final float minActiveRatio;
/** How often we close a pool for a particular user + nn. */
private final long poolCleanupPeriodMs;
@ -96,10 +94,13 @@ public class ConnectionManager {
public ConnectionManager(Configuration config) {
this.conf = config;
// Configure minimum and maximum connection pools
// Configure minimum, maximum and active connection pools
this.maxSize = this.conf.getInt(
RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
this.minActiveRatio = this.conf.getFloat(
RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
// Map with the connections indexed by UGI and Namenode
this.pools = new HashMap<>();
@ -203,7 +204,8 @@ public class ConnectionManager {
pool = this.pools.get(connectionId);
if (pool == null) {
pool = new ConnectionPool(
this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
this.conf, nnAddress, ugi, this.minSize, this.maxSize,
this.minActiveRatio, protocol);
this.pools.put(connectionId, pool);
}
} finally {
@ -326,8 +328,9 @@ public class ConnectionManager {
long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
int total = pool.getNumConnections();
int active = pool.getNumActiveConnections();
float poolMinActiveRatio = pool.getMinActiveRatio();
if (timeSinceLastActive > connectionCleanupPeriodMs ||
active < MIN_ACTIVE_RATIO * total) {
active < poolMinActiveRatio * total) {
// Remove and close 1 connection
List<ConnectionContext> conns = pool.removeConnections(1);
for (ConnectionContext conn : conns) {
@ -393,7 +396,7 @@ public class ConnectionManager {
/**
* Thread that creates connections asynchronously.
*/
private static class ConnectionCreator extends Thread {
static class ConnectionCreator extends Thread {
/** If the creator is running. */
private boolean running = true;
/** Queue to push work to. */
@ -412,8 +415,9 @@ public class ConnectionManager {
try {
int total = pool.getNumConnections();
int active = pool.getNumActiveConnections();
float poolMinActiveRatio = pool.getMinActiveRatio();
if (pool.getNumConnections() < pool.getMaxSize() &&
active >= MIN_ACTIVE_RATIO * total) {
active >= poolMinActiveRatio * total) {
ConnectionContext conn = pool.newConnection();
pool.addConnection(conn);
} else {
@ -426,6 +430,8 @@ public class ConnectionManager {
} catch (InterruptedException e) {
LOG.error("The connection creator was interrupted");
this.running = false;
} catch (Throwable e) {
LOG.error("Fatal error caught by connection creator ", e);
}
}
}
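
The hard-coded MIN_ACTIVE_RATIO becomes configurable per pool. A hedged sketch of setting it before building a ConnectionManager; the key constant is the one read in the constructor above, and the 0.8 value is only an example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.router.ConnectionManager;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

public class MinActiveRatioSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Only trim a pool when fewer than 80% of its connections are active
    // (the default ratio is assumed to stay at the previous 0.5).
    conf.setFloat(
        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, 0.8f);
    ConnectionManager manager = new ConnectionManager(conf);
    // The router RPC client normally drives start()/close(); not shown here.
    System.out.println("Connection manager created: " + manager);
  }
}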


@ -0,0 +1,33 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
/**
* Exception thrown when a non-null connection cannot be obtained.
*/
public class ConnectionNullException extends IOException {
private static final long serialVersionUID = 1L;
public ConnectionNullException(String msg) {
super(msg);
}
}


@ -18,8 +18,10 @@
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
@ -48,9 +50,15 @@ import org.apache.hadoop.io.retry.RetryUtils;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.RefreshUserMappingsProtocol;
import org.apache.hadoop.security.SaslRpcServer;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolClientSideTranslatorPB;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolPB;
import org.apache.hadoop.tools.GetUserMappingsProtocol;
import org.apache.hadoop.tools.protocolPB.GetUserMappingsProtocolClientSideTranslatorPB;
import org.apache.hadoop.tools.protocolPB.GetUserMappingsProtocolPB;
import org.apache.hadoop.util.Time;
import org.eclipse.jetty.util.ajax.JSON;
import org.slf4j.Logger;
@ -91,14 +99,42 @@ public class ConnectionPool {
private final int minSize;
/** Max number of connections per user. */
private final int maxSize;
/** Min ratio of active connections per user. */
private final float minActiveRatio;
/** The last time a connection was active. */
private volatile long lastActiveTime = 0;
/** Map for the protocols and their protobuf implementations. */
private final static Map<Class<?>, ProtoImpl> PROTO_MAP = new HashMap<>();
static {
PROTO_MAP.put(ClientProtocol.class,
new ProtoImpl(ClientNamenodeProtocolPB.class,
ClientNamenodeProtocolTranslatorPB.class));
PROTO_MAP.put(NamenodeProtocol.class, new ProtoImpl(
NamenodeProtocolPB.class, NamenodeProtocolTranslatorPB.class));
PROTO_MAP.put(RefreshUserMappingsProtocol.class,
new ProtoImpl(RefreshUserMappingsProtocolPB.class,
RefreshUserMappingsProtocolClientSideTranslatorPB.class));
PROTO_MAP.put(GetUserMappingsProtocol.class,
new ProtoImpl(GetUserMappingsProtocolPB.class,
GetUserMappingsProtocolClientSideTranslatorPB.class));
}
/** Class to store the protocol implementation. */
private static class ProtoImpl {
private final Class<?> protoPb;
private final Class<?> protoClientPb;
ProtoImpl(Class<?> pPb, Class<?> pClientPb) {
this.protoPb = pPb;
this.protoClientPb = pClientPb;
}
}
protected ConnectionPool(Configuration config, String address,
UserGroupInformation user, int minPoolSize, int maxPoolSize,
Class<?> proto) throws IOException {
float minActiveRatio, Class<?> proto) throws IOException {
this.conf = config;
@ -112,6 +148,7 @@ public class ConnectionPool {
// Set configuration parameters for the pool
this.minSize = minPoolSize;
this.maxSize = maxPoolSize;
this.minActiveRatio = minActiveRatio;
// Add minimum connections to the pool
for (int i=0; i<this.minSize; i++) {
@ -140,6 +177,15 @@ public class ConnectionPool {
return this.minSize;
}
/**
* Get the minimum ratio of active connections in this pool.
*
* @return Minimum ratio of active connections.
*/
protected float getMinActiveRatio() {
return this.minActiveRatio;
}
/**
* Get the connection pool identifier.
*
@ -313,6 +359,7 @@ public class ConnectionPool {
* context for a single user/security context. To maximize throughput it is
* recommended to use multiple connections per user+server, allowing multiple
* writes and reads to be dispatched in parallel.
* @param <T> Protocol type for the proxied connection.
*
* @param conf Configuration for the connection.
* @param nnAddress Address of server supporting the ClientProtocol.
@ -322,47 +369,19 @@ public class ConnectionPool {
* security context.
* @throws IOException If it cannot be created.
*/
protected static ConnectionContext newConnection(Configuration conf,
String nnAddress, UserGroupInformation ugi, Class<?> proto)
throws IOException {
ConnectionContext ret;
if (proto == ClientProtocol.class) {
ret = newClientConnection(conf, nnAddress, ugi);
} else if (proto == NamenodeProtocol.class) {
ret = newNamenodeConnection(conf, nnAddress, ugi);
} else {
String msg = "Unsupported protocol for connection to NameNode: " +
((proto != null) ? proto.getClass().getName() : "null");
protected static <T> ConnectionContext newConnection(Configuration conf,
String nnAddress, UserGroupInformation ugi, Class<T> proto)
throws IOException {
if (!PROTO_MAP.containsKey(proto)) {
String msg = "Unsupported protocol for connection to NameNode: "
+ ((proto != null) ? proto.getName() : "null");
LOG.error(msg);
throw new IllegalStateException(msg);
}
return ret;
}
ProtoImpl classes = PROTO_MAP.get(proto);
RPC.setProtocolEngine(conf, classes.protoPb, ProtobufRpcEngine.class);
/**
* Creates a proxy wrapper for a client NN connection. Each proxy contains
* context for a single user/security context. To maximize throughput it is
* recommended to use multiple connection per user+server, allowing multiple
* writes and reads to be dispatched in parallel.
*
* Mostly based on NameNodeProxies#createNonHAProxy() but it needs the
* connection identifier.
*
* @param conf Configuration for the connection.
* @param nnAddress Address of server supporting the ClientProtocol.
* @param ugi User context.
* @return Proxy for the target ClientProtocol that contains the user's
* security context.
* @throws IOException If it cannot be created.
*/
private static ConnectionContext newClientConnection(
Configuration conf, String nnAddress, UserGroupInformation ugi)
throws IOException {
RPC.setProtocolEngine(
conf, ClientNamenodeProtocolPB.class, ProtobufRpcEngine.class);
final RetryPolicy defaultPolicy = RetryUtils.getDefaultRetryPolicy(
conf,
final RetryPolicy defaultPolicy = RetryUtils.getDefaultRetryPolicy(conf,
HdfsClientConfigKeys.Retry.POLICY_ENABLED_KEY,
HdfsClientConfigKeys.Retry.POLICY_ENABLED_DEFAULT,
HdfsClientConfigKeys.Retry.POLICY_SPEC_KEY,
@ -374,61 +393,32 @@ public class ConnectionPool {
SaslRpcServer.init(conf);
}
InetSocketAddress socket = NetUtils.createSocketAddr(nnAddress);
final long version = RPC.getProtocolVersion(ClientNamenodeProtocolPB.class);
ClientNamenodeProtocolPB proxy = RPC.getProtocolProxy(
ClientNamenodeProtocolPB.class, version, socket, ugi, conf,
factory, RPC.getRpcTimeout(conf), defaultPolicy, null).getProxy();
ClientProtocol client = new ClientNamenodeProtocolTranslatorPB(proxy);
final long version = RPC.getProtocolVersion(classes.protoPb);
Object proxy = RPC.getProtocolProxy(classes.protoPb, version, socket, ugi,
conf, factory, RPC.getRpcTimeout(conf), defaultPolicy, null).getProxy();
T client = newProtoClient(proto, classes, proxy);
Text dtService = SecurityUtil.buildTokenService(socket);
ProxyAndInfo<ClientProtocol> clientProxy =
new ProxyAndInfo<ClientProtocol>(client, dtService, socket);
ProxyAndInfo<T> clientProxy =
new ProxyAndInfo<T>(client, dtService, socket);
ConnectionContext connection = new ConnectionContext(clientProxy);
return connection;
}
/**
* Creates a proxy wrapper for a NN connection. Each proxy contains context
* for a single user/security context. To maximize throughput it is
* recommended to use multiple connection per user+server, allowing multiple
* writes and reads to be dispatched in parallel.
*
* @param conf Configuration for the connection.
* @param nnAddress Address of server supporting the ClientProtocol.
* @param ugi User context.
* @return Proxy for the target NamenodeProtocol that contains the user's
* security context.
* @throws IOException If it cannot be created.
*/
private static ConnectionContext newNamenodeConnection(
Configuration conf, String nnAddress, UserGroupInformation ugi)
throws IOException {
RPC.setProtocolEngine(
conf, NamenodeProtocolPB.class, ProtobufRpcEngine.class);
final RetryPolicy defaultPolicy = RetryUtils.getDefaultRetryPolicy(
conf,
HdfsClientConfigKeys.Retry.POLICY_ENABLED_KEY,
HdfsClientConfigKeys.Retry.POLICY_ENABLED_DEFAULT,
HdfsClientConfigKeys.Retry.POLICY_SPEC_KEY,
HdfsClientConfigKeys.Retry.POLICY_SPEC_DEFAULT,
HdfsConstants.SAFEMODE_EXCEPTION_CLASS_NAME);
SocketFactory factory = SocketFactory.getDefault();
if (UserGroupInformation.isSecurityEnabled()) {
SaslRpcServer.init(conf);
private static <T> T newProtoClient(Class<T> proto, ProtoImpl classes,
Object proxy) {
try {
Constructor<?> constructor =
classes.protoClientPb.getConstructor(classes.protoPb);
Object o = constructor.newInstance(new Object[] {proxy});
if (proto.isAssignableFrom(o.getClass())) {
@SuppressWarnings("unchecked")
T client = (T) o;
return client;
}
} catch (Exception e) {
LOG.error("Failed to create protocol client for {}", proto, e);
}
InetSocketAddress socket = NetUtils.createSocketAddr(nnAddress);
final long version = RPC.getProtocolVersion(NamenodeProtocolPB.class);
NamenodeProtocolPB proxy = RPC.getProtocolProxy(NamenodeProtocolPB.class,
version, socket, ugi, conf,
factory, RPC.getRpcTimeout(conf), defaultPolicy, null).getProxy();
NamenodeProtocol client = new NamenodeProtocolTranslatorPB(proxy);
Text dtService = SecurityUtil.buildTokenService(socket);
ProxyAndInfo<NamenodeProtocol> clientProxy =
new ProxyAndInfo<NamenodeProtocol>(client, dtService, socket);
ConnectionContext connection = new ConnectionContext(clientProxy);
return connection;
return null;
}
}
}


@ -140,7 +140,7 @@ public class ErasureCoding {
rpcServer.checkOperation(OperationCategory.READ);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(src, true);
rpcServer.getLocationsForPath(src, false, false);
RemoteMethod remoteMethod = new RemoteMethod("getErasureCodingPolicy",
new Class<?>[] {String.class}, new RemoteParam());
ErasureCodingPolicy ret = rpcClient.invokeSequential(
@ -153,21 +153,29 @@ public class ErasureCoding {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(src, true);
rpcServer.getLocationsForPath(src, false, false);
RemoteMethod remoteMethod = new RemoteMethod("setErasureCodingPolicy",
new Class<?>[] {String.class, String.class},
new RemoteParam(), ecPolicyName);
rpcClient.invokeSequential(locations, remoteMethod, null, null);
if (rpcServer.isInvokeConcurrent(src)) {
rpcClient.invokeConcurrent(locations, remoteMethod);
} else {
rpcClient.invokeSequential(locations, remoteMethod);
}
}
public void unsetErasureCodingPolicy(String src) throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(src, true);
rpcServer.getLocationsForPath(src, false, false);
RemoteMethod remoteMethod = new RemoteMethod("unsetErasureCodingPolicy",
new Class<?>[] {String.class}, new RemoteParam());
rpcClient.invokeSequential(locations, remoteMethod, null, null);
if (rpcServer.isInvokeConcurrent(src)) {
rpcClient.invokeConcurrent(locations, remoteMethod);
} else {
rpcClient.invokeSequential(locations, remoteMethod);
}
}
public ECBlockGroupStats getECBlockGroupStats() throws IOException {
@ -179,33 +187,6 @@ public class ErasureCoding {
rpcClient.invokeConcurrent(
nss, method, true, false, ECBlockGroupStats.class);
// Merge the stats from all the namespaces
long lowRedundancyBlockGroups = 0;
long corruptBlockGroups = 0;
long missingBlockGroups = 0;
long bytesInFutureBlockGroups = 0;
long pendingDeletionBlocks = 0;
long highestPriorityLowRedundancyBlocks = 0;
boolean hasHighestPriorityLowRedundancyBlocks = false;
for (ECBlockGroupStats stats : allStats.values()) {
lowRedundancyBlockGroups += stats.getLowRedundancyBlockGroups();
corruptBlockGroups += stats.getCorruptBlockGroups();
missingBlockGroups += stats.getMissingBlockGroups();
bytesInFutureBlockGroups += stats.getBytesInFutureBlockGroups();
pendingDeletionBlocks += stats.getPendingDeletionBlocks();
if (stats.hasHighestPriorityLowRedundancyBlocks()) {
hasHighestPriorityLowRedundancyBlocks = true;
highestPriorityLowRedundancyBlocks +=
stats.getHighestPriorityLowRedundancyBlocks();
}
}
if (hasHighestPriorityLowRedundancyBlocks) {
return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks,
highestPriorityLowRedundancyBlocks);
}
return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks);
return ECBlockGroupStats.merge(allStats.values());
}
}
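
The hand-written aggregation loop is replaced by ECBlockGroupStats.merge(). A small sketch of what the merge does, with made-up per-namespace numbers:

import java.util.Arrays;
import org.apache.hadoop.hdfs.protocol.ECBlockGroupStats;

public class EcStatsMergeSketch {
  public static void main(String[] args) {
    // Stats from two namespaces (example values); merge() sums the counters
    // that the removed loop used to accumulate by hand.
    ECBlockGroupStats ns0 = new ECBlockGroupStats(1, 0, 2, 0, 5);
    ECBlockGroupStats ns1 = new ECBlockGroupStats(3, 1, 0, 0, 2);
    ECBlockGroupStats merged = ECBlockGroupStats.merge(Arrays.asList(ns0, ns1));
    System.out.println(merged);
  }
}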


@ -27,9 +27,12 @@ import java.net.URLConnection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
import org.apache.hadoop.hdfs.web.URLConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.VersionInfo;
import org.codehaus.jettison.json.JSONArray;
import org.codehaus.jettison.json.JSONException;
@ -54,9 +57,12 @@ public final class FederationUtil {
*
* @param beanQuery JMX bean query.
* @param webAddress Web address of the JMX endpoint.
* @param connectionFactory Factory to open the HTTP/HTTPS connection.
* @param scheme URL scheme to use for the connection.
* @return JSON with the JMX data
*/
public static JSONArray getJmx(String beanQuery, String webAddress) {
public static JSONArray getJmx(String beanQuery, String webAddress,
URLConnectionFactory connectionFactory, String scheme) {
JSONArray ret = null;
BufferedReader reader = null;
try {
@ -67,8 +73,11 @@ public final class FederationUtil {
host = webAddressSplit[0];
port = Integer.parseInt(webAddressSplit[1]);
}
URL jmxURL = new URL("http", host, port, "/jmx?qry=" + beanQuery);
URLConnection conn = jmxURL.openConnection();
URL jmxURL = new URL(scheme, host, port, "/jmx?qry=" + beanQuery);
LOG.debug("JMX URL: {}", jmxURL);
// Create a URL connection
URLConnection conn = connectionFactory.openConnection(
jmxURL, UserGroupInformation.isSecurityEnabled());
conn.setConnectTimeout(5 * 1000);
conn.setReadTimeout(5 * 1000);
InputStream in = conn.getInputStream();
@ -205,4 +214,24 @@ public final class FederationUtil {
return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
|| parent.equals(Path.SEPARATOR);
}
}
/**
* Add the number of children to an existing HdfsFileStatus object.
* @param dirStatus HdfsFileStatus object.
* @param children number of children to be added.
* @return HdfsFileStatus with the number of children specified.
*/
public static HdfsFileStatus updateMountPointStatus(HdfsFileStatus dirStatus,
int children) {
return new HdfsFileStatus.Builder().atime(dirStatus.getAccessTime())
.blocksize(dirStatus.getBlockSize()).children(children)
.ecPolicy(dirStatus.getErasureCodingPolicy())
.feInfo(dirStatus.getFileEncryptionInfo()).fileId(dirStatus.getFileId())
.group(dirStatus.getGroup()).isdir(dirStatus.isDir())
.length(dirStatus.getLen()).mtime(dirStatus.getModificationTime())
.owner(dirStatus.getOwner()).path(dirStatus.getLocalNameInBytes())
.perm(dirStatus.getPermission()).replication(dirStatus.getReplication())
.storagePolicy(dirStatus.getStoragePolicy())
.symlink(dirStatus.getSymlinkInBytes()).build();
}
}
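
A hedged usage sketch of the extended getJmx() signature: the caller now supplies the connection factory and the scheme instead of the method hard-coding plain HTTP. The namenode address is an example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.router.FederationUtil;
import org.apache.hadoop.hdfs.web.URLConnectionFactory;
import org.codehaus.jettison.json.JSONArray;

public class JmxQuerySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    URLConnectionFactory factory =
        URLConnectionFactory.newDefaultURLConnectionFactory(conf);
    // Query FSNamesystem beans over HTTPS from a (hypothetical) namenode.
    JSONArray beans = FederationUtil.getJmx(
        "Hadoop:service=NameNode,name=FSNamesystem*",
        "nn0.example.com:9871", factory, "https");
    System.out.println(beans);
  }
}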


@ -0,0 +1,289 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils;
import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.service.AbstractService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
/**
* This service is invoked from {@link MountTableStore} when there is a change
* in the mount table entries, and it updates the mount table entry cache on
* the local router as well as on all remote routers. The refresh on the local
* router is done by calling the {@link MountTableStore#loadCache(boolean)} API
* directly, with no RPC call involved; on remote routers the refresh is done
* through a RouterClient (RPC call). To improve performance, each router is
* refreshed in a separate thread and all connections are cached. Cached
* connections are removed from the cache and closed when their maximum live
* time has elapsed.
*/
public class MountTableRefresherService extends AbstractService {
private static final String ROUTER_CONNECT_ERROR_MSG =
"Router {} connection failed. Mount table cache will not refesh.";
private static final Logger LOG =
LoggerFactory.getLogger(MountTableRefresherService.class);
/** Local router. */
private final Router router;
/** Mount table store. */
private MountTableStore mountTableStore;
/** Local router admin address in the form of host:port. */
private String localAdminAdress;
/** Timeout in ms to update mount table cache on all the routers. */
private long cacheUpdateTimeout;
/**
* Cache of all router admin clients, so there is no need to create a client
* again and again. The router admin address (host:port) is used as the key
* for cached RouterClient objects.
*/
private LoadingCache<String, RouterClient> routerClientsCache;
/**
* Removes expired RouterClient from routerClientsCache.
*/
private ScheduledExecutorService clientCacheCleanerScheduler;
/**
* Create a new service to refresh the mount table cache when there is a
* change in the mount table entries.
*
* @param router Router whose mount table cache will be refreshed.
*/
public MountTableRefresherService(Router router) {
super(MountTableRefresherService.class.getSimpleName());
this.router = router;
}
@Override
protected void serviceInit(Configuration conf) throws Exception {
super.serviceInit(conf);
this.mountTableStore = getMountTableStore();
// attach this service to mount table store.
this.mountTableStore.setRefreshService(this);
this.localAdminAdress =
StateStoreUtils.getHostPortString(router.getAdminServerAddress());
this.cacheUpdateTimeout = conf.getTimeDuration(
RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT,
RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT_DEFAULT,
TimeUnit.MILLISECONDS);
long routerClientMaxLiveTime = conf.getTimeDuration(
RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME,
RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME_DEFAULT,
TimeUnit.MILLISECONDS);
routerClientsCache = CacheBuilder.newBuilder()
.expireAfterWrite(routerClientMaxLiveTime, TimeUnit.MILLISECONDS)
.removalListener(getClientRemover()).build(getClientCreator());
initClientCacheCleaner(routerClientMaxLiveTime);
}
private void initClientCacheCleaner(long routerClientMaxLiveTime) {
clientCacheCleanerScheduler =
Executors.newSingleThreadScheduledExecutor(new ThreadFactoryBuilder()
.setNameFormat("MountTableRefresh_ClientsCacheCleaner")
.setDaemon(true).build());
/*
* When cleanUp() method is called, expired RouterClient will be removed and
* closed.
*/
clientCacheCleanerScheduler.scheduleWithFixedDelay(
() -> routerClientsCache.cleanUp(), routerClientMaxLiveTime,
routerClientMaxLiveTime, TimeUnit.MILLISECONDS);
}
/**
* Create cache entry remove listener.
*/
private RemovalListener<String, RouterClient> getClientRemover() {
return new RemovalListener<String, RouterClient>() {
@Override
public void onRemoval(
RemovalNotification<String, RouterClient> notification) {
closeRouterClient(notification.getValue());
}
};
}
@VisibleForTesting
protected void closeRouterClient(RouterClient client) {
try {
client.close();
} catch (IOException e) {
LOG.error("Error while closing RouterClient", e);
}
}
/**
* Creates RouterClient and caches it.
*/
private CacheLoader<String, RouterClient> getClientCreator() {
return new CacheLoader<String, RouterClient>() {
public RouterClient load(String adminAddress) throws IOException {
InetSocketAddress routerSocket =
NetUtils.createSocketAddr(adminAddress);
Configuration config = getConfig();
return createRouterClient(routerSocket, config);
}
};
}
@VisibleForTesting
protected RouterClient createRouterClient(InetSocketAddress routerSocket,
Configuration config) throws IOException {
return new RouterClient(routerSocket, config);
}
@Override
protected void serviceStart() throws Exception {
super.serviceStart();
}
@Override
protected void serviceStop() throws Exception {
super.serviceStop();
clientCacheCleanerScheduler.shutdown();
// remove and close all admin clients
routerClientsCache.invalidateAll();
}
private MountTableStore getMountTableStore() throws IOException {
MountTableStore mountTblStore =
router.getStateStore().getRegisteredRecordStore(MountTableStore.class);
if (mountTblStore == null) {
throw new IOException("Mount table state store is not available.");
}
return mountTblStore;
}
/**
* Refresh mount table cache of this router as well as all other routers.
*/
public void refresh() throws StateStoreUnavailableException {
List<RouterState> cachedRecords =
router.getRouterStateManager().getCachedRecords();
List<MountTableRefresherThread> refreshThreads = new ArrayList<>();
for (RouterState routerState : cachedRecords) {
String adminAddress = routerState.getAdminAddress();
if (adminAddress == null || adminAddress.length() == 0) {
// this router has not enabled router admin
continue;
}
// No use calling refresh on a router which is not in the running state
if (routerState.getStatus() != RouterServiceState.RUNNING) {
LOG.info("Router {} is not running. Mount table cache will not refresh.",
adminAddress);
// remove if RouterClient is cached.
removeFromCache(adminAddress);
} else if (isLocalAdmin(adminAddress)) {
/*
* The local router's cache update does not require an RPC call, so no
* RouterClient is needed.
*/
refreshThreads.add(getLocalRefresher(adminAddress));
} else {
try {
RouterClient client = routerClientsCache.get(adminAddress);
refreshThreads.add(new MountTableRefresherThread(
client.getMountTableManager(), adminAddress));
} catch (ExecutionException execExcep) {
// Cannot connect; the router seems to be stopped.
LOG.warn(ROUTER_CONNECT_ERROR_MSG, adminAddress, execExcep);
}
}
}
if (!refreshThreads.isEmpty()) {
invokeRefresh(refreshThreads);
}
}
@VisibleForTesting
protected MountTableRefresherThread getLocalRefresher(String adminAddress) {
return new MountTableRefresherThread(router.getAdminServer(), adminAddress);
}
private void removeFromCache(String adminAddress) {
routerClientsCache.invalidate(adminAddress);
}
private void invokeRefresh(List<MountTableRefresherThread> refreshThreads) {
CountDownLatch countDownLatch = new CountDownLatch(refreshThreads.size());
// start all the threads
for (MountTableRefresherThread refThread : refreshThreads) {
refThread.setCountDownLatch(countDownLatch);
refThread.start();
}
try {
/*
* Wait for all the threads to complete; await() returns false if the
* refresh does not finish within the specified time.
*/
boolean allReqCompleted =
countDownLatch.await(cacheUpdateTimeout, TimeUnit.MILLISECONDS);
if (!allReqCompleted) {
LOG.warn("Not all router admins updated their cache");
}
} catch (InterruptedException e) {
LOG.error("Mount table cache refresher was interrupted.", e);
}
logResult(refreshThreads);
}
private boolean isLocalAdmin(String adminAddress) {
return adminAddress.contentEquals(localAdminAdress);
}
private void logResult(List<MountTableRefresherThread> refreshThreads) {
int successCount = 0;
int failureCount = 0;
for (MountTableRefresherThread mountTableRefreshThread : refreshThreads) {
if (mountTableRefreshThread.isSuccess()) {
successCount++;
} else {
failureCount++;
// Remove the RouterClient from the cache so that a new client is created.
removeFromCache(mountTableRefreshThread.getAdminAddress());
}
}
LOG.info("Mount table entries cache refresh successCount={}, failureCount={}",
successCount, failureCount);
}
}
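
A hedged configuration sketch for the refresher: the *_TIMEOUT and *_CLIENT_MAX_TIME keys are the ones read in serviceInit() above, while the enabling key (MOUNT_TABLE_CACHE_UPDATE) is an assumption; the values are only examples.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

public class RefresherConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed enabling flag for the refresher service.
    conf.setBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE, true);
    // Give slow routers up to one minute to acknowledge a refresh.
    conf.setTimeDuration(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT,
        1, TimeUnit.MINUTES);
    // Recycle cached RouterClients after five minutes.
    conf.setTimeDuration(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME,
        5, TimeUnit.MINUTES);
    System.out.println("Mount table cache refresh enabled: "
        + conf.getBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE, false));
  }
}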


@ -0,0 +1,96 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Base class for updating the mount table cache on all the routers.
*/
public class MountTableRefresherThread extends Thread {
private static final Logger LOG =
LoggerFactory.getLogger(MountTableRefresherThread.class);
private boolean success;
/** Admin server on which the refresh is to be invoked. */
private String adminAddress;
private CountDownLatch countDownLatch;
private MountTableManager manager;
public MountTableRefresherThread(MountTableManager manager,
String adminAddress) {
this.manager = manager;
this.adminAddress = adminAddress;
setName("MountTableRefresh_" + adminAddress);
setDaemon(true);
}
/**
* Refresh the mount table cache of the local and remote routers. Local and
* remote routers are refreshed differently. Suppose there are three routers
* R1, R2 and R3, and a user wants to add a new mount table entry. The user
* connects to only one router, not all of them; say the entry is added
* through the API or CLI on R1. In this context R1 is the local router and
* R2 and R3 are remote routers. Because the add is invoked on R1, R1 updates
* its cache locally and does not need an RPC call, while R1 makes RPC calls
* to update the cache on R2 and R3.
*/
@Override
public void run() {
try {
RefreshMountTableEntriesResponse refreshMountTableEntries =
manager.refreshMountTableEntries(
RefreshMountTableEntriesRequest.newInstance());
success = refreshMountTableEntries.getResult();
} catch (IOException e) {
LOG.error("Failed to refresh mount table entries cache at router {}",
adminAddress, e);
} finally {
countDownLatch.countDown();
}
}
/**
* @return true if cache was refreshed successfully.
*/
public boolean isSuccess() {
return success;
}
public void setCountDownLatch(CountDownLatch countDownLatch) {
this.countDownLatch = countDownLatch;
}
@Override
public String toString() {
return "MountTableRefreshThread [success=" + success + ", adminAddress="
+ adminAddress + "]";
}
public String getAdminAddress() {
return adminAddress;
}
}


@ -38,7 +38,9 @@ import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
import org.apache.hadoop.hdfs.tools.DFSHAAdmin;
import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;
import org.apache.hadoop.hdfs.web.URLConnectionFactory;
import org.codehaus.jettison.json.JSONArray;
import org.codehaus.jettison.json.JSONObject;
import org.slf4j.Logger;
@ -85,7 +87,10 @@ public class NamenodeHeartbeatService extends PeriodicService {
private String lifelineAddress;
/** HTTP address for the namenode. */
private String webAddress;
/** Connection factory for JMX calls. */
private URLConnectionFactory connectionFactory;
/** URL scheme to use for JMX calls. */
private String scheme;
/**
* Create a new Namenode status updater.
* @param resolver Namenode resolver service to handle NN registration.
@ -108,7 +113,7 @@ public class NamenodeHeartbeatService extends PeriodicService {
@Override
protected void serviceInit(Configuration configuration) throws Exception {
this.conf = configuration;
this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
String nnDesc = nameserviceId;
if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
@ -146,6 +151,12 @@ public class NamenodeHeartbeatService extends PeriodicService {
DFSUtil.getNamenodeWebAddr(conf, nameserviceId, namenodeId);
LOG.info("{} Web address: {}", nnDesc, webAddress);
this.connectionFactory =
URLConnectionFactory.newDefaultURLConnectionFactory(conf);
this.scheme =
DFSUtil.getHttpPolicy(conf).isHttpEnabled() ? "http" : "https";
this.setIntervalMs(conf.getLong(
DFS_ROUTER_HEARTBEAT_INTERVAL_MS,
DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT));
@ -328,7 +339,8 @@ public class NamenodeHeartbeatService extends PeriodicService {
try {
// TODO part of this should be moved to its own utility
String query = "Hadoop:service=NameNode,name=FSNamesystem*";
JSONArray aux = FederationUtil.getJmx(query, address);
JSONArray aux = FederationUtil.getJmx(
query, address, connectionFactory, scheme);
if (aux != null) {
for (int i = 0; i < aux.length(); i++) {
JSONObject jsonObject = aux.getJSONObject(i);
@ -337,9 +349,13 @@ public class NamenodeHeartbeatService extends PeriodicService {
report.setDatanodeInfo(
jsonObject.getInt("NumLiveDataNodes"),
jsonObject.getInt("NumDeadDataNodes"),
jsonObject.getInt("NumStaleDataNodes"),
jsonObject.getInt("NumDecommissioningDataNodes"),
jsonObject.getInt("NumDecomLiveDataNodes"),
jsonObject.getInt("NumDecomDeadDataNodes"));
jsonObject.getInt("NumDecomDeadDataNodes"),
jsonObject.optInt("NumInMaintenanceLiveDataNodes"),
jsonObject.optInt("NumInMaintenanceDeadDataNodes"),
jsonObject.optInt("NumEnteringMaintenanceDataNodes"));
} else if (name.equals(
"Hadoop:service=NameNode,name=FSNamesystem")) {
report.setNamesystemInfo(
@ -351,7 +367,7 @@ public class NamenodeHeartbeatService extends PeriodicService {
jsonObject.getLong("PendingReplicationBlocks"),
jsonObject.getLong("UnderReplicatedBlocks"),
jsonObject.getLong("PendingDeletionBlocks"),
jsonObject.getLong("ProvidedCapacityTotal"));
jsonObject.optLong("ProvidedCapacityTotal"));
}
}
}
@ -359,4 +375,14 @@ public class NamenodeHeartbeatService extends PeriodicService {
LOG.error("Cannot get stat from {} using JMX", getNamenodeDesc(), e);
}
}
}
@Override
protected void serviceStop() throws Exception {
LOG.info("Stopping NamenodeHeartbeat service for, NS {} NN {} ",
this.nameserviceId, this.namenodeId);
if (this.connectionFactory != null) {
this.connectionFactory.destroy();
}
super.serviceStop();
}
}
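For context on the getInt/optInt and getLong/optLong switch above: the maintenance-related attributes and ProvidedCapacityTotal are not exposed by older NameNodes (the commit message mentions releases before 2.9.0), and jettison's strict getters throw in that case while the opt variants return 0. A small illustration, separate from the heartbeat service itself:
// Illustration only: strict vs lenient accessors on a bean missing newer fields.
JSONObject bean = new JSONObject("{\"NumLiveDataNodes\": 3}");
int live = bean.getInt("NumLiveDataNodes");                 // 3
int inMaint = bean.optInt("NumInMaintenanceLiveDataNodes"); // 0, key is absent
// bean.getInt("NumInMaintenanceLiveDataNodes");            // would throw JSONException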

View File

@ -0,0 +1,33 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
/**
* Exception when no namenodes are available.
*/
public class NoNamenodesAvailableException extends IOException {
private static final long serialVersionUID = 1L;
public NoNamenodesAvailableException(String nsId, IOException ioe) {
super("No namenodes available under nameservice " + nsId, ioe);
}
}
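A caller-side sketch of distinguishing this failure from other remote errors; the proxy, path and logger below are placeholder names:
// Sketch only: reacting to an entire nameservice being unavailable.
try {
  clientProtocol.getFileInfo(src);   // placeholder proxy and source path
} catch (NoNamenodesAvailableException e) {
  // Every known namenode of the nameservice was unreachable or standby.
  LOG.warn("Nameservice unavailable, the caller may retry later", e);
}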

View File

@ -163,7 +163,7 @@ public class Quota {
long ssCount = 0;
long nsQuota = HdfsConstants.QUOTA_RESET;
long ssQuota = HdfsConstants.QUOTA_RESET;
boolean hasQuotaUnSet = false;
boolean hasQuotaUnset = false;
for (Map.Entry<RemoteLocation, QuotaUsage> entry : results.entrySet()) {
RemoteLocation loc = entry.getKey();
@ -172,7 +172,7 @@ public class Quota {
// If quota is not set in real FileSystem, the usage
// value will return -1.
if (usage.getQuota() == -1 && usage.getSpaceQuota() == -1) {
hasQuotaUnSet = true;
hasQuotaUnset = true;
}
nsQuota = usage.getQuota();
ssQuota = usage.getSpaceQuota();
@ -189,7 +189,7 @@ public class Quota {
QuotaUsage.Builder builder = new QuotaUsage.Builder()
.fileAndDirectoryCount(nsCount).spaceConsumed(ssCount);
if (hasQuotaUnSet) {
if (hasQuotaUnset) {
builder.quota(HdfsConstants.QUOTA_RESET)
.spaceQuota(HdfsConstants.QUOTA_RESET);
} else {
@ -213,9 +213,15 @@ public class Quota {
if (manager != null) {
Set<String> childrenPaths = manager.getPaths(path);
for (String childPath : childrenPaths) {
locations.addAll(rpcServer.getLocationsForPath(childPath, true, false));
locations.addAll(
rpcServer.getLocationsForPath(childPath, false, false));
}
}
return locations;
if (locations.size() >= 1) {
return locations;
} else {
locations.addAll(rpcServer.getLocationsForPath(path, false, false));
return locations;
}
}
}

View File

@ -19,6 +19,7 @@
package org.apache.hadoop.hdfs.server.federation.router;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
@ -28,6 +29,8 @@ import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
import org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl;
import java.util.concurrent.TimeUnit;
@ -37,6 +40,12 @@ import java.util.concurrent.TimeUnit;
@InterfaceAudience.Private
public class RBFConfigKeys extends CommonConfigurationKeysPublic {
public static final String HDFS_RBF_SITE_XML = "hdfs-rbf-site.xml";
static {
Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
}
// HDFS Router-based federation
public static final String FEDERATION_ROUTER_PREFIX =
"dfs.federation.router.";
@ -82,6 +91,8 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
public static final String DFS_ROUTER_HEARTBEAT_ENABLE =
FEDERATION_ROUTER_PREFIX + "heartbeat.enable";
public static final boolean DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT = true;
public static final String DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE =
FEDERATION_ROUTER_PREFIX + "namenode.heartbeat.enable";
public static final String DFS_ROUTER_HEARTBEAT_INTERVAL_MS =
FEDERATION_ROUTER_PREFIX + "heartbeat.interval";
public static final long DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT =
@ -102,6 +113,11 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
FEDERATION_ROUTER_PREFIX + "connection.creator.queue-size";
public static final int
DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT = 100;
public static final String
DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO =
FEDERATION_ROUTER_PREFIX + "connection.min-active-ratio";
public static final float
DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT = 0.5f;
public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
FEDERATION_ROUTER_PREFIX + "connection.pool-size";
public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
@ -125,6 +141,20 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
public static final String DFS_ROUTER_CLIENT_REJECT_OVERLOAD =
FEDERATION_ROUTER_PREFIX + "client.reject.overload";
public static final boolean DFS_ROUTER_CLIENT_REJECT_OVERLOAD_DEFAULT = false;
public static final String DFS_ROUTER_ALLOW_PARTIAL_LIST =
FEDERATION_ROUTER_PREFIX + "client.allow-partial-listing";
public static final boolean DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT = true;
public static final String DFS_ROUTER_CLIENT_MOUNT_TIME_OUT =
FEDERATION_ROUTER_PREFIX + "client.mount-status.time-out";
public static final long DFS_ROUTER_CLIENT_MOUNT_TIME_OUT_DEFAULT =
TimeUnit.SECONDS.toMillis(1);
public static final String DFS_ROUTER_CLIENT_MAX_RETRIES_TIME_OUT =
FEDERATION_ROUTER_PREFIX + "connect.max.retries.on.timeouts";
public static final int DFS_ROUTER_CLIENT_MAX_RETRIES_TIME_OUT_DEFAULT = 0;
public static final String DFS_ROUTER_CLIENT_CONNECT_TIMEOUT =
FEDERATION_ROUTER_PREFIX + "connect.timeout";
public static final long DFS_ROUTER_CLIENT_CONNECT_TIMEOUT_DEFAULT =
TimeUnit.SECONDS.toMillis(2);
// HDFS Router State Store connection
public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =
@ -195,6 +225,31 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
FEDERATION_ROUTER_PREFIX + "mount-table.max-cache-size";
/** Remove cache entries if we have more than 10k. */
public static final int FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT = 10000;
/**
* If true, the cache is updated immediately after a mount table entry is
* changed; otherwise it is refreshed periodically based on the configuration.
*/
public static final String MOUNT_TABLE_CACHE_UPDATE =
FEDERATION_ROUTER_PREFIX + "mount-table.cache.update";
public static final boolean MOUNT_TABLE_CACHE_UPDATE_DEFAULT =
false;
/**
* Timeout to update mount table cache on all the routers.
*/
public static final String MOUNT_TABLE_CACHE_UPDATE_TIMEOUT =
FEDERATION_ROUTER_PREFIX + "mount-table.cache.update.timeout";
public static final long MOUNT_TABLE_CACHE_UPDATE_TIMEOUT_DEFAULT =
TimeUnit.MINUTES.toMillis(1);
/**
* The remote routers' mount table cache is updated through RouterClient
* (RPC calls). To improve performance, RouterClient connections are cached,
* but they should not be kept in the cache forever. This property defines
* the max time a connection can be cached.
*/
public static final String MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME =
FEDERATION_ROUTER_PREFIX + "mount-table.cache.update.client.max.time";
public static final long MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME_DEFAULT =
TimeUnit.MINUTES.toMillis(5);
public static final String FEDERATION_MOUNT_TABLE_CACHE_ENABLE =
FEDERATION_ROUTER_PREFIX + "mount-table.cache.enable";
public static final boolean FEDERATION_MOUNT_TABLE_CACHE_ENABLE_DEFAULT =
@ -233,6 +288,13 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
FEDERATION_ROUTER_PREFIX + "https-bind-host";
public static final String DFS_ROUTER_HTTPS_ADDRESS_DEFAULT =
"0.0.0.0:" + DFS_ROUTER_HTTPS_PORT_DEFAULT;
public static final String DN_REPORT_TIME_OUT =
FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
public static final long DN_REPORT_TIME_OUT_MS_DEFAULT = 1000;
public static final String DN_REPORT_CACHE_EXPIRE =
FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT =
TimeUnit.SECONDS.toMillis(10);
// HDFS Router-based federation quota
public static final String DFS_ROUTER_QUOTA_ENABLE =
@ -242,4 +304,22 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
FEDERATION_ROUTER_PREFIX + "quota-cache.update.interval";
public static final long DFS_ROUTER_QUOTA_CACHE_UPATE_INTERVAL_DEFAULT =
60000;
// HDFS Router security
public static final String DFS_ROUTER_KEYTAB_FILE_KEY =
FEDERATION_ROUTER_PREFIX + "keytab.file";
public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_KEY =
FEDERATION_ROUTER_PREFIX + "kerberos.principal";
public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY =
FEDERATION_ROUTER_PREFIX + "kerberos.principal.hostname";
public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY =
FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
// HDFS Router secret manager for delegation token
public static final String DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS =
FEDERATION_ROUTER_PREFIX + "secret.manager.class";
public static final Class<? extends AbstractDelegationTokenSecretManager>
DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT =
ZKDelegationTokenSecretManagerImpl.class;
}
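Since hdfs-rbf-site.xml is added as a default resource when RBFConfigKeys is loaded, the keys above can be set there or programmatically. A hedged example of a few of the new keys; the values are illustrative only:
// Example values only; any Configuration created after RBFConfigKeys is loaded
// also picks up settings from hdfs-rbf-site.xml.
Configuration conf = new Configuration();
conf.setBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE, true);
conf.setTimeDuration(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT,
    1, TimeUnit.MINUTES);
conf.setTimeDuration(RBFConfigKeys.DFS_ROUTER_CLIENT_CONNECT_TIMEOUT,
    2, TimeUnit.SECONDS);
conf.setBoolean(RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE, true);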

View File

@ -21,6 +21,8 @@ import java.io.IOException;
import java.lang.reflect.Method;
import java.util.Arrays;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -198,9 +200,16 @@ public class RemoteMethod {
for (int i = 0; i < this.params.length; i++) {
Object currentObj = this.params[i];
if (currentObj instanceof RemoteParam) {
// Map the parameter using the context
RemoteParam paramGetter = (RemoteParam) currentObj;
objList[i] = paramGetter.getParameterForContext(context);
// Map the parameter using the context
if (this.types[i] == CacheDirectiveInfo.class) {
CacheDirectiveInfo path =
(CacheDirectiveInfo) paramGetter.getParameterForContext(context);
objList[i] = new CacheDirectiveInfo.Builder(path)
.setPath(new Path(context.getDest())).build();
} else {
objList[i] = paramGetter.getParameterForContext(context);
}
} else {
objList[i] = currentObj;
}
@ -210,7 +219,13 @@ public class RemoteMethod {
@Override
public String toString() {
return this.protocol.getSimpleName() + "#" + this.methodName + " " +
Arrays.toString(this.params);
return new StringBuilder()
.append(this.protocol.getSimpleName())
.append("#")
.append(this.methodName)
.append("(")
.append(Arrays.deepToString(this.params))
.append(")")
.toString();
}
}
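The CacheDirectiveInfo branch above rewrites the directive's path to each location's destination. A rough sketch of the effect; the location-to-directive map is assumed to have been built by the caller, as RouterCacheAdmin does later in this patch:
// Sketch: a directive on a mounted path is re-targeted per subcluster destination.
CacheDirectiveInfo directive = new CacheDirectiveInfo.Builder()
    .setPath(new Path("/mount/data"))
    .setPool("pool1")
    .build();
RemoteMethod method = new RemoteMethod("addCacheDirective",
    new Class<?>[] {CacheDirectiveInfo.class, EnumSet.class},
    new RemoteParam(locationToDirective), EnumSet.noneOf(CacheFlag.class));
// For a RemoteLocation whose destination is /ns0/data, getMethodParams() yields
// a directive whose path is /ns0/data rather than /mount/data.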

View File

@ -68,4 +68,13 @@ public class RemoteParam {
return context.getDest();
}
}
@Override
public String toString() {
return new StringBuilder()
.append("RemoteParam(")
.append(this.paramMap)
.append(")")
.toString();
}
}

View File

@ -0,0 +1,84 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
/**
* Result from a remote location.
* It includes the exception if there was any error.
* @param <T> Type of the remote location.
* @param <R> Type of the result.
*/
public class RemoteResult<T extends RemoteLocationContext, R> {
/** The remote location. */
private final T loc;
/** The result from the remote location. */
private final R result;
/** If the result is set; used for void types. */
private final boolean resultSet;
/** The exception if we couldn't get the result. */
private final IOException ioe;
public RemoteResult(T location, R r) {
this.loc = location;
this.result = r;
this.resultSet = true;
this.ioe = null;
}
public RemoteResult(T location, IOException e) {
this.loc = location;
this.result = null;
this.resultSet = false;
this.ioe = e;
}
public T getLocation() {
return loc;
}
public boolean hasResult() {
return resultSet;
}
public R getResult() {
return result;
}
public boolean hasException() {
return getException() != null;
}
public IOException getException() {
return ioe;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder()
.append("loc=").append(getLocation());
if (hasResult()) {
sb.append(" result=").append(getResult());
}
if (hasException()) {
sb.append(" exception=").append(getException());
}
return sb.toString();
}
}
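A short sketch of how a caller might consume these per-location results; the invocation helper and consumer are placeholders:
// Sketch: each result carries either a value or the exception from its location.
List<RemoteResult<RemoteLocation, HdfsFileStatus>> results =
    invokeForAllLocations(locations, method);   // placeholder helper
for (RemoteResult<RemoteLocation, HdfsFileStatus> res : results) {
  if (res.hasException()) {
    LOG.warn("Call failed at {}", res.getLocation(), res.getException());
  } else if (res.hasResult()) {
    process(res.getResult());                    // placeholder consumer
  }
}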

View File

@ -17,6 +17,10 @@
*/
package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newActiveNamenodeResolver;
import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newFileSubclusterResolver;
@ -33,7 +37,9 @@ import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.HAUtil;
import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.common.TokenVerifier;
import org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics;
import org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
@ -41,6 +47,8 @@ import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.source.JvmMetrics;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.service.CompositeService;
import org.apache.hadoop.util.JvmPauseMonitor;
import org.apache.hadoop.util.Time;
@ -70,7 +78,8 @@ import com.google.common.annotations.VisibleForTesting;
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
public class Router extends CompositeService {
public class Router extends CompositeService implements
TokenVerifier<DelegationTokenIdentifier> {
private static final Logger LOG = LoggerFactory.getLogger(Router.class);
@ -145,6 +154,11 @@ public class Router extends CompositeService {
this.conf = configuration;
updateRouterState(RouterServiceState.INITIALIZING);
// Enable the security for the Router
UserGroupInformation.setConfiguration(conf);
SecurityUtil.login(conf, DFS_ROUTER_KEYTAB_FILE_KEY,
DFS_ROUTER_KERBEROS_PRINCIPAL_KEY, getHostName(conf));
if (conf.getBoolean(
RBFConfigKeys.DFS_ROUTER_STORE_ENABLE,
RBFConfigKeys.DFS_ROUTER_STORE_ENABLE_DEFAULT)) {
@ -191,21 +205,26 @@ public class Router extends CompositeService {
addService(this.httpServer);
}
if (conf.getBoolean(
boolean isRouterHeartbeatEnabled = conf.getBoolean(
RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT);
boolean isNamenodeHeartbeatEnable = conf.getBoolean(
RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE,
isRouterHeartbeatEnabled);
if (isNamenodeHeartbeatEnable) {
// Create status updater for each monitored Namenode
this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
for (NamenodeHeartbeatService hearbeatService :
for (NamenodeHeartbeatService heartbeatService :
this.namenodeHeartbeatServices) {
addService(hearbeatService);
addService(heartbeatService);
}
if (this.namenodeHeartbeatServices.isEmpty()) {
LOG.error("Heartbeat is enabled but there are no namenodes to monitor");
}
}
if (isRouterHeartbeatEnabled) {
// Periodically update the router state
this.routerHeartbeatService = new RouterHeartbeatService(this);
addService(this.routerHeartbeatService);
@ -243,9 +262,67 @@ public class Router extends CompositeService {
addService(this.safemodeService);
}
/*
* Refresh the mount table cache immediately after adding, modifying or
* deleting mount table entries. If this service is not enabled, the mount
* table cache is refreshed periodically by StateStoreCacheUpdateService.
*/
if (conf.getBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE,
RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_DEFAULT)) {
// There is no use in starting the refresh service if the state store and
// admin server are not enabled
String disabledDependentServices = getDisabledDependentServices();
/*
* A null disabledDependentServices means that all dependent services are
* enabled.
*/
if (disabledDependentServices == null) {
MountTableRefresherService refreshService =
new MountTableRefresherService(this);
addService(refreshService);
LOG.info("Service {} is enabled.",
MountTableRefresherService.class.getSimpleName());
} else {
LOG.warn(
"Service {} not enabled: depenendent service(s) {} not enabled.",
MountTableRefresherService.class.getSimpleName(),
disabledDependentServices);
}
}
super.serviceInit(conf);
}
private String getDisabledDependentServices() {
if (this.stateStore == null && this.adminServer == null) {
return StateStoreService.class.getSimpleName() + ","
+ RouterAdminServer.class.getSimpleName();
} else if (this.stateStore == null) {
return StateStoreService.class.getSimpleName();
} else if (this.adminServer == null) {
return RouterAdminServer.class.getSimpleName();
}
return null;
}
/**
* Returns the hostname for this Router. If the hostname is not explicitly
* configured in the given config, it is determined from the local host.
*
* @param config configuration
* @return the hostname (NB: may not be a FQDN)
* @throws UnknownHostException if the hostname cannot be determined
*/
private static String getHostName(Configuration config)
throws UnknownHostException {
String name = config.get(DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY);
if (name == null) {
name = InetAddress.getLocalHost().getHostName();
}
return name;
}
@Override
protected void serviceStart() throws Exception {
@ -401,6 +478,12 @@ public class Router extends CompositeService {
return null;
}
@Override
public void verifyToken(DelegationTokenIdentifier tokenId, byte[] password)
throws IOException {
getRpcServer().getRouterSecurityManager().verifyToken(tokenId, password);
}
/////////////////////////////////////////////////////////
// Namenode heartbeat monitors
/////////////////////////////////////////////////////////
@ -418,9 +501,9 @@ public class Router extends CompositeService {
if (conf.getBoolean(
RBFConfigKeys.DFS_ROUTER_MONITOR_LOCAL_NAMENODE,
RBFConfigKeys.DFS_ROUTER_MONITOR_LOCAL_NAMENODE_DEFAULT)) {
// Create a local heartbet service
// Create a local heartbeat service
NamenodeHeartbeatService localHeartbeatService =
createLocalNamenodeHearbeatService();
createLocalNamenodeHeartbeatService();
if (localHeartbeatService != null) {
String nnDesc = localHeartbeatService.getNamenodeDesc();
ret.put(nnDesc, localHeartbeatService);
@ -428,27 +511,25 @@ public class Router extends CompositeService {
}
// Create heartbeat services for a list specified by the admin
String namenodes = this.conf.get(
Collection<String> namenodes = this.conf.getTrimmedStringCollection(
RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE);
if (namenodes != null) {
for (String namenode : namenodes.split(",")) {
String[] namenodeSplit = namenode.split("\\.");
String nsId = null;
String nnId = null;
if (namenodeSplit.length == 2) {
nsId = namenodeSplit[0];
nnId = namenodeSplit[1];
} else if (namenodeSplit.length == 1) {
nsId = namenode;
} else {
LOG.error("Wrong Namenode to monitor: {}", namenode);
}
if (nsId != null) {
NamenodeHeartbeatService heartbeatService =
createNamenodeHearbeatService(nsId, nnId);
if (heartbeatService != null) {
ret.put(heartbeatService.getNamenodeDesc(), heartbeatService);
}
for (String namenode : namenodes) {
String[] namenodeSplit = namenode.split("\\.");
String nsId = null;
String nnId = null;
if (namenodeSplit.length == 2) {
nsId = namenodeSplit[0];
nnId = namenodeSplit[1];
} else if (namenodeSplit.length == 1) {
nsId = namenode;
} else {
LOG.error("Wrong Namenode to monitor: {}", namenode);
}
if (nsId != null) {
NamenodeHeartbeatService heartbeatService =
createNamenodeHeartbeatService(nsId, nnId);
if (heartbeatService != null) {
ret.put(heartbeatService.getNamenodeDesc(), heartbeatService);
}
}
}
@ -461,7 +542,7 @@ public class Router extends CompositeService {
*
* @return Updater of the status for the local Namenode.
*/
protected NamenodeHeartbeatService createLocalNamenodeHearbeatService() {
protected NamenodeHeartbeatService createLocalNamenodeHeartbeatService() {
// Detect NN running in this machine
String nsId = DFSUtil.getNamenodeNameServiceId(conf);
String nnId = null;
@ -472,7 +553,7 @@ public class Router extends CompositeService {
}
}
return createNamenodeHearbeatService(nsId, nnId);
return createNamenodeHeartbeatService(nsId, nnId);
}
/**
@ -482,7 +563,7 @@ public class Router extends CompositeService {
* @param nnId Identifier of the namenode (HA) to monitor.
* @return Updater of the status for the specified Namenode.
*/
protected NamenodeHeartbeatService createNamenodeHearbeatService(
protected NamenodeHeartbeatService createNamenodeHeartbeatService(
String nsId, String nnId) {
LOG.info("Creating heartbeat service for Namenode {} in {}", nnId, nsId);
@ -516,6 +597,13 @@ public class Router extends CompositeService {
return this.state;
}
/**
* Check whether the router is in the given state.
*/
public boolean isRouterState(RouterServiceState routerState) {
return routerState.equals(this.state);
}
/////////////////////////////////////////////////////////
// Submodule getters
/////////////////////////////////////////////////////////
@ -546,9 +634,9 @@ public class Router extends CompositeService {
*
* @return Federation metrics.
*/
public FederationMetrics getMetrics() {
public RBFMetrics getMetrics() {
if (this.metrics != null) {
return this.metrics.getFederationMetrics();
return this.metrics.getRBFMetrics();
}
return null;
}
@ -558,11 +646,11 @@ public class Router extends CompositeService {
*
* @return Namenode metrics.
*/
public NamenodeBeanMetrics getNamenodeMetrics() {
if (this.metrics != null) {
return this.metrics.getNamenodeMetrics();
public NamenodeBeanMetrics getNamenodeMetrics() throws IOException {
if (this.metrics == null) {
throw new IOException("Namenode metrics is not initialized");
}
return null;
return this.metrics.getNamenodeMetrics();
}
/**
@ -663,14 +751,32 @@ public class Router extends CompositeService {
* Get the list of namenode heartbeat services.
*/
@VisibleForTesting
Collection<NamenodeHeartbeatService> getNamenodeHearbeatServices() {
Collection<NamenodeHeartbeatService> getNamenodeHeartbeatServices() {
return this.namenodeHeartbeatServices;
}
/**
* Get the Router safe mode service
* Get this router heartbeat service.
*/
@VisibleForTesting
RouterHeartbeatService getRouterHeartbeatService() {
return this.routerHeartbeatService;
}
/**
* Get the Router safe mode service.
*/
RouterSafemodeService getSafemodeService() {
return this.safemodeService;
}
/**
* Get router admin server.
*
* @return Null if admin is not enabled.
*/
public RouterAdminServer getAdminServer() {
return adminServer;
}
}
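Because the monitor list is now read with getTrimmedStringCollection, entries can be given as either nsId or nsId.nnId and surrounding whitespace is ignored. An illustrative setting; the nameservice and namenode ids are examples only:
// Example only: monitor both namenodes of ns0 plus the single namenode of ns1.
conf.set(RBFConfigKeys.DFS_ROUTER_MONITOR_NAMENODE, "ns0.nn0, ns0.nn1, ns1");
// "ns0.nn0" -> nsId=ns0, nnId=nn0; "ns1" -> nsId=ns1, nnId=null (non-HA case).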

View File

@ -17,26 +17,35 @@
*/
package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_DEFAULT;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolPB;
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB;
import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.federation.store.DisabledNameserviceStore;
import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreCache;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.DisableNameserviceRequest;
@ -47,12 +56,16 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@ -62,6 +75,11 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;
import org.apache.hadoop.ipc.RefreshRegistry;
import org.apache.hadoop.ipc.RefreshResponse;
import org.apache.hadoop.ipc.proto.GenericRefreshProtocolProtos;
import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolServerSideTranslatorPB;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.service.AbstractService;
@ -76,7 +94,7 @@ import com.google.protobuf.BlockingService;
* router. It is created, started, and stopped by {@link Router}.
*/
public class RouterAdminServer extends AbstractService
implements MountTableManager, RouterStateManager, NameserviceManager {
implements RouterAdminProtocol {
private static final Logger LOG =
LoggerFactory.getLogger(RouterAdminServer.class);
@ -100,6 +118,7 @@ public class RouterAdminServer extends AbstractService
private static String routerOwner;
private static String superGroup;
private static boolean isPermissionEnabled;
private boolean iStateStoreCache;
public RouterAdminServer(Configuration conf, Router router)
throws IOException {
@ -142,11 +161,27 @@ public class RouterAdminServer extends AbstractService
.setVerbose(false)
.build();
// Set service-level authorization security policy
if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
this.adminServer.refreshServiceAcl(conf, new RouterPolicyProvider());
}
// The RPC-server port can be ephemeral... ensure we have the correct info
InetSocketAddress listenAddress = this.adminServer.getListenerAddress();
this.adminAddress = new InetSocketAddress(
confRpcAddress.getHostName(), listenAddress.getPort());
router.setAdminServerAddress(this.adminAddress);
iStateStoreCache =
router.getSubclusterResolver() instanceof StateStoreCache;
GenericRefreshProtocolServerSideTranslatorPB genericRefreshXlator =
new GenericRefreshProtocolServerSideTranslatorPB(this);
BlockingService genericRefreshService =
GenericRefreshProtocolProtos.GenericRefreshProtocolService.
newReflectiveBlockingService(genericRefreshXlator);
DFSUtil.addPBProtocol(conf, GenericRefreshProtocolPB.class,
genericRefreshService, adminServer);
}
/**
@ -236,24 +271,26 @@ public class RouterAdminServer extends AbstractService
getMountTableStore().updateMountTableEntry(request);
MountTable mountTable = request.getEntry();
if (mountTable != null) {
synchronizeQuota(mountTable);
if (mountTable != null && router.isQuotaEnabled()) {
synchronizeQuota(mountTable.getSourcePath(),
mountTable.getQuota().getQuota(),
mountTable.getQuota().getSpaceQuota());
}
return response;
}
/**
* Synchronize the quota value across mount table and subclusters.
* @param mountTable Quota set in given mount table.
* @param path Source path in given mount table.
* @param nsQuota Name quota definition in given mount table.
* @param ssQuota Space quota definition in given mount table.
* @throws IOException
*/
private void synchronizeQuota(MountTable mountTable) throws IOException {
String path = mountTable.getSourcePath();
long nsQuota = mountTable.getQuota().getQuota();
long ssQuota = mountTable.getQuota().getSpaceQuota();
if (nsQuota != HdfsConstants.QUOTA_DONT_SET
|| ssQuota != HdfsConstants.QUOTA_DONT_SET) {
private void synchronizeQuota(String path, long nsQuota, long ssQuota)
throws IOException {
if (router.isQuotaEnabled() &&
(nsQuota != HdfsConstants.QUOTA_DONT_SET
|| ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
if (ret != null) {
this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
@ -265,6 +302,16 @@ public class RouterAdminServer extends AbstractService
@Override
public RemoveMountTableEntryResponse removeMountTableEntry(
RemoveMountTableEntryRequest request) throws IOException {
// clear sub-cluster's quota definition
try {
synchronizeQuota(request.getSrcPath(), HdfsConstants.QUOTA_RESET,
HdfsConstants.QUOTA_RESET);
} catch (Exception e) {
// Ignore any exception while resetting the quota; specifically to handle
// the case where the actual destination doesn't exist.
LOG.warn("Unable to clear quota at the destinations for {}: {}",
request.getSrcPath(), e.getMessage());
}
return getMountTableStore().removeMountTableEntry(request);
}
@ -324,6 +371,56 @@ public class RouterAdminServer extends AbstractService
return GetSafeModeResponse.newInstance(isInSafeMode);
}
@Override
public RefreshMountTableEntriesResponse refreshMountTableEntries(
RefreshMountTableEntriesRequest request) throws IOException {
if (iStateStoreCache) {
/*
* MountTableResolver updates the MountTableStore cache as well. Other
* SubclusterResolver implementations are expected to update the
* MountTableStore cache in addition to their own cache.
*/
boolean result = ((StateStoreCache) this.router.getSubclusterResolver())
.loadCache(true);
RefreshMountTableEntriesResponse response =
RefreshMountTableEntriesResponse.newInstance();
response.setResult(result);
return response;
} else {
return getMountTableStore().refreshMountTableEntries(request);
}
}
@Override
public GetDestinationResponse getDestination(
GetDestinationRequest request) throws IOException {
final String src = request.getSrcPath();
final List<String> nsIds = new ArrayList<>();
RouterRpcServer rpcServer = this.router.getRpcServer();
List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, false);
RouterRpcClient rpcClient = rpcServer.getRPCClient();
RemoteMethod method = new RemoteMethod("getFileInfo",
new Class<?>[] {String.class}, new RemoteParam());
try {
Map<RemoteLocation, HdfsFileStatus> responses =
rpcClient.invokeConcurrent(
locations, method, false, false, HdfsFileStatus.class);
for (RemoteLocation location : locations) {
if (responses.get(location) != null) {
nsIds.add(location.getNameserviceId());
}
}
} catch (IOException ioe) {
LOG.error("Cannot get location for {}: {}",
src, ioe.getMessage());
}
if (nsIds.isEmpty() && !locations.isEmpty()) {
String nsId = locations.get(0).getNameserviceId();
nsIds.add(nsId);
}
return GetDestinationResponse.newInstance(nsIds);
}
/**
* Verify if Router set safe mode state correctly.
* @param isInSafeMode Expected state to be set.
@ -449,4 +546,10 @@ public class RouterAdminServer extends AbstractService
public static String getSuperGroup(){
return superGroup;
}
@Override // GenericRefreshProtocol
public Collection<RefreshResponse> refresh(String identifier, String[] args) {
// Let the registry handle as needed
return RefreshRegistry.defaultRegistry().dispatch(identifier, args);
}
}
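Remote routers reach this refreshMountTableEntries implementation through the admin RPC server. A sketch of such a client call, assuming an admin address obtained from the state store; connection handling is simplified:
// Sketch: ask a router's admin server to reload its mount table cache.
RouterClient client = new RouterClient(adminAddress, conf);  // assumed address
try {
  MountTableManager mountTable = client.getMountTableManager();
  RefreshMountTableEntriesResponse resp = mountTable.refreshMountTableEntries(
      RefreshMountTableEntriesRequest.newInstance());
  LOG.info("Refresh at {} succeeded: {}", adminAddress, resp.getResult());
} finally {
  client.close();
}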

View File

@ -0,0 +1,173 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.fs.CacheFlag;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
/**
* Module that implements all the RPC calls in
* {@link org.apache.hadoop.hdfs.protocol.ClientProtocol} related to Cache Admin
* in the {@link RouterRpcServer}.
*/
public class RouterCacheAdmin {
/** RPC server to receive client calls. */
private final RouterRpcServer rpcServer;
/** RPC clients to connect to the Namenodes. */
private final RouterRpcClient rpcClient;
/** Interface to identify the active NN for a nameservice or blockpool ID. */
private final ActiveNamenodeResolver namenodeResolver;
public RouterCacheAdmin(RouterRpcServer server) {
this.rpcServer = server;
this.rpcClient = this.rpcServer.getRPCClient();
this.namenodeResolver = this.rpcClient.getNamenodeResolver();
}
public long addCacheDirective(CacheDirectiveInfo path,
EnumSet<CacheFlag> flags) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(path.getPath().toString(), true, false);
RemoteMethod method = new RemoteMethod("addCacheDirective",
new Class<?>[] {CacheDirectiveInfo.class, EnumSet.class},
new RemoteParam(getRemoteMap(path, locations)), flags);
Map<RemoteLocation, Long> response =
rpcClient.invokeConcurrent(locations, method, false, false, long.class);
return response.values().iterator().next();
}
public void modifyCacheDirective(CacheDirectiveInfo directive,
EnumSet<CacheFlag> flags) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
Path p = directive.getPath();
if (p != null) {
final List<RemoteLocation> locations = rpcServer
.getLocationsForPath(directive.getPath().toString(), true, false);
RemoteMethod method = new RemoteMethod("modifyCacheDirective",
new Class<?>[] {CacheDirectiveInfo.class, EnumSet.class},
new RemoteParam(getRemoteMap(directive, locations)), flags);
rpcClient.invokeConcurrent(locations, method);
return;
}
RemoteMethod method = new RemoteMethod("modifyCacheDirective",
new Class<?>[] {CacheDirectiveInfo.class, EnumSet.class}, directive,
flags);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
rpcClient.invokeConcurrent(nss, method, false, false);
}
public void removeCacheDirective(long id) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
RemoteMethod method = new RemoteMethod("removeCacheDirective",
new Class<?>[] {long.class}, id);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
rpcClient.invokeConcurrent(nss, method, false, false);
}
public BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId,
CacheDirectiveInfo filter) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
if (filter.getPath() != null) {
final List<RemoteLocation> locations = rpcServer
.getLocationsForPath(filter.getPath().toString(), true, false);
RemoteMethod method = new RemoteMethod("listCacheDirectives",
new Class<?>[] {long.class, CacheDirectiveInfo.class}, prevId,
new RemoteParam(getRemoteMap(filter, locations)));
Map<RemoteLocation, BatchedEntries> response = rpcClient.invokeConcurrent(
locations, method, false, false, BatchedEntries.class);
return response.values().iterator().next();
}
RemoteMethod method = new RemoteMethod("listCacheDirectives",
new Class<?>[] {long.class, CacheDirectiveInfo.class}, prevId,
filter);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
Map<FederationNamespaceInfo, BatchedEntries> results = rpcClient
.invokeConcurrent(nss, method, true, false, BatchedEntries.class);
return results.values().iterator().next();
}
public void addCachePool(CachePoolInfo info) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
RemoteMethod method = new RemoteMethod("addCachePool",
new Class<?>[] {CachePoolInfo.class}, info);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
rpcClient.invokeConcurrent(nss, method, true, false);
}
public void modifyCachePool(CachePoolInfo info) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
RemoteMethod method = new RemoteMethod("modifyCachePool",
new Class<?>[] {CachePoolInfo.class}, info);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
rpcClient.invokeConcurrent(nss, method, true, false);
}
public void removeCachePool(String cachePoolName) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
RemoteMethod method = new RemoteMethod("removeCachePool",
new Class<?>[] {String.class}, cachePoolName);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
rpcClient.invokeConcurrent(nss, method, true, false);
}
public BatchedEntries<CachePoolEntry> listCachePools(String prevKey)
throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
RemoteMethod method = new RemoteMethod("listCachePools",
new Class<?>[] {String.class}, prevKey);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
Map<FederationNamespaceInfo, BatchedEntries> results = rpcClient
.invokeConcurrent(nss, method, true, false, BatchedEntries.class);
return results.values().iterator().next();
}
/**
* Returns a map with the CacheDirectiveInfo mapped to each location.
* @param path CacheDirectiveInfo to be mapped to the locations.
* @param locations the locations to map.
* @return map with CacheDirectiveInfo mapped to the locations.
*/
private Map<RemoteLocation, CacheDirectiveInfo> getRemoteMap(
CacheDirectiveInfo path, final List<RemoteLocation> locations) {
final Map<RemoteLocation, CacheDirectiveInfo> dstMap = new HashMap<>();
Iterator<RemoteLocation> iterator = locations.iterator();
while (iterator.hasNext()) {
dstMap.put(iterator.next(), path);
}
return dstMap;
}
}

View File

@ -29,6 +29,7 @@ import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
import org.apache.hadoop.hdfs.server.federation.store.RecordStore;
import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RouterHeartbeatRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RouterHeartbeatResponse;
import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord;
@ -91,6 +92,10 @@ public class RouterHeartbeatService extends PeriodicService {
getStateStoreVersion(MembershipStore.class),
getStateStoreVersion(MountTableStore.class));
record.setStateStoreVersion(stateStoreVersion);
// If the admin server is not started, hostPort will be empty
String hostPort =
StateStoreUtils.getHostPortString(router.getAdminServerAddress());
record.setAdminAddress(hostPort);
RouterHeartbeatRequest request =
RouterHeartbeatRequest.newInstance(record);
RouterHeartbeatResponse response = routerStore.routerHeartbeat(request);

View File

@ -20,7 +20,6 @@ package org.apache.hadoop.hdfs.server.federation.router;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.server.common.JspHelper;
import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
@ -84,12 +83,14 @@ public class RouterHttpServer extends AbstractService {
String webApp = "router";
HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(
this.conf, this.httpAddress, this.httpsAddress, webApp,
DFSConfigKeys.DFS_NAMENODE_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY);
RBFConfigKeys.DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY);
this.httpServer = builder.build();
NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), null,
String httpKeytab = conf.get(DFSUtil.getSpnegoKeytabKey(conf,
RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY));
NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), httpKeytab,
httpServer, RouterWebHdfsMethods.class.getPackage().getName());
this.httpServer.setAttribute(NAMENODE_ATTRIBUTE_KEY, this.router);

View File

@ -18,7 +18,7 @@
package org.apache.hadoop.hdfs.server.federation.router;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
import org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics;
import org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics;
import org.apache.hadoop.metrics2.source.JvmMetrics;
import org.apache.hadoop.service.AbstractService;
@ -34,7 +34,7 @@ public class RouterMetricsService extends AbstractService {
/** Router metrics. */
private RouterMetrics routerMetrics;
/** Federation metrics. */
private FederationMetrics federationMetrics;
private RBFMetrics rbfMetrics;
/** Namenode mock metrics. */
private NamenodeBeanMetrics nnMetrics;
@ -55,14 +55,14 @@ public class RouterMetricsService extends AbstractService {
this.nnMetrics = new NamenodeBeanMetrics(this.router);
// Federation MBean JMX interface
this.federationMetrics = new FederationMetrics(this.router);
this.rbfMetrics = new RBFMetrics(this.router);
}
@Override
protected void serviceStop() throws Exception {
// Remove JMX interfaces
if (this.federationMetrics != null) {
this.federationMetrics.close();
if (this.rbfMetrics != null) {
this.rbfMetrics.close();
}
// Remove Namenode JMX interfaces
@ -90,8 +90,8 @@ public class RouterMetricsService extends AbstractService {
*
* @return Federation metrics.
*/
public FederationMetrics getFederationMetrics() {
return this.federationMetrics;
public RBFMetrics getRBFMetrics() {
return this.rbfMetrics;
}
/**

View File

@ -24,7 +24,6 @@ import java.util.Map.Entry;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
@ -45,14 +44,11 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
private final RouterRpcServer rpcServer;
/** RPC clients to connect to the Namenodes. */
private final RouterRpcClient rpcClient;
/** Interface to map global name space to HDFS subcluster name spaces. */
private final FileSubclusterResolver subclusterResolver;
public RouterNamenodeProtocol(RouterRpcServer server) {
this.rpcServer = server;
this.rpcClient = this.rpcServer.getRPCClient();
this.subclusterResolver = this.rpcServer.getSubclusterResolver();
}
@Override
@ -94,33 +90,27 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
public ExportedBlockKeys getBlockKeys() throws IOException {
rpcServer.checkOperation(OperationCategory.READ);
// We return the information from the default name space
String defaultNsId = subclusterResolver.getDefaultNamespace();
RemoteMethod method =
new RemoteMethod(NamenodeProtocol.class, "getBlockKeys");
return rpcClient.invokeSingle(defaultNsId, method, ExportedBlockKeys.class);
return rpcServer.invokeAtAvailableNs(method, ExportedBlockKeys.class);
}
@Override
public long getTransactionID() throws IOException {
rpcServer.checkOperation(OperationCategory.READ);
// We return the information from the default name space
String defaultNsId = subclusterResolver.getDefaultNamespace();
RemoteMethod method =
new RemoteMethod(NamenodeProtocol.class, "getTransactionID");
return rpcClient.invokeSingle(defaultNsId, method, long.class);
return rpcServer.invokeAtAvailableNs(method, long.class);
}
@Override
public long getMostRecentCheckpointTxId() throws IOException {
rpcServer.checkOperation(OperationCategory.READ);
// We return the information from the default name space
String defaultNsId = subclusterResolver.getDefaultNamespace();
RemoteMethod method =
new RemoteMethod(NamenodeProtocol.class, "getMostRecentCheckpointTxId");
return rpcClient.invokeSingle(defaultNsId, method, long.class);
return rpcServer.invokeAtAvailableNs(method, long.class);
}
@Override
@ -133,11 +123,9 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
public NamespaceInfo versionRequest() throws IOException {
rpcServer.checkOperation(OperationCategory.READ);
// We return the information from the default name space
String defaultNsId = subclusterResolver.getDefaultNamespace();
RemoteMethod method =
new RemoteMethod(NamenodeProtocol.class, "versionRequest");
return rpcClient.invokeSingle(defaultNsId, method, NamespaceInfo.class);
return rpcServer.invokeAtAvailableNs(method, NamespaceInfo.class);
}
@Override

View File

@ -88,7 +88,7 @@ public class RouterQuotaManager {
}
/**
* Get children paths (can including itself) under specified federation path.
* Get children paths (can include itself) under specified federation path.
* @param parentPath Federated path.
* @return Set of children paths.
*/

View File

@ -87,11 +87,12 @@ public class RouterQuotaUpdateService extends PeriodicService {
QuotaUsage currentQuotaUsage = null;
// Check whether destination path exists in filesystem. If destination
// is not present, reset the usage. For other mount entry get current
// quota usage
// Check whether destination path exists in filesystem. When the
// mtime is zero, the destination is not present and reset the usage.
// This is because mount table does not have mtime.
// For other mount entry get current quota usage
HdfsFileStatus ret = this.rpcServer.getFileInfo(src);
if (ret == null) {
if (ret == null || ret.getModificationTime() == 0) {
currentQuotaUsage = new RouterQuotaUsage.Builder()
.fileAndDirectoryCount(0)
.quota(nsQuota)
@ -185,10 +186,8 @@ public class RouterQuotaUpdateService extends PeriodicService {
*/
private List<MountTable> getQuotaSetMountTables() throws IOException {
List<MountTable> mountTables = getMountTableEntries();
Set<String> stalePaths = new HashSet<>();
for (String path : this.quotaManager.getAll()) {
stalePaths.add(path);
}
Set<String> allPaths = this.quotaManager.getAll();
Set<String> stalePaths = new HashSet<>(allPaths);
List<MountTable> neededMountTables = new LinkedList<>();
for (MountTable entry : mountTables) {

View File

@ -75,9 +75,10 @@ public final class RouterQuotaUsage extends QuotaUsage {
* @throws NSQuotaExceededException If the quota is exceeded.
*/
public void verifyNamespaceQuota() throws NSQuotaExceededException {
if (Quota.isViolated(getQuota(), getFileAndDirectoryCount())) {
throw new NSQuotaExceededException(getQuota(),
getFileAndDirectoryCount());
long quota = getQuota();
long fileAndDirectoryCount = getFileAndDirectoryCount();
if (Quota.isViolated(quota, fileAndDirectoryCount)) {
throw new NSQuotaExceededException(quota, fileAndDirectoryCount);
}
}
@ -87,25 +88,29 @@ public final class RouterQuotaUsage extends QuotaUsage {
* @throws DSQuotaExceededException If the quota is exceeded.
*/
public void verifyStoragespaceQuota() throws DSQuotaExceededException {
if (Quota.isViolated(getSpaceQuota(), getSpaceConsumed())) {
throw new DSQuotaExceededException(getSpaceQuota(), getSpaceConsumed());
long spaceQuota = getSpaceQuota();
long spaceConsumed = getSpaceConsumed();
if (Quota.isViolated(spaceQuota, spaceConsumed)) {
throw new DSQuotaExceededException(spaceQuota, spaceConsumed);
}
}
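The verify methods above are what the router's quota module would consult before letting a mutation proceed on a mounted path. A minimal sketch, assuming the quota manager lookup shown:
// Sketch: enforce mount-table quotas before a write under a mounted path.
RouterQuotaUsage usage = quotaManager.getQuotaUsage("/mount/data");  // assumed lookup
if (usage != null) {
  usage.verifyNamespaceQuota();     // throws NSQuotaExceededException if violated
  usage.verifyStoragespaceQuota();  // throws DSQuotaExceededException if violated
}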
@Override
public String toString() {
String nsQuota = String.valueOf(getQuota());
String nsCount = String.valueOf(getFileAndDirectoryCount());
if (getQuota() == HdfsConstants.QUOTA_RESET) {
nsQuota = "-";
nsCount = "-";
String nsQuota = "-";
String nsCount = "-";
long quota = getQuota();
if (quota != HdfsConstants.QUOTA_RESET) {
nsQuota = String.valueOf(quota);
nsCount = String.valueOf(getFileAndDirectoryCount());
}
String ssQuota = StringUtils.byteDesc(getSpaceQuota());
String ssCount = StringUtils.byteDesc(getSpaceConsumed());
if (getSpaceQuota() == HdfsConstants.QUOTA_RESET) {
ssQuota = "-";
ssCount = "-";
String ssQuota = "-";
String ssCount = "-";
long spaceQuota = getSpaceQuota();
if (spaceQuota != HdfsConstants.QUOTA_RESET) {
ssQuota = StringUtils.byteDesc(spaceQuota);
ssCount = StringUtils.byteDesc(getSpaceConsumed());
}
StringBuilder str = new StringBuilder();

View File

@ -18,22 +18,25 @@
package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_TIMEOUT_KEY;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.TreeMap;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
@ -53,6 +56,7 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
@ -61,7 +65,9 @@ import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.hadoop.ipc.RetriableException;
import org.apache.hadoop.ipc.StandbyException;
import org.apache.hadoop.net.ConnectTimeoutException;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -126,7 +132,8 @@ public class RouterRpcClient {
this.namenodeResolver = resolver;
this.connectionManager = new ConnectionManager(conf);
Configuration clientConf = getClientConfiguration(conf);
this.connectionManager = new ConnectionManager(clientConf);
this.connectionManager.start();
int numThreads = conf.getInt(
@ -165,6 +172,31 @@ public class RouterRpcClient {
failoverSleepBaseMillis, failoverSleepMaxMillis);
}
/**
* Get the configuration for the RPC client. It takes the Router
 * configuration and transforms it into a regular RPC client configuration.
* @param conf Input configuration.
* @return Configuration for the RPC client.
*/
private Configuration getClientConfiguration(final Configuration conf) {
Configuration clientConf = new Configuration(conf);
int maxRetries = conf.getInt(
RBFConfigKeys.DFS_ROUTER_CLIENT_MAX_RETRIES_TIME_OUT,
RBFConfigKeys.DFS_ROUTER_CLIENT_MAX_RETRIES_TIME_OUT_DEFAULT);
if (maxRetries >= 0) {
clientConf.setInt(
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY, maxRetries);
}
long connectTimeOut = conf.getTimeDuration(
RBFConfigKeys.DFS_ROUTER_CLIENT_CONNECT_TIMEOUT,
RBFConfigKeys.DFS_ROUTER_CLIENT_CONNECT_TIMEOUT_DEFAULT,
TimeUnit.MILLISECONDS);
if (connectTimeOut >= 0) {
clientConf.setLong(IPC_CLIENT_CONNECT_TIMEOUT_KEY, connectTimeOut);
}
return clientConf;
}
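The new getClientConfiguration() above copies the Router-specific settings onto the generic IPC keys before building the ConnectionManager. A hedged sketch of setting those Router keys programmatically; the constant names come from this hunk and the values are illustrative only:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

public final class RouterClientConfSketch {
  private RouterClientConfSketch() {}

  public static Configuration build() {
    Configuration conf = new Configuration();
    // Retries on socket timeouts for Router -> NameNode connections.
    conf.setInt(RBFConfigKeys.DFS_ROUTER_CLIENT_MAX_RETRIES_TIME_OUT, 0);
    // Connect timeout for those connections (example value only).
    conf.setTimeDuration(RBFConfigKeys.DFS_ROUTER_CLIENT_CONNECT_TIMEOUT,
        2, TimeUnit.SECONDS);
    return conf;
  }
}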
/**
* Get the active namenode resolver used by this client.
* @return Active namenode resolver.
@ -255,7 +287,14 @@ public class RouterRpcClient {
// for each individual request.
// TODO Add tokens from the federated UGI
connection = this.connectionManager.getConnection(ugi, rpcAddress, proto);
UserGroupInformation connUGI = ugi;
if (UserGroupInformation.isSecurityEnabled()) {
UserGroupInformation routerUser = UserGroupInformation.getLoginUser();
connUGI = UserGroupInformation.createProxyUser(
ugi.getUserName(), routerUser);
}
connection = this.connectionManager.getConnection(
connUGI, rpcAddress, proto);
LOG.debug("User {} NN {} is using connection {}",
ugi.getUserName(), rpcAddress, connection);
} catch (Exception ex) {
@ -263,7 +302,8 @@ public class RouterRpcClient {
}
if (connection == null) {
throw new IOException("Cannot get a connection to " + rpcAddress);
throw new ConnectionNullException("Cannot get a connection to "
+ rpcAddress);
}
return connection;
}
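With security enabled, the connection above is now opened as the Router's login user proxying the remote caller, so the NameNode sees the original user. A minimal sketch of that decision, assuming the router principal has the usual hadoop.proxyuser.* impersonation rights:

import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class ProxyUserSketch {
  private ProxyUserSketch() {}

  /** Return the UGI that should open the NameNode connection. */
  public static UserGroupInformation connectionUser(UserGroupInformation caller)
      throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return caller;
    }
    // The Router service user impersonates the original caller.
    UserGroupInformation routerUser = UserGroupInformation.getLoginUser();
    return UserGroupInformation.createProxyUser(caller.getUserName(), routerUser);
  }
}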
@ -294,8 +334,8 @@ public class RouterRpcClient {
* @param retryCount Number of retries.
* @param nsId Nameservice ID.
* @return Retry decision.
* @throws IOException Original exception if the retry policy generates one
* or IOException for no available namenodes.
* @throws NoNamenodesAvailableException Exception that the retry policy
 * generates when no namenodes are available.
*/
private RetryDecision shouldRetry(final IOException ioe, final int retryCount,
final String nsId) throws IOException {
@ -305,8 +345,7 @@ public class RouterRpcClient {
if (retryCount == 0) {
return RetryDecision.RETRY;
} else {
throw new IOException("No namenode available under nameservice " + nsId,
ioe);
throw new NoNamenodesAvailableException(nsId, ioe);
}
}
@ -334,17 +373,19 @@ public class RouterRpcClient {
* @param method Remote ClientProtcol method to invoke.
* @param params Variable list of parameters matching the method.
* @return The result of invoking the method.
* @throws IOException
* @throws ConnectException If it cannot connect to any Namenode.
* @throws StandbyException If all Namenodes are in Standby.
* @throws IOException If it cannot invoke the method.
*/
private Object invokeMethod(
final UserGroupInformation ugi,
final List<? extends FederationNamenodeContext> namenodes,
final Class<?> protocol, final Method method, final Object... params)
throws IOException {
throws ConnectException, StandbyException, IOException {
if (namenodes == null || namenodes.isEmpty()) {
throw new IOException("No namenodes to invoke " + method.getName() +
" with params " + Arrays.toString(params) + " from "
" with params " + Arrays.deepToString(params) + " from "
+ router.getRouterId());
}
@ -356,9 +397,9 @@ public class RouterRpcClient {
Map<FederationNamenodeContext, IOException> ioes = new LinkedHashMap<>();
for (FederationNamenodeContext namenode : namenodes) {
ConnectionContext connection = null;
String nsId = namenode.getNameserviceId();
String rpcAddress = namenode.getRpcAddress();
try {
String nsId = namenode.getNameserviceId();
String rpcAddress = namenode.getRpcAddress();
connection = this.getConnection(ugi, nsId, rpcAddress, protocol);
ProxyAndInfo<?> client = connection.getClient();
final Object proxy = client.getProxy();
@ -381,12 +422,36 @@ public class RouterRpcClient {
this.rpcMonitor.proxyOpFailureStandby();
}
failover = true;
} else if (ioe instanceof ConnectException ||
ioe instanceof ConnectTimeoutException) {
if (this.rpcMonitor != null) {
this.rpcMonitor.proxyOpFailureCommunicate();
}
failover = true;
} else if (ioe instanceof RemoteException) {
if (this.rpcMonitor != null) {
this.rpcMonitor.proxyOpComplete(true);
}
// RemoteException returned by NN
throw (RemoteException) ioe;
} else if (ioe instanceof ConnectionNullException) {
if (this.rpcMonitor != null) {
this.rpcMonitor.proxyOpFailureCommunicate();
}
LOG.error("Get connection for {} {} error: {}", nsId, rpcAddress,
ioe.getMessage());
// Throw StandbyException so that client can retry
StandbyException se = new StandbyException(ioe.getMessage());
se.initCause(ioe);
throw se;
} else if (ioe instanceof NoNamenodesAvailableException) {
if (this.rpcMonitor != null) {
this.rpcMonitor.proxyOpNoNamenodes();
}
LOG.error("Cannot get available namenode for {} {} error: {}",
nsId, rpcAddress, ioe.getMessage());
// Throw RetriableException so that client can retry
throw new RetriableException(ioe);
} else {
// Other communication error, this is a failure
// Communication retries are handled by the retry policy
@ -408,23 +473,33 @@ public class RouterRpcClient {
// All namenodes were unavailable or in standby
String msg = "No namenode available to invoke " + method.getName() + " " +
Arrays.toString(params);
Arrays.deepToString(params) + " in " + namenodes + " from " +
router.getRouterId();
LOG.error(msg);
int exConnect = 0;
for (Entry<FederationNamenodeContext, IOException> entry :
ioes.entrySet()) {
FederationNamenodeContext namenode = entry.getKey();
String nsId = namenode.getNameserviceId();
String nnId = namenode.getNamenodeId();
String nnKey = namenode.getNamenodeKey();
String addr = namenode.getRpcAddress();
IOException ioe = entry.getValue();
if (ioe instanceof StandbyException) {
LOG.error("{} {} at {} is in Standby", nsId, nnId, addr);
LOG.error("{} at {} is in Standby: {}",
nnKey, addr, ioe.getMessage());
} else if (ioe instanceof ConnectException ||
ioe instanceof ConnectTimeoutException) {
exConnect++;
LOG.error("{} at {} cannot be reached: {}",
nnKey, addr, ioe.getMessage());
} else {
LOG.error("{} {} at {} error: \"{}\"",
nsId, nnId, addr, ioe.getMessage());
LOG.error("{} at {} error: \"{}\"", nnKey, addr, ioe.getMessage());
}
}
throw new StandbyException(msg);
if (exConnect == ioes.size()) {
throw new ConnectException(msg);
} else {
throw new StandbyException(msg);
}
}
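When every attempt fails, the hunk above now distinguishes "all namenodes unreachable" (ConnectException) from "some in standby" (StandbyException). A small sketch of that classification over the collected per-namenode exceptions, mirroring the exConnect counter:

import java.io.IOException;
import java.net.ConnectException;
import java.util.Collection;
import org.apache.hadoop.ipc.StandbyException;
import org.apache.hadoop.net.ConnectTimeoutException;

public final class FailureClassifierSketch {
  private FailureClassifierSketch() {}

  public static IOException classify(String msg, Collection<IOException> failures) {
    long unreachable = failures.stream()
        .filter(e -> e instanceof ConnectException
            || e instanceof ConnectTimeoutException)
        .count();
    // Only surface ConnectException when nothing could be reached at all.
    return (!failures.isEmpty() && unreachable == failures.size())
        ? new ConnectException(msg)
        : new StandbyException(msg);
  }
}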
/**
@ -471,6 +546,9 @@ public class RouterRpcClient {
// failover, invoker looks for standby exceptions for failover.
if (ioe instanceof StandbyException) {
throw ioe;
} else if (ioe instanceof ConnectException ||
ioe instanceof ConnectTimeoutException) {
throw ioe;
} else {
throw new StandbyException(ioe.getMessage());
}
@ -805,6 +883,14 @@ public class RouterRpcClient {
return newException;
}
if (ioe instanceof SnapshotException) {
String newMsg = processExceptionMsg(
ioe.getMessage(), loc.getDest(), loc.getSrc());
SnapshotException newException = new SnapshotException(newMsg);
newException.setStackTrace(ioe.getStackTrace());
return newException;
}
return ioe;
}
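SnapshotException now receives the same path localization as the other wrapped exceptions. A rough sketch of what processExceptionMsg is assumed to do, namely swapping the remote destination path in the message back to the mount-point path the client used (the real helper may cover more cases):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ExceptionMsgSketch {
  private ExceptionMsgSketch() {}

  /** Replace the remote (dest) path with the client-visible (src) path. */
  public static String localize(String msg, String dest, String src) {
    if (msg == null) {
      return null;
    }
    return msg.replaceFirst(Pattern.quote(dest), Matcher.quoteReplacement(src));
  }
}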
@ -1006,31 +1092,84 @@ public class RouterRpcClient {
* @throws IOException If requiredResponse=true and any of the calls throw an
* exception.
*/
@SuppressWarnings("unchecked")
public <T extends RemoteLocationContext, R> Map<T, R> invokeConcurrent(
final Collection<T> locations, final RemoteMethod method,
boolean requireResponse, boolean standby, long timeOutMs, Class<R> clazz)
throws IOException {
final List<RemoteResult<T, R>> results = invokeConcurrent(
locations, method, standby, timeOutMs, clazz);
final Map<T, R> ret = new TreeMap<>();
for (final RemoteResult<T, R> result : results) {
// Response from all servers required, use this error.
if (requireResponse && result.hasException()) {
throw result.getException();
}
if (result.hasResult()) {
ret.put(result.getLocation(), result.getResult());
}
}
// Throw the exception for the first location if there are no results
if (ret.isEmpty()) {
final RemoteResult<T, R> result = results.get(0);
if (result.hasException()) {
throw result.getException();
}
}
return ret;
}
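The map-returning overload above now delegates to the list-returning invokeConcurrent and re-applies the requireResponse semantics on top of RemoteResult entries. A sketch of how a caller might consume the per-location results when partial failures are tolerated; it assumes RemoteResult sits in the router package with the accessors visible in this hunk:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.server.federation.router.RemoteLocationContext;
import org.apache.hadoop.hdfs.server.federation.router.RemoteResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class RemoteResultSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RemoteResultSketch.class);

  private RemoteResultSketch() {}

  /** Log failures and return the first successful result, or null. */
  public static <T extends RemoteLocationContext, R> R firstSuccess(
      List<RemoteResult<T, R>> results) {
    for (RemoteResult<T, R> result : results) {
      if (result.hasException()) {
        IOException ioe = result.getException();
        LOG.warn("Call to {} failed: {}", result.getLocation(), ioe.getMessage());
      } else if (result.hasResult()) {
        return result.getResult();
      }
    }
    return null;
  }
}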
/**
* Invokes multiple concurrent proxy calls to different clients. Returns an
* array of results.
*
* Re-throws exceptions generated by the remote RPC call as either
* RemoteException or IOException.
*
* @param <T> The type of the remote location.
* @param <R> The type of the remote method return
* @param locations List of remote locations to call concurrently.
* @param method The remote method and parameters to invoke.
* @param standby If the requests should go to the standby namenodes too.
* @param timeOutMs Timeout for each individual call.
* @param clazz Type of the remote return type.
* @return Result of invoking the method per subcluster (list of results).
* This includes the exception for each remote location.
* @throws IOException If there are errors invoking the method.
*/
@SuppressWarnings("unchecked")
public <T extends RemoteLocationContext, R> List<RemoteResult<T, R>>
invokeConcurrent(final Collection<T> locations,
final RemoteMethod method, boolean standby, long timeOutMs,
Class<R> clazz) throws IOException {
final UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
final Method m = method.getMethod();
if (locations.isEmpty()) {
throw new IOException("No remote locations available");
} else if (locations.size() == 1) {
} else if (locations.size() == 1 && timeOutMs <= 0) {
// Shortcut, just one call
T location = locations.iterator().next();
String ns = location.getNameserviceId();
final List<? extends FederationNamenodeContext> namenodes =
getNamenodesForNameservice(ns);
Class<?> proto = method.getProtocol();
Object[] paramList = method.getParams(location);
Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
return Collections.singletonMap(location, clazz.cast(result));
try {
Class<?> proto = method.getProtocol();
Object[] paramList = method.getParams(location);
R result = (R) invokeMethod(ugi, namenodes, proto, m, paramList);
RemoteResult<T, R> remoteResult = new RemoteResult<>(location, result);
return Collections.singletonList(remoteResult);
} catch (IOException ioe) {
// Localize the exception
throw processException(ioe, location);
}
}
List<T> orderedLocations = new LinkedList<>();
Set<Callable<Object>> callables = new HashSet<>();
List<T> orderedLocations = new ArrayList<>();
List<Callable<Object>> callables = new ArrayList<>();
for (final T location : locations) {
String nsId = location.getNameserviceId();
final List<? extends FederationNamenodeContext> namenodes =
@ -1048,20 +1187,12 @@ public class RouterRpcClient {
nnLocation = (T)new RemoteLocation(nsId, nnId, location.getDest());
}
orderedLocations.add(nnLocation);
callables.add(new Callable<Object>() {
public Object call() throws Exception {
return invokeMethod(ugi, nnList, proto, m, paramList);
}
});
callables.add(() -> invokeMethod(ugi, nnList, proto, m, paramList));
}
} else {
// Call the objectGetter in order of nameservices in the NS list
orderedLocations.add(location);
callables.add(new Callable<Object>() {
public Object call() throws Exception {
return invokeMethod(ugi, namenodes, proto, m, paramList);
}
});
callables.add(() -> invokeMethod(ugi, namenodes, proto, m, paramList));
}
}
@ -1077,21 +1208,20 @@ public class RouterRpcClient {
} else {
futures = executorService.invokeAll(callables);
}
Map<T, R> results = new TreeMap<>();
Map<T, IOException> exceptions = new TreeMap<>();
List<RemoteResult<T, R>> results = new ArrayList<>();
for (int i=0; i<futures.size(); i++) {
T location = orderedLocations.get(i);
try {
Future<Object> future = futures.get(i);
Object result = future.get();
results.put(location, clazz.cast(result));
R result = (R) future.get();
results.add(new RemoteResult<>(location, result));
} catch (CancellationException ce) {
T loc = orderedLocations.get(i);
String msg = "Invocation to \"" + loc + "\" for \""
+ method.getMethodName() + "\" timed out";
LOG.error(msg);
IOException ioe = new SubClusterTimeoutException(msg);
exceptions.put(location, ioe);
results.add(new RemoteResult<>(location, ioe));
} catch (ExecutionException ex) {
Throwable cause = ex.getCause();
LOG.debug("Cannot execute {} in {}: {}",
@ -1106,22 +1236,8 @@ public class RouterRpcClient {
m.getName() + ": " + cause.getMessage(), cause);
}
// Response from all servers required, use this error.
if (requireResponse) {
throw ioe;
}
// Store the exceptions
exceptions.put(location, ioe);
}
}
// Throw the exception for the first location if there are no results
if (results.isEmpty()) {
T location = orderedLocations.get(0);
IOException ioe = exceptions.get(location);
if (ioe != null) {
throw ioe;
results.add(new RemoteResult<>(location, ioe));
}
}

View File

@ -92,6 +92,11 @@ public interface RouterRpcMonitor {
*/
void proxyOpRetries();
/**
* Failed to proxy an operation because of no namenodes available.
*/
void proxyOpNoNamenodes();
/**
* If the Router cannot contact the State Store in an operation.
*/

View File

@ -17,6 +17,7 @@
*/
package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT;
@ -29,6 +30,7 @@ import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Array;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collection;
@ -101,6 +103,7 @@ import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;
import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB;
import org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolPB;
import org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB;
import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
@ -111,7 +114,9 @@ import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
import org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException;
import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
@ -130,12 +135,21 @@ import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.hadoop.ipc.RetriableException;
import org.apache.hadoop.ipc.StandbyException;
import org.apache.hadoop.net.NodeBase;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.RefreshUserMappingsProtocol;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.proto.RefreshUserMappingsProtocolProtos;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolPB;
import org.apache.hadoop.security.protocolPB.RefreshUserMappingsProtocolServerSideTranslatorPB;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.service.AbstractService;
import org.apache.hadoop.tools.GetUserMappingsProtocol;
import org.apache.hadoop.tools.proto.GetUserMappingsProtocolProtos;
import org.apache.hadoop.tools.protocolPB.GetUserMappingsProtocolPB;
import org.apache.hadoop.tools.protocolPB.GetUserMappingsProtocolServerSideTranslatorPB;
import org.apache.hadoop.util.ReflectionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -151,8 +165,8 @@ import com.google.protobuf.BlockingService;
* the requests to the active
* {@link org.apache.hadoop.hdfs.server.namenode.NameNode NameNode}.
*/
public class RouterRpcServer extends AbstractService
implements ClientProtocol, NamenodeProtocol {
public class RouterRpcServer extends AbstractService implements ClientProtocol,
NamenodeProtocol, RefreshUserMappingsProtocol, GetUserMappingsProtocol {
private static final Logger LOG =
LoggerFactory.getLogger(RouterRpcServer.class);
@ -175,6 +189,9 @@ public class RouterRpcServer extends AbstractService
/** Monitor metrics for the RPC calls. */
private final RouterRpcMonitor rpcMonitor;
/** If we use authentication for the connections. */
private final boolean serviceAuthEnabled;
/** Interface to identify the active NN for a nameservice or blockpool ID. */
private final ActiveNamenodeResolver namenodeResolver;
@ -192,6 +209,13 @@ public class RouterRpcServer extends AbstractService
private final RouterNamenodeProtocol nnProto;
/** ClientProtocol calls. */
private final RouterClientProtocol clientProto;
/** Other protocol calls. */
private final RouterUserProtocol routerProto;
/** Router security manager to handle token operations. */
private RouterSecurityManager securityManager = null;
/** Super user credentials that a thread may use. */
private static final ThreadLocal<UserGroupInformation> CUR_USER =
new ThreadLocal<>();
/**
* Construct a router RPC server.
@ -243,6 +267,18 @@ public class RouterRpcServer extends AbstractService
BlockingService nnPbService = NamenodeProtocolService
.newReflectiveBlockingService(namenodeProtocolXlator);
RefreshUserMappingsProtocolServerSideTranslatorPB refreshUserMappingXlator =
new RefreshUserMappingsProtocolServerSideTranslatorPB(this);
BlockingService refreshUserMappingService =
RefreshUserMappingsProtocolProtos.RefreshUserMappingsProtocolService.
newReflectiveBlockingService(refreshUserMappingXlator);
GetUserMappingsProtocolServerSideTranslatorPB getUserMappingXlator =
new GetUserMappingsProtocolServerSideTranslatorPB(this);
BlockingService getUserMappingService =
GetUserMappingsProtocolProtos.GetUserMappingsProtocolService.
newReflectiveBlockingService(getUserMappingXlator);
InetSocketAddress confRpcAddress = conf.getSocketAddr(
RBFConfigKeys.DFS_ROUTER_RPC_BIND_HOST_KEY,
RBFConfigKeys.DFS_ROUTER_RPC_ADDRESS_KEY,
@ -251,6 +287,9 @@ public class RouterRpcServer extends AbstractService
LOG.info("RPC server binding to {} with {} handlers for Router {}",
confRpcAddress, handlerCount, this.router.getRouterId());
// Create security manager
this.securityManager = new RouterSecurityManager(this.conf);
this.rpcServer = new RPC.Builder(this.conf)
.setProtocol(ClientNamenodeProtocolPB.class)
.setInstance(clientNNPbService)
@ -260,11 +299,23 @@ public class RouterRpcServer extends AbstractService
.setnumReaders(readerCount)
.setQueueSizePerHandler(handlerQueueSize)
.setVerbose(false)
.setSecretManager(this.securityManager.getSecretManager())
.build();
// Add all the RPC protocols that the Router implements
DFSUtil.addPBProtocol(
conf, NamenodeProtocolPB.class, nnPbService, this.rpcServer);
DFSUtil.addPBProtocol(conf, RefreshUserMappingsProtocolPB.class,
refreshUserMappingService, this.rpcServer);
DFSUtil.addPBProtocol(conf, GetUserMappingsProtocolPB.class,
getUserMappingService, this.rpcServer);
// Set service-level authorization security policy
this.serviceAuthEnabled = conf.getBoolean(
HADOOP_SECURITY_AUTHORIZATION, false);
if (this.serviceAuthEnabled) {
rpcServer.refreshServiceAcl(conf, new RouterPolicyProvider());
}
// We don't want the server to log the full stack trace for some exceptions
this.rpcServer.addTerseExceptions(
@ -275,7 +326,9 @@ public class RouterRpcServer extends AbstractService
AccessControlException.class,
LeaseExpiredException.class,
NotReplicatedYetException.class,
IOException.class);
IOException.class,
ConnectException.class,
RetriableException.class);
this.rpcServer.addSuppressedLoggingExceptions(
StandbyException.class);
@ -285,12 +338,17 @@ public class RouterRpcServer extends AbstractService
this.rpcAddress = new InetSocketAddress(
confRpcAddress.getHostName(), listenAddress.getPort());
// Create metrics monitor
Class<? extends RouterRpcMonitor> rpcMonitorClass = this.conf.getClass(
RBFConfigKeys.DFS_ROUTER_METRICS_CLASS,
RBFConfigKeys.DFS_ROUTER_METRICS_CLASS_DEFAULT,
RouterRpcMonitor.class);
this.rpcMonitor = ReflectionUtils.newInstance(rpcMonitorClass, conf);
if (conf.getBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE,
RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE_DEFAULT)) {
// Create metrics monitor
Class<? extends RouterRpcMonitor> rpcMonitorClass = this.conf.getClass(
RBFConfigKeys.DFS_ROUTER_METRICS_CLASS,
RBFConfigKeys.DFS_ROUTER_METRICS_CLASS_DEFAULT,
RouterRpcMonitor.class);
this.rpcMonitor = ReflectionUtils.newInstance(rpcMonitorClass, conf);
} else {
this.rpcMonitor = null;
}
// Create the client
this.rpcClient = new RouterRpcClient(this.conf, this.router,
@ -300,6 +358,7 @@ public class RouterRpcServer extends AbstractService
this.quotaCall = new Quota(this.router, this);
this.nnProto = new RouterNamenodeProtocol(this);
this.clientProto = new RouterClientProtocol(conf, this);
this.routerProto = new RouterUserProtocol(this);
}
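The constructor changes above add a security manager, register the user-mapping protocols, honor service-level authorization, and make the RPC monitor optional. A sketch of the two configuration toggles involved; the key constants appear in the hunks, the boolean values are examples only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

public final class RouterServerConfSketch {
  private RouterServerConfSketch() {}

  public static Configuration build() {
    Configuration conf = new Configuration();
    // Skip creating the RPC monitor when metrics are not wanted.
    conf.setBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE, false);
    // Enable service-level authorization so the RouterPolicyProvider ACLs apply.
    conf.setBoolean(
        CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, true);
    return conf;
  }
}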
@Override
@ -307,7 +366,7 @@ public class RouterRpcServer extends AbstractService
this.conf = configuration;
if (this.rpcMonitor == null) {
LOG.error("Cannot instantiate Router RPC metrics class");
LOG.info("Do not start Router RPC metrics");
} else {
this.rpcMonitor.init(this.conf, this, this.router.getStateStore());
}
@ -332,9 +391,21 @@ public class RouterRpcServer extends AbstractService
if (rpcMonitor != null) {
this.rpcMonitor.close();
}
if (securityManager != null) {
this.securityManager.stop();
}
super.serviceStop();
}
/**
* Get the RPC security manager.
*
* @return RPC security manager.
*/
public RouterSecurityManager getRouterSecurityManager() {
return this.securityManager;
}
/**
* Get the RPC client to the Namenode.
*
@ -440,17 +511,26 @@ public class RouterRpcServer extends AbstractService
// Store the category of the operation category for this thread
opCategory.set(op);
// We allow unchecked and read operations
// We allow unchecked and read operations to try, fail later
if (op == OperationCategory.UNCHECKED || op == OperationCategory.READ) {
return;
}
checkSafeMode();
}
/**
* Check if the Router is in safe mode.
* @throws StandbyException If the Router is in safe mode and cannot serve
* client requests.
*/
private void checkSafeMode() throws StandbyException {
RouterSafemodeService safemodeService = router.getSafemodeService();
if (safemodeService != null && safemodeService.isInSafeMode()) {
// Throw standby exception, router is not available
if (rpcMonitor != null) {
rpcMonitor.routerFailureSafemode();
}
OperationCategory op = opCategory.get();
throw new StandbyException("Router " + router.getRouterId() +
" is in safe mode and cannot handle " + op + " requests");
}
@ -467,6 +547,29 @@ public class RouterRpcServer extends AbstractService
return methodName;
}
/**
 * Invokes the method in the default namespace; if the default namespace is
 * not available, it is invoked in the first available namespace.
 * @param <T> expected return type.
 * @param method the remote method.
 * @param clazz the class of the expected return type.
 * @return the response received after invoking the method.
 * @throws IOException If the invocation fails.
*/
<T> T invokeAtAvailableNs(RemoteMethod method, Class<T> clazz)
throws IOException {
String nsId = subclusterResolver.getDefaultNamespace();
if (!nsId.isEmpty()) {
return rpcClient.invokeSingle(nsId, method, clazz);
}
// If default Ns is not present return result from first namespace.
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
if (nss.isEmpty()) {
throw new IOException("No namespace available.");
}
nsId = nss.iterator().next().getNameserviceId();
return rpcClient.invokeSingle(nsId, method, clazz);
}
@Override // ClientProtocol
public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
throws IOException {
@ -507,6 +610,7 @@ public class RouterRpcServer extends AbstractService
replication, blockSize, supportedVersions, ecPolicyName, storagePolicy);
}
/**
 * Get the location to create a file. It checks if the file already exists
* in one of the locations.
@ -515,10 +619,24 @@ public class RouterRpcServer extends AbstractService
* @return The remote location for this file.
* @throws IOException If the file has no creation location.
*/
RemoteLocation getCreateLocation(final String src)
RemoteLocation getCreateLocation(final String src) throws IOException {
final List<RemoteLocation> locations = getLocationsForPath(src, true);
return getCreateLocation(src, locations);
}
/**
 * Get the location to create a file. It checks if the file already exists
* in one of the locations.
*
* @param src Path of the file to check.
* @param locations Prefetched locations for the file.
* @return The remote location for this file.
* @throws IOException If the file has no creation location.
*/
RemoteLocation getCreateLocation(
final String src, final List<RemoteLocation> locations)
throws IOException {
final List<RemoteLocation> locations = getLocationsForPath(src, true);
if (locations == null || locations.isEmpty()) {
throw new IOException("Cannot get locations to create " + src);
}
@ -526,28 +644,11 @@ public class RouterRpcServer extends AbstractService
RemoteLocation createLocation = locations.get(0);
if (locations.size() > 1) {
try {
// Check if this file already exists in other subclusters
LocatedBlocks existingLocation = getBlockLocations(src, 0, 1);
RemoteLocation existingLocation = getExistingLocation(src, locations);
// Forward to the existing location and let the NN handle the error
if (existingLocation != null) {
// Forward to the existing location and let the NN handle the error
LocatedBlock existingLocationLastLocatedBlock =
existingLocation.getLastLocatedBlock();
if (existingLocationLastLocatedBlock == null) {
// The block has no blocks yet, check for the meta data
for (RemoteLocation location : locations) {
RemoteMethod method = new RemoteMethod("getFileInfo",
new Class<?>[] {String.class}, new RemoteParam());
if (rpcClient.invokeSingle(location, method) != null) {
createLocation = location;
break;
}
}
} else {
ExtendedBlock existingLocationLastBlock =
existingLocationLastLocatedBlock.getBlock();
String blockPoolId = existingLocationLastBlock.getBlockPoolId();
createLocation = getLocationForPath(src, true, blockPoolId);
}
LOG.debug("{} already exists in {}.", src, existingLocation);
createLocation = existingLocation;
}
} catch (FileNotFoundException fne) {
// Ignore if the file is not found
@ -556,6 +657,27 @@ public class RouterRpcServer extends AbstractService
return createLocation;
}
/**
* Gets the remote location where the file exists.
 * @param src the name of the file.
 * @param locations all the remote locations.
 * @return the remote location of the file if it exists, else null.
 * @throws IOException If the remote invocations fail.
*/
private RemoteLocation getExistingLocation(String src,
List<RemoteLocation> locations) throws IOException {
RemoteMethod method = new RemoteMethod("getFileInfo",
new Class<?>[] {String.class}, new RemoteParam());
Map<RemoteLocation, HdfsFileStatus> results = rpcClient.invokeConcurrent(
locations, method, false, false, HdfsFileStatus.class);
for (RemoteLocation loc : locations) {
if (results.get(loc) != null) {
return loc;
}
}
return null;
}
@Override // ClientProtocol
public LastBlockWithStatus append(String src, final String clientName,
final EnumSetWritable<CreateFlag> flag) throws IOException {
@ -898,12 +1020,12 @@ public class RouterRpcServer extends AbstractService
return clientProto.getLinkTarget(path);
}
@Override // Client Protocol
@Override // ClientProtocol
public void allowSnapshot(String snapshotRoot) throws IOException {
clientProto.allowSnapshot(snapshotRoot);
}
@Override // Client Protocol
@Override // ClientProtocol
public void disallowSnapshot(String snapshot) throws IOException {
clientProto.disallowSnapshot(snapshot);
}
@ -914,7 +1036,7 @@ public class RouterRpcServer extends AbstractService
clientProto.renameSnapshot(snapshotRoot, snapshotOldName, snapshotNewName);
}
@Override // Client Protocol
@Override // ClientProtocol
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
throws IOException {
return clientProto.getSnapshottableDirListing();
@ -1341,7 +1463,8 @@ public class RouterRpcServer extends AbstractService
* Get the possible locations of a path in the federated cluster.
*
* @param path Path to check.
* @param failIfLocked Fail the request if locked (top mount point).
* @param failIfLocked Fail the request if there is any mount point under
* the path.
* @param needQuotaVerify If need to do the quota verification.
* @return Prioritized list of locations in the federated cluster.
* @throws IOException If the location for this path cannot be determined.
@ -1349,6 +1472,27 @@ public class RouterRpcServer extends AbstractService
protected List<RemoteLocation> getLocationsForPath(String path,
boolean failIfLocked, boolean needQuotaVerify) throws IOException {
try {
if (failIfLocked) {
// check if there is any mount point under the path
final List<String> mountPoints =
this.subclusterResolver.getMountPoints(path);
if (mountPoints != null) {
StringBuilder sb = new StringBuilder();
sb.append("The operation is not allowed because ");
if (mountPoints.isEmpty()) {
sb.append("the path: ")
.append(path)
.append(" is a mount point");
} else {
sb.append("there are mount points: ")
.append(String.join(",", mountPoints))
.append(" under the path: ")
.append(path);
}
throw new AccessControlException(sb.toString());
}
}
// Check the location for this path
final PathLocation location =
this.subclusterResolver.getDestinationForPath(path);
@ -1391,6 +1535,9 @@ public class RouterRpcServer extends AbstractService
if (this.rpcMonitor != null) {
this.rpcMonitor.routerFailureStateStore();
}
if (ioe instanceof StateStoreUnavailableException) {
checkSafeMode();
}
throw ioe;
}
}
@ -1422,11 +1569,26 @@ public class RouterRpcServer extends AbstractService
* @return Remote user group information.
* @throws IOException If we cannot get the user information.
*/
static UserGroupInformation getRemoteUser() throws IOException {
UserGroupInformation ugi = Server.getRemoteUser();
public static UserGroupInformation getRemoteUser() throws IOException {
UserGroupInformation ugi = CUR_USER.get();
ugi = (ugi != null) ? ugi : Server.getRemoteUser();
return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
}
/**
* Set super user credentials if needed.
*/
static void setCurrentUser(UserGroupInformation ugi) {
CUR_USER.set(ugi);
}
/**
* Reset to discard super user credentials.
*/
static void resetCurrentUser() {
CUR_USER.set(null);
}
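setCurrentUser()/resetCurrentUser() let a thread temporarily act with superuser credentials without wrapping the call in UserGroupInformation.doAs; getRemoteUser() prefers the thread-local when it is set. A sketch of the expected set/try/finally usage, mirroring how RouterWebHdfsMethods uses it further below; the PrivilegedCall helper is hypothetical and only exists to keep the example self-contained:

package org.apache.hadoop.hdfs.server.federation.router;

import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch of the thread-local superuser pattern (same package for access). */
final class CurrentUserSketch {
  private CurrentUserSketch() {}

  static <T> T runAsLoginUser(PrivilegedCall<T> call) throws IOException {
    UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
    RouterRpcServer.setCurrentUser(loginUser);
    try {
      return call.run();
    } finally {
      // Always restore the remote caller for the rest of the request.
      RouterRpcServer.resetCurrentUser();
    }
  }

  /** Hypothetical functional interface for the privileged body. */
  interface PrivilegedCall<T> {
    T run() throws IOException;
  }
}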
/**
* Merge the outputs from multiple namespaces.
*
@ -1435,14 +1597,16 @@ public class RouterRpcServer extends AbstractService
* @param clazz Class of the values.
* @return Array with the outputs.
*/
protected static <T> T[] merge(
static <T> T[] merge(
Map<FederationNamespaceInfo, T[]> map, Class<T> clazz) {
// Put all results into a set to avoid repeats
Set<T> ret = new LinkedHashSet<>();
for (T[] values : map.values()) {
for (T val : values) {
ret.add(val);
if (values != null) {
for (T val : values) {
ret.add(val);
}
}
}
@ -1456,7 +1620,7 @@ public class RouterRpcServer extends AbstractService
* @param clazz Class of the values.
* @return Array with the values in set.
*/
private static <T> T[] toArray(Collection<T> set, Class<T> clazz) {
static <T> T[] toArray(Collection<T> set, Class<T> clazz) {
@SuppressWarnings("unchecked")
T[] combinedData = (T[]) Array.newInstance(clazz, set.size());
combinedData = set.toArray(combinedData);
@ -1471,6 +1635,15 @@ public class RouterRpcServer extends AbstractService
return this.quotaCall;
}
/**
* Get ClientProtocol module implementation.
* @return ClientProtocol implementation
*/
@VisibleForTesting
public RouterClientProtocol getClientProtocolModule() {
return this.clientProto;
}
/**
* Get RPC metrics info.
* @return The instance of FederationRPCMetrics.
@ -1478,4 +1651,84 @@ public class RouterRpcServer extends AbstractService
public FederationRPCMetrics getRPCMetrics() {
return this.rpcMonitor.getRPCMetrics();
}
}
/**
* Check if a path should be in all subclusters.
*
* @param path Path to check.
* @return If a path should be in all subclusters.
*/
boolean isPathAll(final String path) {
if (subclusterResolver instanceof MountTableResolver) {
try {
MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
MountTable entry = mountTable.getMountPoint(path);
if (entry != null) {
return entry.isAll();
}
} catch (IOException e) {
LOG.error("Cannot get mount point", e);
}
}
return false;
}
/**
* Check if a path supports failed subclusters.
*
* @param path Path to check.
* @return If a path should support failed subclusters.
*/
boolean isPathFaultTolerant(final String path) {
if (subclusterResolver instanceof MountTableResolver) {
try {
MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
MountTable entry = mountTable.getMountPoint(path);
if (entry != null) {
return entry.isFaultTolerant();
}
} catch (IOException e) {
LOG.error("Cannot get mount point", e);
}
}
return false;
}
/**
 * Check if the call needs to be invoked in all the locations. The call should
 * be invoked in all locations when the mount entry order is HASH_ALL, RANDOM,
 * or SPACE, or when the source path is itself a mount entry.
 * @param path The path on which the operation needs to be invoked.
 * @return true if the call should be invoked in all locations.
 * @throws IOException If the mount points cannot be resolved.
*/
boolean isInvokeConcurrent(final String path) throws IOException {
if (subclusterResolver instanceof MountTableResolver) {
MountTableResolver mountTableResolver =
(MountTableResolver) subclusterResolver;
List<String> mountPoints = mountTableResolver.getMountPoints(path);
// If this is a mount point, we need to invoke everywhere.
if (mountPoints != null) {
return true;
}
return isPathAll(path);
}
return false;
}
@Override
public void refreshUserToGroupsMappings() throws IOException {
routerProto.refreshUserToGroupsMappings();
}
@Override
public void refreshSuperUserGroupsConfiguration() throws IOException {
routerProto.refreshSuperUserGroupsConfiguration();
}
@Override
public String[] getGroupsForUser(String user) throws IOException {
return routerProto.getGroupsForUser(user);
}
}

View File

@ -0,0 +1,208 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import java.io.IOException;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
/**
* Module that implements all the RPC calls related to snapshots in
* {@link ClientProtocol} in the {@link RouterRpcServer}.
*/
public class RouterSnapshot {
/** RPC server to receive client calls. */
private final RouterRpcServer rpcServer;
/** RPC clients to connect to the Namenodes. */
private final RouterRpcClient rpcClient;
/** Find generic locations. */
private final ActiveNamenodeResolver namenodeResolver;
public RouterSnapshot(RouterRpcServer server) {
this.rpcServer = server;
this.rpcClient = this.rpcServer.getRPCClient();
this.namenodeResolver = rpcServer.getNamenodeResolver();
}
public void allowSnapshot(String snapshotRoot) throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod method = new RemoteMethod("allowSnapshot",
new Class<?>[] {String.class}, new RemoteParam());
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public void disallowSnapshot(String snapshotRoot) throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod method = new RemoteMethod("disallowSnapshot",
new Class<?>[] {String.class}, new RemoteParam());
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public String createSnapshot(String snapshotRoot, String snapshotName)
throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod method = new RemoteMethod("createSnapshot",
new Class<?>[] {String.class, String.class}, new RemoteParam(),
snapshotName);
String result = null;
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
Map<RemoteLocation, String> results = rpcClient.invokeConcurrent(
locations, method, String.class);
Entry<RemoteLocation, String> firstelement =
results.entrySet().iterator().next();
RemoteLocation loc = firstelement.getKey();
result = firstelement.getValue();
result = result.replaceFirst(loc.getDest(), loc.getSrc());
} else {
result = rpcClient.invokeSequential(
locations, method, String.class, null);
RemoteLocation loc = locations.get(0);
result = result.replaceFirst(loc.getDest(), loc.getSrc());
}
return result;
}
public void deleteSnapshot(String snapshotRoot, String snapshotName)
throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod method = new RemoteMethod("deleteSnapshot",
new Class<?>[] {String.class, String.class},
new RemoteParam(), snapshotName);
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public void renameSnapshot(String snapshotRoot, String oldSnapshotName,
String newSnapshot) throws IOException {
rpcServer.checkOperation(OperationCategory.WRITE);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod method = new RemoteMethod("renameSnapshot",
new Class<?>[] {String.class, String.class, String.class},
new RemoteParam(), oldSnapshotName, newSnapshot);
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ);
RemoteMethod method = new RemoteMethod("getSnapshottableDirListing");
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
Map<FederationNamespaceInfo, SnapshottableDirectoryStatus[]> ret =
rpcClient.invokeConcurrent(
nss, method, true, false, SnapshottableDirectoryStatus[].class);
return RouterRpcServer.merge(ret, SnapshottableDirectoryStatus.class);
}
public SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot,
String earlierSnapshotName, String laterSnapshotName)
throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
RemoteMethod remoteMethod = new RemoteMethod("getSnapshotDiffReport",
new Class<?>[] {String.class, String.class, String.class},
new RemoteParam(), earlierSnapshotName, laterSnapshotName);
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
Map<RemoteLocation, SnapshotDiffReport> ret = rpcClient.invokeConcurrent(
locations, remoteMethod, true, false, SnapshotDiffReport.class);
return ret.values().iterator().next();
} else {
return rpcClient.invokeSequential(
locations, remoteMethod, SnapshotDiffReport.class, null);
}
}
public SnapshotDiffReportListing getSnapshotDiffReportListing(
String snapshotRoot, String earlierSnapshotName, String laterSnapshotName,
byte[] startPath, int index) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ);
final List<RemoteLocation> locations =
rpcServer.getLocationsForPath(snapshotRoot, true, false);
Class<?>[] params = new Class<?>[] {
String.class, String.class, String.class,
byte[].class, int.class};
RemoteMethod remoteMethod = new RemoteMethod(
"getSnapshotDiffReportListing", params,
new RemoteParam(), earlierSnapshotName, laterSnapshotName,
startPath, index);
if (rpcServer.isInvokeConcurrent(snapshotRoot)) {
Map<RemoteLocation, SnapshotDiffReportListing> ret =
rpcClient.invokeConcurrent(locations, remoteMethod, false, false,
SnapshotDiffReportListing.class);
Collection<SnapshotDiffReportListing> listings = ret.values();
SnapshotDiffReportListing listing0 = listings.iterator().next();
return listing0;
} else {
return rpcClient.invokeSequential(
locations, remoteMethod, SnapshotDiffReportListing.class, null);
}
}
}
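With this module in place, the standard snapshot client APIs issued against the Router endpoint are resolved through the mount table and forwarded as shown above. A client-side sketch; the hdfs://router-fs URI and the paths are placeholders, not values from this change:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class SnapshotThroughRouterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder Router endpoint; use the real router nameservice or RPC address.
    FileSystem fs = FileSystem.get(URI.create("hdfs://router-fs"), conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    dfs.allowSnapshot(new Path("/data"));                    // RouterSnapshot.allowSnapshot
    Path snap = fs.createSnapshot(new Path("/data"), "s1");  // RouterSnapshot.createSnapshot
    System.out.println("Created snapshot at " + snap);
  }
}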

View File

@ -0,0 +1,105 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
import java.io.IOException;
import java.util.List;
/**
* Module that implements all the RPC calls in
* {@link org.apache.hadoop.hdfs.protocol.ClientProtocol} related to
* Storage Policy in the {@link RouterRpcServer}.
*/
public class RouterStoragePolicy {
/** RPC server to receive client calls. */
private final RouterRpcServer rpcServer;
/** RPC clients to connect to the Namenodes. */
private final RouterRpcClient rpcClient;
public RouterStoragePolicy(RouterRpcServer server) {
this.rpcServer = server;
this.rpcClient = this.rpcServer.getRPCClient();
}
public void setStoragePolicy(String src, String policyName)
throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
List<RemoteLocation> locations =
rpcServer.getLocationsForPath(src, false, false);
RemoteMethod method = new RemoteMethod("setStoragePolicy",
new Class<?>[] {String.class, String.class},
new RemoteParam(),
policyName);
if (rpcServer.isInvokeConcurrent(src)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public BlockStoragePolicy[] getStoragePolicies() throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ);
RemoteMethod method = new RemoteMethod("getStoragePolicies");
return rpcServer.invokeAtAvailableNs(method, BlockStoragePolicy[].class);
}
public void unsetStoragePolicy(String src) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
List<RemoteLocation> locations =
rpcServer.getLocationsForPath(src, false, false);
RemoteMethod method = new RemoteMethod("unsetStoragePolicy",
new Class<?>[] {String.class},
new RemoteParam());
if (rpcServer.isInvokeConcurrent(src)) {
rpcClient.invokeConcurrent(locations, method);
} else {
rpcClient.invokeSequential(locations, method);
}
}
public BlockStoragePolicy getStoragePolicy(String path)
throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
List<RemoteLocation> locations =
rpcServer.getLocationsForPath(path, false, false);
RemoteMethod method = new RemoteMethod("getStoragePolicy",
new Class<?>[] {String.class},
new RemoteParam());
return (BlockStoragePolicy) rpcClient.invokeSequential(locations, method);
}
public void satisfyStoragePolicy(String path) throws IOException {
rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
List<RemoteLocation> locations =
rpcServer.getLocationsForPath(path, true, false);
RemoteMethod method = new RemoteMethod("satisfyStoragePolicy",
new Class<?>[] {String.class},
new RemoteParam());
rpcClient.invokeSequential(locations, method);
}
}
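Likewise, storage-policy calls made against the Router are handled by this module: mutations go to the resolved locations, while getStoragePolicies() is answered from one available namespace via invokeAtAvailableNs. A client-side sketch with placeholder URI, path, and policy name:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StoragePolicyThroughRouterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder Router endpoint; use the real router nameservice or RPC address.
    FileSystem fs = FileSystem.get(URI.create("hdfs://router-fs"), conf);
    fs.setStoragePolicy(new Path("/data/warm"), "WARM"); // RouterStoragePolicy.setStoragePolicy
    System.out.println(fs.getStoragePolicy(new Path("/data/warm")));
  }
}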

View File

@ -0,0 +1,104 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.merge;
import java.io.IOException;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
import org.apache.hadoop.security.Groups;
import org.apache.hadoop.security.RefreshUserMappingsProtocol;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.ProxyUsers;
import org.apache.hadoop.tools.GetUserMappingsProtocol;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Module that implements all the RPC calls in
 * {@link RefreshUserMappingsProtocol} and {@link GetUserMappingsProtocol} in the
* {@link RouterRpcServer}.
*/
public class RouterUserProtocol
implements RefreshUserMappingsProtocol, GetUserMappingsProtocol {
private static final Logger LOG =
LoggerFactory.getLogger(RouterUserProtocol.class);
/** RPC server to receive client calls. */
private final RouterRpcServer rpcServer;
/** RPC clients to connect to the Namenodes. */
private final RouterRpcClient rpcClient;
private final ActiveNamenodeResolver namenodeResolver;
public RouterUserProtocol(RouterRpcServer server) {
this.rpcServer = server;
this.rpcClient = this.rpcServer.getRPCClient();
this.namenodeResolver = this.rpcServer.getNamenodeResolver();
}
@Override
public void refreshUserToGroupsMappings() throws IOException {
LOG.debug("Refresh user groups mapping in Router.");
rpcServer.checkOperation(OperationCategory.UNCHECKED);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
if (nss.isEmpty()) {
Groups.getUserToGroupsMappingService().refresh();
} else {
RemoteMethod method = new RemoteMethod(RefreshUserMappingsProtocol.class,
"refreshUserToGroupsMappings");
rpcClient.invokeConcurrent(nss, method);
}
}
@Override
public void refreshSuperUserGroupsConfiguration() throws IOException {
LOG.debug("Refresh superuser groups configuration in Router.");
rpcServer.checkOperation(OperationCategory.UNCHECKED);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
if (nss.isEmpty()) {
ProxyUsers.refreshSuperUserGroupsConfiguration();
} else {
RemoteMethod method = new RemoteMethod(RefreshUserMappingsProtocol.class,
"refreshSuperUserGroupsConfiguration");
rpcClient.invokeConcurrent(nss, method);
}
}
@Override
public String[] getGroupsForUser(String user) throws IOException {
LOG.debug("Getting groups for user {}", user);
rpcServer.checkOperation(OperationCategory.UNCHECKED);
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
if (nss.isEmpty()) {
return UserGroupInformation.createRemoteUser(user).getGroupNames();
} else {
RemoteMethod method = new RemoteMethod(GetUserMappingsProtocol.class,
"getGroupsForUser", new Class<?>[] {String.class}, user);
Map<FederationNamespaceInfo, String[]> results =
rpcClient.invokeConcurrent(nss, method, String[].class);
return merge(results, String.class);
}
}
}

View File

@ -19,7 +19,6 @@ package org.apache.hadoop.hdfs.server.federation.router;
import static org.apache.hadoop.util.StringUtils.getTrimmedStringCollection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@ -27,13 +26,10 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.common.JspHelper;
import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
import org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
@ -42,7 +38,6 @@ import javax.ws.rs.core.Response;
import com.sun.jersey.spi.container.ResourceFilters;
import org.apache.hadoop.hdfs.web.JsonUtil;
import org.apache.hadoop.hdfs.web.ParamFilter;
import org.apache.hadoop.hdfs.web.URLConnectionFactory;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
import org.apache.hadoop.hdfs.web.resources.AccessTimeParam;
import org.apache.hadoop.hdfs.web.resources.AclPermissionParam;
@ -91,6 +86,7 @@ import org.apache.hadoop.hdfs.web.resources.XAttrValueParam;
import org.apache.hadoop.ipc.ExternalCall;
import org.apache.hadoop.ipc.RetriableException;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;
@ -99,12 +95,8 @@ import org.slf4j.LoggerFactory;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLDecoder;
import java.security.PrivilegedAction;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
@ -224,7 +216,11 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
case CREATE:
{
final Router router = getRouter();
final URI uri = redirectURI(router, fullpath);
final URI uri = redirectURI(router, ugi, delegation, username,
doAsUser, fullpath, op.getValue(), -1L,
exclDatanodes.getValue(), permission, unmaskedPermission,
overwrite, bufferSize, replication, blockSize, createParent,
createFlagParam);
if (!noredirectParam.getValue()) {
return Response.temporaryRedirect(uri)
.type(MediaType.APPLICATION_OCTET_STREAM).build();
@ -366,6 +362,7 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
}
case GETDELEGATIONTOKEN:
case GET_BLOCK_LOCATIONS:
case GETFILESTATUS:
case LISTSTATUS:
@ -389,104 +386,6 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
}
}
/**
* Get the redirect URI from the Namenode responsible for a path.
* @param router Router to check.
* @param path Path to get location for.
* @return URI returned by the Namenode.
* @throws IOException If it cannot get the redirect URI.
*/
private URI redirectURI(final Router router, final String path)
throws IOException {
// Forward the request to the proper Namenode
final HttpURLConnection conn = forwardRequest(router, path);
try {
conn.setInstanceFollowRedirects(false);
conn.setDoOutput(true);
conn.connect();
// Read the reply from the Namenode
int responseCode = conn.getResponseCode();
if (responseCode != HttpServletResponse.SC_TEMPORARY_REDIRECT) {
LOG.info("We expected a redirection from the Namenode, not {}",
responseCode);
return null;
}
// Extract the redirect location and return it
String redirectLocation = conn.getHeaderField("Location");
try {
// We modify the namenode location and the path
redirectLocation = redirectLocation
.replaceAll("(?<=[?&;])namenoderpcaddress=.*?(?=[&;])",
"namenoderpcaddress=" + router.getRouterId())
.replaceAll("(?<=[/])webhdfs/v1/.*?(?=[?])",
"webhdfs/v1" + path);
return new URI(redirectLocation);
} catch (URISyntaxException e) {
LOG.error("Cannot parse redirect location {}", redirectLocation);
}
} finally {
if (conn != null) {
conn.disconnect();
}
}
return null;
}
/**
* Forwards a request to a subcluster.
* @param router Router to check.
* @param path Path in HDFS.
* @return Reply from the subcluster.
* @throws IOException
*/
private HttpURLConnection forwardRequest(
final Router router, final String path) throws IOException {
final Configuration conf =
(Configuration)getContext().getAttribute(JspHelper.CURRENT_CONF);
URLConnectionFactory connectionFactory =
URLConnectionFactory.newDefaultURLConnectionFactory(conf);
// Find the namespace responsible for a path
final RouterRpcServer rpcServer = getRPCServer(router);
RemoteLocation createLoc = rpcServer.getCreateLocation(path);
String nsId = createLoc.getNameserviceId();
String dest = createLoc.getDest();
ActiveNamenodeResolver nnResolver = router.getNamenodeResolver();
List<? extends FederationNamenodeContext> namenodes =
nnResolver.getNamenodesForNameserviceId(nsId);
// Go over the namenodes responsible for that namespace
for (FederationNamenodeContext namenode : namenodes) {
try {
// Generate the request for the namenode
String nnWebAddress = namenode.getWebAddress();
String[] nnWebAddressSplit = nnWebAddress.split(":");
String host = nnWebAddressSplit[0];
int port = Integer.parseInt(nnWebAddressSplit[1]);
// Avoid double-encoding here
query = URLDecoder.decode(query, "UTF-8");
URI uri = new URI(getScheme(), null, host, port,
reqPath + dest, query, null);
URL url = uri.toURL();
// Send a request to the proper Namenode
final HttpURLConnection conn =
(HttpURLConnection)connectionFactory.openConnection(url);
conn.setRequestMethod(method);
connectionFactory.destroy();
return conn;
} catch (Exception e) {
LOG.error("Cannot redirect request to {}", namenode, e);
}
}
connectionFactory.destroy();
return null;
}
/**
* Get a URI to redirect an operation to.
* @param router Router to check.
@ -526,7 +425,7 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
} else {
// generate a token
final Token<? extends TokenIdentifier> t = generateDelegationToken(
router, ugi, request.getUserPrincipal().getName());
ugi, ugi.getUserName());
delegationQuery = "&delegation=" + t.encodeToUrlString();
}
@ -552,19 +451,17 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
// We need to get the DNs as a privileged user
final RouterRpcServer rpcServer = getRPCServer(router);
UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
RouterRpcServer.setCurrentUser(loginUser);
DatanodeInfo[] dns = loginUser.doAs(
new PrivilegedAction<DatanodeInfo[]>() {
@Override
public DatanodeInfo[] run() {
try {
return rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
} catch (IOException e) {
LOG.error("Cannot get the datanodes from the RPC server", e);
return null;
}
}
});
DatanodeInfo[] dns = null;
try {
dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
} catch (IOException e) {
LOG.error("Cannot get the datanodes from the RPC server", e);
} finally {
// Reset ugi to remote user for remaining operations.
RouterRpcServer.resetCurrentUser();
}
HashSet<Node> excludes = new HashSet<Node>();
if (excludeDatanodes != null) {
@ -646,17 +543,19 @@ public class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
}
/**
* Generate the delegation tokens for this request.
* @param router Router.
* Generate the credentials for this request.
* @param ugi User group information.
* @param renewer Who is asking for the renewal.
* @return The delegation tokens.
* @throws IOException If it cannot create the tokens.
* @return Credentials holding delegation token.
* @throws IOException If it cannot create the credentials.
*/
private Token<? extends TokenIdentifier> generateDelegationToken(
final Router router, final UserGroupInformation ugi,
@Override
public Credentials createCredentials(
final UserGroupInformation ugi,
final String renewer) throws IOException {
throw new UnsupportedOperationException("TODO Generate token for ugi=" +
ugi + " request=" + request);
final Router router = (Router)getContext().getAttribute("name.node");
final Credentials c = RouterSecurityManager.createCredentials(router, ugi,
renewer != null? renewer: ugi.getShortUserName());
return c;
}
}
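A minimal sketch, not part of the patch, of how the Credentials returned by createCredentials() can be reduced to the delegation query fragment that is appended when building the redirect URI; the helper name toDelegationQuery is assumed, and the types are the ones already imported by this class:

  // Sketch (assumed helper): turn the Credentials from createCredentials()
  // into the "&delegation=..." fragment appended to the redirect URI.
  private static String toDelegationQuery(Credentials creds) throws IOException {
    if (creds == null || creds.numberOfTokens() == 0) {
      return "";
    }
    // The Router adds a single token, keyed by the short user name.
    Token<?> token = creds.getAllTokens().iterator().next();
    return "&delegation=" + token.encodeToUrlString();
  }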


@ -0,0 +1,288 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router.security;
import com.google.common.annotations.VisibleForTesting;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
import org.apache.hadoop.hdfs.server.federation.router.Router;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
import org.apache.hadoop.security.token.SecretManager;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.lang.reflect.Constructor;
/**
* Manager to hold underlying delegation token secret manager implementations.
*/
public class RouterSecurityManager {
private static final Logger LOG =
LoggerFactory.getLogger(RouterSecurityManager.class);
private AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
dtSecretManager = null;
public RouterSecurityManager(Configuration conf) {
AuthenticationMethod authMethodConfigured =
SecurityUtil.getAuthenticationMethod(conf);
AuthenticationMethod authMethodToInit =
AuthenticationMethod.KERBEROS;
if (authMethodConfigured.equals(authMethodToInit)) {
this.dtSecretManager = newSecretManager(conf);
}
}
@VisibleForTesting
public RouterSecurityManager(AbstractDelegationTokenSecretManager
<DelegationTokenIdentifier> dtSecretManager) {
this.dtSecretManager = dtSecretManager;
}
/**
* Creates an instance of a SecretManager from the configuration.
*
* @param conf Configuration that defines the secret manager class.
* @return New secret manager.
*/
public static AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
newSecretManager(Configuration conf) {
Class<? extends AbstractDelegationTokenSecretManager> clazz =
conf.getClass(
RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT,
AbstractDelegationTokenSecretManager.class);
AbstractDelegationTokenSecretManager secretManager;
try {
Constructor constructor = clazz.getConstructor(Configuration.class);
secretManager = (AbstractDelegationTokenSecretManager)
constructor.newInstance(conf);
LOG.info("Delegation token secret manager object instantiated");
} catch (ReflectiveOperationException e) {
LOG.error("Could not instantiate: {}", clazz.getSimpleName(),
e.getCause());
return null;
} catch (RuntimeException e) {
LOG.error("RuntimeException to instantiate: {}",
clazz.getSimpleName(), e);
return null;
}
return secretManager;
}
public AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
getSecretManager() {
return this.dtSecretManager;
}
public void stop() {
LOG.info("Stopping security manager");
if(this.dtSecretManager != null) {
this.dtSecretManager.stopThreads();
}
}
private static UserGroupInformation getRemoteUser() throws IOException {
return RouterRpcServer.getRemoteUser();
}
/**
* Returns authentication method used to establish the connection.
* @return AuthenticationMethod used to establish connection.
* @throws IOException
*/
private UserGroupInformation.AuthenticationMethod
getConnectionAuthenticationMethod() throws IOException {
UserGroupInformation ugi = getRemoteUser();
UserGroupInformation.AuthenticationMethod authMethod
= ugi.getAuthenticationMethod();
if (authMethod == UserGroupInformation.AuthenticationMethod.PROXY) {
authMethod = ugi.getRealUser().getAuthenticationMethod();
}
return authMethod;
}
/**
* Check whether the connection's authentication method permits
* delegation token operations.
* @return true if delegation token operation is allowed
*/
private boolean isAllowedDelegationTokenOp() throws IOException {
AuthenticationMethod authMethod = getConnectionAuthenticationMethod();
if (UserGroupInformation.isSecurityEnabled()
&& (authMethod != AuthenticationMethod.KERBEROS)
&& (authMethod != AuthenticationMethod.KERBEROS_SSL)
&& (authMethod != AuthenticationMethod.CERTIFICATE)) {
return false;
}
return true;
}
/**
* Generate a delegation token for the current remote user.
* @param renewer Renewer information
* @return delegation token
* @throws IOException on error
*/
public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
throws IOException {
LOG.debug("Generate delegation token with renewer " + renewer);
final String operationName = "getDelegationToken";
boolean success = false;
String tokenId = "";
Token<DelegationTokenIdentifier> token;
try {
if (!isAllowedDelegationTokenOp()) {
throw new IOException(
"Delegation Token can be issued only " +
"with kerberos or web authentication");
}
if (dtSecretManager == null || !dtSecretManager.isRunning()) {
LOG.warn("trying to get DT with no secret manager running");
return null;
}
UserGroupInformation ugi = getRemoteUser();
String user = ugi.getUserName();
Text owner = new Text(user);
Text realUser = null;
if (ugi.getRealUser() != null) {
realUser = new Text(ugi.getRealUser().getUserName());
}
DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(owner,
renewer, realUser);
token = new Token<DelegationTokenIdentifier>(
dtId, dtSecretManager);
tokenId = dtId.toStringStable();
success = true;
} finally {
logAuditEvent(success, operationName, tokenId);
}
return token;
}
/**
* @param token token to renew
* @return new expiryTime of the token
* @throws SecretManager.InvalidToken if {@code token} is invalid
* @throws IOException on errors
*/
public long renewDelegationToken(Token<DelegationTokenIdentifier> token)
throws SecretManager.InvalidToken, IOException {
LOG.debug("Renew delegation token");
final String operationName = "renewDelegationToken";
boolean success = false;
String tokenId = "";
long expiryTime;
try {
if (!isAllowedDelegationTokenOp()) {
throw new IOException(
"Delegation Token can be renewed only " +
"with kerberos or web authentication");
}
String renewer = getRemoteUser().getShortUserName();
expiryTime = dtSecretManager.renewToken(token, renewer);
final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
tokenId = id.toStringStable();
success = true;
} catch (AccessControlException ace) {
final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
tokenId = id.toStringStable();
throw ace;
} finally {
logAuditEvent(success, operationName, tokenId);
}
return expiryTime;
}
/**
* @param token token to cancel
* @throws IOException on error
*/
public void cancelDelegationToken(Token<DelegationTokenIdentifier> token)
throws IOException {
LOG.debug("Cancel delegation token");
final String operationName = "cancelDelegationToken";
boolean success = false;
String tokenId = "";
try {
String canceller = getRemoteUser().getUserName();
LOG.info("Cancel request by " + canceller);
DelegationTokenIdentifier id =
dtSecretManager.cancelToken(token, canceller);
tokenId = id.toStringStable();
success = true;
} catch (AccessControlException ace) {
final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
tokenId = id.toStringStable();
throw ace;
} finally {
logAuditEvent(success, operationName, tokenId);
}
}
/**
* A utility method for creating credentials.
* Used by WebHDFS to return a URL-encoded token.
*/
public static Credentials createCredentials(
final Router router, final UserGroupInformation ugi,
final String renewer) throws IOException {
final Token<DelegationTokenIdentifier> token =
router.getRpcServer().getDelegationToken(new Text(renewer));
if (token == null) {
return null;
}
final InetSocketAddress addr = router.getRpcServerAddress();
SecurityUtil.setTokenService(token, addr);
final Credentials c = new Credentials();
c.addToken(new Text(ugi.getShortUserName()), token);
return c;
}
/**
* Delegation token verification.
* Used by WebHDFS to verify a URL-encoded token.
*/
public void verifyToken(DelegationTokenIdentifier identifier,
byte[] password) throws SecretManager.InvalidToken {
this.dtSecretManager.verifyToken(identifier, password);
}
/**
* Log status of delegation token related operation.
* Extend in future to use audit logger instead of local logging.
*/
void logAuditEvent(boolean succeeded, String cmd, String tokenId)
throws IOException {
LOG.debug(
"Operation:" + cmd +
" Status:" + succeeded +
" TokenId:" + tokenId);
}
}
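A usage sketch, not part of the patch, of the token lifecycle exposed by this class; it assumes an insecure (test-style) setup so that isAllowedDelegationTokenOp() passes, and that the configured secret manager is running:

  // Sketch: issue, renew and cancel a delegation token.
  AbstractDelegationTokenSecretManager<DelegationTokenIdentifier> secretManager =
      RouterSecurityManager.newSecretManager(conf);
  RouterSecurityManager securityManager = new RouterSecurityManager(secretManager);
  // Use the caller as renewer so the same user can renew the token later.
  String renewer = UserGroupInformation.getCurrentUser().getShortUserName();
  Token<DelegationTokenIdentifier> token =
      securityManager.getDelegationToken(new Text(renewer));
  long expiryTime = securityManager.renewDelegationToken(token);
  securityManager.cancelDelegationToken(token);
  securityManager.stop();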


@ -0,0 +1,28 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Includes router security manager and token store implementations.
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
package org.apache.hadoop.hdfs.server.federation.router.security;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;


@ -0,0 +1,56 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.router.security.token;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
/**
* Zookeeper based router delegation token store implementation.
*/
public class ZKDelegationTokenSecretManagerImpl extends
ZKDelegationTokenSecretManager<AbstractDelegationTokenIdentifier> {
private static final Logger LOG =
LoggerFactory.getLogger(ZKDelegationTokenSecretManagerImpl.class);
private Configuration conf = null;
public ZKDelegationTokenSecretManagerImpl(Configuration conf) {
super(conf);
this.conf = conf;
try {
super.startThreads();
} catch (IOException e) {
LOG.error("Error starting threads for zkDelegationTokens", e);
}
LOG.info("Zookeeper delegation token secret manager instantiated");
}
@Override
public DelegationTokenIdentifier createIdentifier() {
return new DelegationTokenIdentifier();
}
}
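A configuration sketch, not part of the patch, that selects the ZooKeeper-backed store through the driver-class key read by RouterSecurityManager.newSecretManager(); the ZooKeeper connection settings required by the parent ZKDelegationTokenSecretManager are assumed to be present in the same configuration:

  // Sketch: pick the ZooKeeper-backed secret manager as the token driver.
  Configuration conf = new Configuration();
  conf.setClass(RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
      ZKDelegationTokenSecretManagerImpl.class,
      AbstractDelegationTokenSecretManager.class);
  // On a Kerberos-secured Router, RouterSecurityManager instantiates it.
  RouterSecurityManager securityManager = new RouterSecurityManager(conf);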


@ -0,0 +1,29 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Includes implementations of token secret managers.
* Implementations should extend {@link AbstractDelegationTokenSecretManager}.
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
package org.apache.hadoop.hdfs.server.federation.router.security.token;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;


@ -20,8 +20,11 @@ package org.apache.hadoop.hdfs.server.federation.store;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
import org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherService;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Management API for the HDFS mount table information stored in
@ -42,8 +45,29 @@ import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
@InterfaceStability.Evolving
public abstract class MountTableStore extends CachedRecordStore<MountTable>
implements MountTableManager {
private static final Logger LOG =
LoggerFactory.getLogger(MountTableStore.class);
private MountTableRefresherService refreshService;
public MountTableStore(StateStoreDriver driver) {
super(MountTable.class, driver);
}
public void setRefreshService(MountTableRefresherService refreshService) {
this.refreshService = refreshService;
}
/**
* Update mount table cache of this router as well as all other routers.
*/
protected void updateCacheAllRouters() {
if (refreshService != null) {
try {
refreshService.refresh();
} catch (StateStoreUnavailableException e) {
LOG.error("Cannot refresh mount table: state store not available", e);
}
}
}
}


@ -33,6 +33,7 @@ import javax.management.StandardMBean;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.metrics.NullStateStoreMetrics;
import org.apache.hadoop.hdfs.server.federation.metrics.StateStoreMBean;
import org.apache.hadoop.hdfs.server.federation.metrics.StateStoreMetrics;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
@ -172,19 +173,25 @@ public class StateStoreService extends CompositeService {
this.cacheUpdater = new StateStoreCacheUpdateService(this);
addService(this.cacheUpdater);
// Create metrics for the State Store
this.metrics = StateStoreMetrics.create(conf);
if (conf.getBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE,
RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE_DEFAULT)) {
// Create metrics for the State Store
this.metrics = StateStoreMetrics.create(conf);
// Adding JMX interface
try {
StandardMBean bean = new StandardMBean(metrics, StateStoreMBean.class);
ObjectName registeredObject =
MBeans.register("Router", "StateStore", bean);
LOG.info("Registered StateStoreMBean: {}", registeredObject);
} catch (NotCompliantMBeanException e) {
throw new RuntimeException("Bad StateStoreMBean setup", e);
} catch (MetricsException e) {
LOG.error("Failed to register State Store bean {}", e.getMessage());
// Adding JMX interface
try {
StandardMBean bean = new StandardMBean(metrics, StateStoreMBean.class);
ObjectName registeredObject =
MBeans.register("Router", "StateStore", bean);
LOG.info("Registered StateStoreMBean: {}", registeredObject);
} catch (NotCompliantMBeanException e) {
throw new RuntimeException("Bad StateStoreMBean setup", e);
} catch (MetricsException e) {
LOG.error("Failed to register State Store bean {}", e.getMessage());
}
} else {
LOG.info("State Store metrics not enabled");
this.metrics = new NullStateStoreMetrics();
}
super.serviceInit(this.conf);
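A configuration sketch, not part of the patch, for the new toggle: with metrics disabled, serviceInit() falls back to the NullStateStoreMetrics stub instead of creating and registering the StateStore MBean:

  // Sketch: turn off State Store metrics and the JMX registration.
  Configuration conf = new Configuration();
  conf.setBoolean(RBFConfigKeys.DFS_ROUTER_METRICS_ENABLE, false);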


@ -17,6 +17,9 @@
*/
package org.apache.hadoop.hdfs.server.federation.store;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;
@ -110,4 +113,27 @@ public final class StateStoreUtils {
}
return matchingList;
}
/**
* Returns the address in host:port form, or an empty string if the address
* is null.
*
* @param address Address to convert.
* @return host:port string.
*/
public static String getHostPortString(InetSocketAddress address) {
if (null == address) {
return "";
}
String hostName = address.getHostName();
if (hostName.equals("0.0.0.0")) {
try {
hostName = InetAddress.getLocalHost().getHostName();
} catch (UnknownHostException e) {
LOG.error("Failed to get local host name", e);
return "";
}
}
return hostName + ":" + address.getPort();
}
}
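A short usage sketch, not part of the patch: when a Router binds to the wildcard address, getHostPortString() substitutes the local host name so the published address is reachable from other routers; the port below is only an example:

  // Sketch: convert a wildcard bind address into a publishable host:port.
  InetSocketAddress addr = new InetSocketAddress("0.0.0.0", 8111);
  String hostPort = StateStoreUtils.getHostPortString(addr);
  // e.g. "router-1.example.com:8111" when the local host is router-1.example.com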


@ -31,8 +31,12 @@ import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@ -62,12 +66,14 @@ public class MountTableStoreImpl extends MountTableStore {
if (pc != null) {
pc.checkPermission(mountTable, FsAction.WRITE);
}
mountTable.validate();
}
boolean status = getDriver().put(mountTable, false, true);
AddMountTableEntryResponse response =
AddMountTableEntryResponse.newInstance();
response.setStatus(status);
updateCacheAllRouters();
return response;
}
@ -80,12 +86,14 @@ public class MountTableStoreImpl extends MountTableStore {
if (pc != null) {
pc.checkPermission(mountTable, FsAction.WRITE);
}
mountTable.validate();
}
boolean status = getDriver().put(mountTable, true, true);
UpdateMountTableEntryResponse response =
UpdateMountTableEntryResponse.newInstance();
response.setStatus(status);
updateCacheAllRouters();
return response;
}
@ -110,6 +118,7 @@ public class MountTableStoreImpl extends MountTableStore {
RemoveMountTableEntryResponse response =
RemoveMountTableEntryResponse.newInstance();
response.setStatus(status);
updateCacheAllRouters();
return response;
}
@ -151,4 +160,22 @@ public class MountTableStoreImpl extends MountTableStore {
response.setTimestamp(Time.now());
return response;
}
@Override
public RefreshMountTableEntriesResponse refreshMountTableEntries(
RefreshMountTableEntriesRequest request) throws IOException {
// Because this refresh is done through the admin API, it should always be a
// force refresh.
boolean result = loadCache(true);
RefreshMountTableEntriesResponse response =
RefreshMountTableEntriesResponse.newInstance();
response.setResult(result);
return response;
}
@Override
public GetDestinationResponse getDestination(
GetDestinationRequest request) throws IOException {
throw new UnsupportedOperationException("Requires the RouterRpcServer");
}
}
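A client-side sketch, not part of the patch, of forcing a mount table reload through the new refreshMountTableEntries call; mountTableManager is an assumed handle to a MountTableManager, for example obtained through the Router admin client:

  // Sketch: ask the store to reload the mount table from the State Store.
  RefreshMountTableEntriesRequest refreshRequest =
      RefreshMountTableEntriesRequest.newInstance();
  RefreshMountTableEntriesResponse refreshResponse =
      mountTableManager.refreshMountTableEntries(refreshRequest);
  boolean reloaded = refreshResponse.getResult();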


@ -0,0 +1,57 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol;
import java.io.IOException;
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
/**
* API request for getting the destination subcluster of a file.
*/
public abstract class GetDestinationRequest {
public static GetDestinationRequest newInstance()
throws IOException {
return StateStoreSerializer
.newRecord(GetDestinationRequest.class);
}
public static GetDestinationRequest newInstance(String srcPath)
throws IOException {
GetDestinationRequest request = newInstance();
request.setSrcPath(srcPath);
return request;
}
public static GetDestinationRequest newInstance(Path srcPath)
throws IOException {
return newInstance(srcPath.toString());
}
@Public
@Unstable
public abstract String getSrcPath();
@Public
@Unstable
public abstract void setSrcPath(String srcPath);
}


@ -0,0 +1,59 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol;
import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
/**
* API response for getting the destination subcluster of a file.
*/
public abstract class GetDestinationResponse {
public static GetDestinationResponse newInstance()
throws IOException {
return StateStoreSerializer
.newRecord(GetDestinationResponse.class);
}
public static GetDestinationResponse newInstance(
Collection<String> nsIds) throws IOException {
GetDestinationResponse request = newInstance();
request.setDestinations(nsIds);
return request;
}
@Public
@Unstable
public abstract Collection<String> getDestinations();
@Public
@Unstable
public void setDestination(String nsId) {
setDestinations(Collections.singletonList(nsId));
}
@Public
@Unstable
public abstract void setDestinations(Collection<String> nsIds);
}
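A request/response sketch, not part of the patch, for the new getDestination call; mountTableManager is again an assumed MountTableManager handle, since MountTableStoreImpl itself throws UnsupportedOperationException (see above) and the real resolution is done by the Router RPC server:

  // Sketch: resolve which subcluster(s) would hold a given path.
  GetDestinationRequest request =
      GetDestinationRequest.newInstance("/mount/dir/file");
  GetDestinationResponse response = mountTableManager.getDestination(request);
  Collection<String> nameservices = response.getDestinations();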


@ -0,0 +1,34 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol;
import java.io.IOException;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
/**
* API request for refreshing mount table cached entries from state store.
*/
public abstract class RefreshMountTableEntriesRequest {
public static RefreshMountTableEntriesRequest newInstance()
throws IOException {
return StateStoreSerializer
.newRecord(RefreshMountTableEntriesRequest.class);
}
}


@ -0,0 +1,44 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol;
import java.io.IOException;
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
/**
* API response for refreshing mount table entries cache from state store.
*/
public abstract class RefreshMountTableEntriesResponse {
public static RefreshMountTableEntriesResponse newInstance()
throws IOException {
return StateStoreSerializer
.newRecord(RefreshMountTableEntriesResponse.class);
}
@Public
@Unstable
public abstract boolean getResult();
@Public
@Unstable
public abstract void setResult(boolean result);
}


@ -0,0 +1,73 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
import java.io.IOException;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProtoOrBuilder;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto.Builder;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
import com.google.protobuf.Message;
/**
* Protobuf implementation of the state store API object
* GetDestinationRequest.
*/
public class GetDestinationRequestPBImpl extends GetDestinationRequest
implements PBRecord {
private FederationProtocolPBTranslator<GetDestinationRequestProto,
Builder, GetDestinationRequestProtoOrBuilder> translator =
new FederationProtocolPBTranslator<>(
GetDestinationRequestProto.class);
public GetDestinationRequestPBImpl() {
}
public GetDestinationRequestPBImpl(GetDestinationRequestProto proto) {
this.translator.setProto(proto);
}
@Override
public GetDestinationRequestProto getProto() {
return this.translator.build();
}
@Override
public void setProto(Message proto) {
this.translator.setProto(proto);
}
@Override
public void readInstance(String base64String) throws IOException {
this.translator.readInstance(base64String);
}
@Override
public String getSrcPath() {
return this.translator.getProtoOrBuilder().getSrcPath();
}
@Override
public void setSrcPath(String path) {
this.translator.getBuilder().setSrcPath(path);
}
}


@ -0,0 +1,83 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto.Builder;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProtoOrBuilder;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
import com.google.protobuf.Message;
/**
* Protobuf implementation of the state store API object
* GetDestinationResponse.
*/
public class GetDestinationResponsePBImpl
extends GetDestinationResponse implements PBRecord {
private FederationProtocolPBTranslator<GetDestinationResponseProto,
Builder, GetDestinationResponseProtoOrBuilder> translator =
new FederationProtocolPBTranslator<>(
GetDestinationResponseProto.class);
public GetDestinationResponsePBImpl() {
}
public GetDestinationResponsePBImpl(
GetDestinationResponseProto proto) {
this.translator.setProto(proto);
}
@Override
public GetDestinationResponseProto getProto() {
// If the builder is null, build() returns null; call getBuilder() first to
// instantiate the builder.
this.translator.getBuilder();
return this.translator.build();
}
@Override
public void setProto(Message proto) {
this.translator.setProto(proto);
}
@Override
public void readInstance(String base64String) throws IOException {
this.translator.readInstance(base64String);
}
@Override
public Collection<String> getDestinations() {
return new ArrayList<>(
this.translator.getProtoOrBuilder().getDestinationsList());
}
@Override
public void setDestinations(Collection<String> nsIds) {
this.translator.getBuilder().clearDestinations();
for (String nsId : nsIds) {
this.translator.getBuilder().addDestinations(nsId);
}
}
}


@ -0,0 +1,67 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
import java.io.IOException;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto.Builder;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProtoOrBuilder;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
import com.google.protobuf.Message;
/**
* Protobuf implementation of the state store API object
* RefreshMountTableEntriesRequest.
*/
public class RefreshMountTableEntriesRequestPBImpl
extends RefreshMountTableEntriesRequest implements PBRecord {
private FederationProtocolPBTranslator<RefreshMountTableEntriesRequestProto,
Builder, RefreshMountTableEntriesRequestProtoOrBuilder> translator =
new FederationProtocolPBTranslator<>(
RefreshMountTableEntriesRequestProto.class);
public RefreshMountTableEntriesRequestPBImpl() {
}
public RefreshMountTableEntriesRequestPBImpl(
RefreshMountTableEntriesRequestProto proto) {
this.translator.setProto(proto);
}
@Override
public RefreshMountTableEntriesRequestProto getProto() {
// If the builder is null, build() returns null; call getBuilder() first to
// instantiate the builder.
this.translator.getBuilder();
return this.translator.build();
}
@Override
public void setProto(Message proto) {
this.translator.setProto(proto);
}
@Override
public void readInstance(String base64String) throws IOException {
this.translator.readInstance(base64String);
}
}


@ -0,0 +1,74 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
import java.io.IOException;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto.Builder;
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProtoOrBuilder;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
import com.google.protobuf.Message;
/**
* Protobuf implementation of the state store API object
* RefreshMountTableEntriesResponse.
*/
public class RefreshMountTableEntriesResponsePBImpl
extends RefreshMountTableEntriesResponse implements PBRecord {
private FederationProtocolPBTranslator<RefreshMountTableEntriesResponseProto,
Builder, RefreshMountTableEntriesResponseProtoOrBuilder> translator =
new FederationProtocolPBTranslator<>(
RefreshMountTableEntriesResponseProto.class);
public RefreshMountTableEntriesResponsePBImpl() {
}
public RefreshMountTableEntriesResponsePBImpl(
RefreshMountTableEntriesResponseProto proto) {
this.translator.setProto(proto);
}
@Override
public RefreshMountTableEntriesResponseProto getProto() {
return this.translator.build();
}
@Override
public void setProto(Message proto) {
this.translator.setProto(proto);
}
@Override
public void readInstance(String base64String) throws IOException {
this.translator.readInstance(base64String);
}
@Override
public boolean getResult() {
return this.translator.getProtoOrBuilder().getResult();
}
@Override
public void setResult(boolean result) {
this.translator.getBuilder().setResult(result);
}
}


@ -81,6 +81,10 @@ public abstract class MembershipStats extends BaseRecord {
public abstract int getNumOfDeadDatanodes();
public abstract void setNumOfStaleDatanodes(int nodes);
public abstract int getNumOfStaleDatanodes();
public abstract void setNumOfDecommissioningDatanodes(int nodes);
public abstract int getNumOfDecommissioningDatanodes();
@ -93,6 +97,18 @@ public abstract class MembershipStats extends BaseRecord {
public abstract int getNumOfDecomDeadDatanodes();
public abstract void setNumOfInMaintenanceLiveDataNodes(int nodes);
public abstract int getNumOfInMaintenanceLiveDataNodes();
public abstract void setNumOfInMaintenanceDeadDataNodes(int nodes);
public abstract int getNumOfInMaintenanceDeadDataNodes();
public abstract void setNumOfEnteringMaintenanceDataNodes(int nodes);
public abstract int getNumOfEnteringMaintenanceDataNodes();
@Override
public SortedMap<String, String> getPrimaryKeys() {
// This record is not stored directly, no key needed


@ -26,6 +26,7 @@ import java.util.Map.Entry;
import java.util.SortedMap;
import java.util.TreeMap;
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
@ -59,6 +60,10 @@ public abstract class MountTable extends BaseRecord {
"Invalid entry, invalid destination path ";
public static final String ERROR_MSG_ALL_DEST_MUST_START_WITH_BACK_SLASH =
"Invalid entry, all destination must start with / ";
private static final String ERROR_MSG_FAULT_TOLERANT_MULTI_DEST =
"Invalid entry, fault tolerance requires multiple destinations ";
private static final String ERROR_MSG_FAULT_TOLERANT_ALL =
"Invalid entry, fault tolerance only supported for ALL order ";
/** Comparator for paths which considers the /. */
public static final Comparator<String> PATH_COMPARATOR =
@ -228,6 +233,20 @@ public abstract class MountTable extends BaseRecord {
*/
public abstract void setDestOrder(DestinationOrder order);
/**
* Check if the mount point supports a failed destination.
*
* @return If it supports failures.
*/
public abstract boolean isFaultTolerant();
/**
* Set if the mount point supports failed destinations.
*
* @param faultTolerant If it supports failures.
*/
public abstract void setFaultTolerant(boolean faultTolerant);
/**
* Get owner name of this mount table entry.
*
@ -321,11 +340,14 @@ public abstract class MountTable extends BaseRecord {
List<RemoteLocation> destinations = this.getDestinations();
sb.append(destinations);
if (destinations != null && destinations.size() > 1) {
sb.append("[" + this.getDestOrder() + "]");
sb.append("[").append(this.getDestOrder()).append("]");
}
if (this.isReadOnly()) {
sb.append("[RO]");
}
if (this.isFaultTolerant()) {
sb.append("[FT]");
}
if (this.getOwnerName() != null) {
sb.append("[owner:").append(this.getOwnerName()).append("]");
@ -383,6 +405,16 @@ public abstract class MountTable extends BaseRecord {
ERROR_MSG_ALL_DEST_MUST_START_WITH_BACK_SLASH + this);
}
}
if (isFaultTolerant()) {
if (getDestinations().size() < 2) {
throw new IllegalArgumentException(
ERROR_MSG_FAULT_TOLERANT_MULTI_DEST + this);
}
if (!isAll()) {
throw new IllegalArgumentException(
ERROR_MSG_FAULT_TOLERANT_ALL + this);
}
}
}
@Override
@ -397,6 +429,7 @@ public abstract class MountTable extends BaseRecord {
.append(this.getDestinations())
.append(this.isReadOnly())
.append(this.getDestOrder())
.append(this.isFaultTolerant())
.toHashCode();
}
@ -404,16 +437,13 @@ public abstract class MountTable extends BaseRecord {
public boolean equals(Object obj) {
if (obj instanceof MountTable) {
MountTable other = (MountTable)obj;
if (!this.getSourcePath().equals(other.getSourcePath())) {
return false;
} else if (!this.getDestinations().equals(other.getDestinations())) {
return false;
} else if (this.isReadOnly() != other.isReadOnly()) {
return false;
} else if (!this.getDestOrder().equals(other.getDestOrder())) {
return false;
}
return true;
return new EqualsBuilder()
.append(this.getSourcePath(), other.getSourcePath())
.append(this.getDestinations(), other.getDestinations())
.append(this.isReadOnly(), other.isReadOnly())
.append(this.getDestOrder(), other.getDestOrder())
.append(this.isFaultTolerant(), other.isFaultTolerant())
.isEquals();
}
return false;
}
@ -424,9 +454,7 @@ public abstract class MountTable extends BaseRecord {
*/
public boolean isAll() {
DestinationOrder order = getDestOrder();
return order == DestinationOrder.HASH_ALL ||
order == DestinationOrder.RANDOM ||
order == DestinationOrder.SPACE;
return DestinationOrder.FOLDER_ALL.contains(order);
}
/**

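A construction sketch, not part of the patch, of a mount entry that satisfies the new fault-tolerance checks in validate(), namely more than one destination and an "ALL"-type destination order:

  // Sketch: a fault-tolerant HASH_ALL mount point with two destinations.
  Map<String, String> destMap = new LinkedHashMap<>();
  destMap.put("ns0", "/data");
  destMap.put("ns1", "/data");
  MountTable entry = MountTable.newInstance("/data", destMap);
  entry.setDestOrder(DestinationOrder.HASH_ALL);
  entry.setFaultTolerant(true);
  entry.validate();  // throws IllegalArgumentException if the rules are violated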

@ -88,6 +88,10 @@ public abstract class RouterState extends BaseRecord {
public abstract long getDateStarted();
public abstract void setAdminAddress(String adminAddress);
public abstract String getAdminAddress();
/**
* Get the identifier for the Router. It uses the address.
*

View File

@ -168,6 +168,16 @@ public class MembershipStatsPBImpl extends MembershipStats
return this.translator.getProtoOrBuilder().getNumOfDeadDatanodes();
}
@Override
public void setNumOfStaleDatanodes(int nodes) {
this.translator.getBuilder().setNumOfStaleDatanodes(nodes);
}
@Override
public int getNumOfStaleDatanodes() {
return this.translator.getProtoOrBuilder().getNumOfStaleDatanodes();
}
@Override
public void setNumOfDecommissioningDatanodes(int nodes) {
this.translator.getBuilder().setNumOfDecommissioningDatanodes(nodes);
@ -198,4 +208,37 @@ public class MembershipStatsPBImpl extends MembershipStats
public int getNumOfDecomDeadDatanodes() {
return this.translator.getProtoOrBuilder().getNumOfDecomDeadDatanodes();
}
@Override
public void setNumOfInMaintenanceLiveDataNodes(int nodes) {
this.translator.getBuilder().setNumOfInMaintenanceLiveDataNodes(nodes);
}
@Override
public int getNumOfInMaintenanceLiveDataNodes() {
return this.translator.getProtoOrBuilder()
.getNumOfInMaintenanceLiveDataNodes();
}
@Override
public void setNumOfInMaintenanceDeadDataNodes(int nodes) {
this.translator.getBuilder().setNumOfInMaintenanceDeadDataNodes(nodes);
}
@Override
public int getNumOfInMaintenanceDeadDataNodes() {
return this.translator.getProtoOrBuilder()
.getNumOfInMaintenanceDeadDataNodes();
}
@Override
public void setNumOfEnteringMaintenanceDataNodes(int nodes) {
this.translator.getBuilder().setNumOfEnteringMaintenanceDataNodes(nodes);
}
@Override
public int getNumOfEnteringMaintenanceDataNodes() {
return this.translator.getProtoOrBuilder()
.getNumOfEnteringMaintenanceDataNodes();
}
}


@ -195,6 +195,20 @@ public class MountTablePBImpl extends MountTable implements PBRecord {
}
}
@Override
public boolean isFaultTolerant() {
MountTableRecordProtoOrBuilder proto = this.translator.getProtoOrBuilder();
if (!proto.hasFaultTolerant()) {
return false;
}
return proto.getFaultTolerant();
}
@Override
public void setFaultTolerant(boolean faultTolerant) {
this.translator.getBuilder().setFaultTolerant(faultTolerant);
}
@Override
public String getOwnerName() {
MountTableRecordProtoOrBuilder proto = this.translator.getProtoOrBuilder();


@ -199,4 +199,14 @@ public class RouterStatePBImpl extends RouterState implements PBRecord {
public long getDateCreated() {
return this.translator.getProtoOrBuilder().getDateCreated();
}
@Override
public void setAdminAddress(String adminAddress) {
this.translator.getBuilder().setAdminAddress(adminAddress);
}
@Override
public String getAdminAddress() {
return this.translator.getProtoOrBuilder().getAdminAddress();
}
}


@ -19,15 +19,21 @@ package org.apache.hadoop.hdfs.tools.federation;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.regex.Pattern;
import org.apache.hadoop.classification.InterfaceAudience.Private;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
@ -48,20 +54,29 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RefreshResponse;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolClientSideTranslatorPB;
import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
@ -78,6 +93,9 @@ public class RouterAdmin extends Configured implements Tool {
private RouterClient client;
/** Pre-compiled regular expressions to detect duplicated slashes. */
private static final Pattern SLASHES = Pattern.compile("/+");
public static void main(String[] argv) throws Exception {
Configuration conf = new HdfsConfiguration();
RouterAdmin admin = new RouterAdmin(conf);
@ -106,10 +124,12 @@ public class RouterAdmin extends Configured implements Tool {
private String getUsage(String cmd) {
if (cmd == null) {
String[] commands =
{"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
"-safemode", "-nameservice", "-getDisabledNameservices"};
{"-add", "-update", "-rm", "-ls", "-getDestination",
"-setQuota", "-clrQuota",
"-safemode", "-nameservice", "-getDisabledNameservices",
"-refresh"};
StringBuilder usage = new StringBuilder();
usage.append("Usage: hdfs routeradmin :\n");
usage.append("Usage: hdfs dfsrouteradmin :\n");
for (int i = 0; i < commands.length; i++) {
usage.append(getUsage(commands[i]));
if (i + 1 < commands.length) {
@ -120,17 +140,21 @@ public class RouterAdmin extends Configured implements Tool {
}
if (cmd.equals("-add")) {
return "\t[-add <source> <nameservice1, nameservice2, ...> <destination> "
+ "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
+ "[-readonly] [-faulttolerant] "
+ "[-order HASH|LOCAL|RANDOM|HASH_ALL|SPACE] "
+ "-owner <owner> -group <group> -mode <mode>]";
} else if (cmd.equals("-update")) {
return "\t[-update <source> <nameservice1, nameservice2, ...> "
+ "<destination> "
+ "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
return "\t[-update <source>"
+ " [<nameservice1, nameservice2, ...> <destination>] "
+ "[-readonly true|false] [-faulttolerant true|false] "
+ "[-order HASH|LOCAL|RANDOM|HASH_ALL|SPACE] "
+ "-owner <owner> -group <group> -mode <mode>]";
} else if (cmd.equals("-rm")) {
return "\t[-rm <source>]";
} else if (cmd.equals("-ls")) {
return "\t[-ls <path>]";
} else if (cmd.equals("-getDestination")) {
return "\t[-getDestination <path>]";
} else if (cmd.equals("-setQuota")) {
return "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
+ "<quota in bytes or quota size string>]";
@ -142,6 +166,10 @@ public class RouterAdmin extends Configured implements Tool {
return "\t[-nameservice enable | disable <nameservice>]";
} else if (cmd.equals("-getDisabledNameservices")) {
return "\t[-getDisabledNameservices]";
} else if (cmd.equals("-refresh")) {
return "\t[-refresh]";
} else if (cmd.equals("-refreshRouterArgs")) {
return "\t[-refreshRouterArgs <host:ipc_port> <key> [arg1..argn]]";
}
return getUsage(null);
}
@ -151,20 +179,15 @@ public class RouterAdmin extends Configured implements Tool {
* @param arg List of command line parameters.
*/
private void validateMax(String[] arg) {
if (arg[0].equals("-rm")) {
if (arg[0].equals("-ls")) {
if (arg.length > 2) {
throw new IllegalArgumentException(
"Too many arguments, Max=1 argument allowed");
}
} else if (arg[0].equals("-ls")) {
} else if (arg[0].equals("-getDestination")) {
if (arg.length > 2) {
throw new IllegalArgumentException(
"Too many arguments, Max=1 argument allowed");
}
} else if (arg[0].equals("-clrQuota")) {
if (arg.length > 2) {
throw new IllegalArgumentException(
"Too many arguments, Max=1 argument allowed");
"Too many arguments, Max=1 argument allowed only");
}
} else if (arg[0].equals("-safemode")) {
if (arg.length > 2) {
@ -183,6 +206,53 @@ public class RouterAdmin extends Configured implements Tool {
}
}
/**
* Validates the minimum number of arguments for a command.
* @param argv List of command line parameters.
* @return true if the number of arguments is valid for the command, else false.
*/
private boolean validateMin(String[] argv) {
String cmd = argv[0];
if ("-add".equals(cmd)) {
if (argv.length < 4) {
return false;
}
} else if ("-update".equals(cmd)) {
if (argv.length < 4) {
return false;
}
} else if ("-rm".equals(cmd)) {
if (argv.length < 2) {
return false;
}
} else if ("-getDestination".equals(cmd)) {
if (argv.length < 2) {
return false;
}
} else if ("-setQuota".equals(cmd)) {
if (argv.length < 4) {
return false;
}
} else if ("-clrQuota".equals(cmd)) {
if (argv.length < 2) {
return false;
}
} else if ("-safemode".equals(cmd)) {
if (argv.length < 2) {
return false;
}
} else if ("-nameservice".equals(cmd)) {
if (argv.length < 3) {
return false;
}
} else if ("-refreshRouterArgs".equals(cmd)) {
if (argv.length < 2) {
return false;
}
}
return true;
}
@Override
public int run(String[] argv) throws Exception {
if (argv.length < 1) {
@ -196,53 +266,15 @@ public class RouterAdmin extends Configured implements Tool {
String cmd = argv[i++];
// Verify that we have enough command line parameters
if ("-add".equals(cmd)) {
if (argv.length < 4) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-update".equals(cmd)) {
if (argv.length < 4) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-rm".equals(cmd)) {
if (argv.length < 2) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-setQuota".equals(cmd)) {
if (argv.length < 4) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-clrQuota".equals(cmd)) {
if (argv.length < 2) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-safemode".equals(cmd)) {
if (argv.length < 2) {
System.err.println("Not enough parameters specified for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
} else if ("-nameservice".equals(cmd)) {
if (argv.length < 3) {
System.err.println("Not enough parameters specificed for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
if (!validateMin(argv)) {
System.err.println("Not enough parameters specificed for cmd " + cmd);
printUsage(cmd);
return exitCode;
}
String address = null;
// Initialize RouterClient
try {
String address = getConf().getTrimmed(
address = getConf().getTrimmed(
RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
@ -269,12 +301,23 @@ public class RouterAdmin extends Configured implements Tool {
} else if ("-update".equals(cmd)) {
if (updateMount(argv, i)) {
System.out.println("Successfully updated mount point " + argv[i]);
System.out.println(
"WARN: Changing order/destinations may lead to inconsistencies");
} else {
exitCode = -1;
}
} else if ("-rm".equals(cmd)) {
if (removeMount(argv[i])) {
System.out.println("Successfully removed mount point " + argv[i]);
while (i < argv.length) {
try {
if (removeMount(argv[i])) {
System.out.println("Successfully removed mount point " + argv[i]);
}
} catch (IOException e) {
exitCode = -1;
System.err
.println(cmd.substring(1) + ": " + e.getLocalizedMessage());
}
i++;
}
} else if ("-ls".equals(cmd)) {
if (argv.length > 1) {
@ -282,15 +325,20 @@ public class RouterAdmin extends Configured implements Tool {
} else {
listMounts("/");
}
} else if ("-getDestination".equals(cmd)) {
getDestination(argv[i]);
} else if ("-setQuota".equals(cmd)) {
if (setQuota(argv, i)) {
System.out.println(
"Successfully set quota for mount point " + argv[i]);
}
} else if ("-clrQuota".equals(cmd)) {
if (clrQuota(argv[i])) {
System.out.println(
"Successfully clear quota for mount point " + argv[i]);
while (i < argv.length) {
  if (clrQuota(argv[i])) {
    System.out.println(
        "Successfully cleared quota for mount point " + argv[i]);
  }
  i++;
}
} else if ("-safemode".equals(cmd)) {
manageSafeMode(argv[i]);
@ -300,6 +348,10 @@ public class RouterAdmin extends Configured implements Tool {
manageNameservice(subcmd, nsId);
} else if ("-getDisabledNameservices".equals(cmd)) {
getDisabledNameservices();
} else if ("-refresh".equals(cmd)) {
refresh(address);
} else if ("-refreshRouterArgs".equals(cmd)) {
exitCode = genericRefresh(argv, i);
} else {
throw new IllegalArgumentException("Unknown Command: " + cmd);
}
@ -323,6 +375,10 @@ public class RouterAdmin extends Configured implements Tool {
e.printStackTrace();
debugException = ex;
}
} catch (IOException ioe) {
exitCode = -1;
System.err.println(cmd.substring(1) + ": " + ioe.getLocalizedMessage());
printUsage(cmd);
} catch (Exception e) {
exitCode = -1;
debugException = e;
@ -335,6 +391,27 @@ public class RouterAdmin extends Configured implements Tool {
return exitCode;
}
private void refresh(String address) throws IOException {
if (refreshRouterCache()) {
System.out.println(
"Successfully updated mount table cache on router " + address);
}
}
/**
* Refresh the mount table cache on the connected Router.
*
* @return True if the cache was refreshed successfully.
* @throws IOException If the refresh RPC to the Router fails.
*/
private boolean refreshRouterCache() throws IOException {
RefreshMountTableEntriesResponse response =
client.getMountTableManager().refreshMountTableEntries(
RefreshMountTableEntriesRequest.newInstance());
return response.getResult();
}
/**
* Add a mount table entry or update if it exists.
*
@ -351,6 +428,7 @@ public class RouterAdmin extends Configured implements Tool {
// Optional parameters
boolean readOnly = false;
boolean faultTolerant = false;
String owner = null;
String group = null;
FsPermission mode = null;
@ -358,6 +436,8 @@ public class RouterAdmin extends Configured implements Tool {
while (i < parameters.length) {
if (parameters[i].equals("-readonly")) {
readOnly = true;
} else if (parameters[i].equals("-faulttolerant")) {
faultTolerant = true;
} else if (parameters[i].equals("-order")) {
i++;
try {
@ -383,7 +463,7 @@ public class RouterAdmin extends Configured implements Tool {
i++;
}
return addMount(mount, nss, dest, readOnly, order,
return addMount(mount, nss, dest, readOnly, faultTolerant, order,
new ACLEntity(owner, group, mode));
}
@ -400,22 +480,13 @@ public class RouterAdmin extends Configured implements Tool {
* @throws IOException Error adding the mount point.
*/
public boolean addMount(String mount, String[] nss, String dest,
boolean readonly, DestinationOrder order, ACLEntity aclInfo)
boolean readonly, boolean faultTolerant, DestinationOrder order,
ACLEntity aclInfo)
throws IOException {
mount = normalizeFileSystemPath(mount);
// Get the existing entry
MountTableManager mountTable = client.getMountTableManager();
GetMountTableEntriesRequest getRequest =
GetMountTableEntriesRequest.newInstance(mount);
GetMountTableEntriesResponse getResponse =
mountTable.getMountTableEntries(getRequest);
List<MountTable> results = getResponse.getEntries();
MountTable existingEntry = null;
for (MountTable result : results) {
if (mount.equals(result.getSourcePath())) {
existingEntry = result;
}
}
MountTable existingEntry = getMountEntry(mount, mountTable);
if (existingEntry == null) {
// Create and add the entry if it doesn't exist
@ -427,6 +498,9 @@ public class RouterAdmin extends Configured implements Tool {
if (readonly) {
newEntry.setReadOnly(true);
}
if (faultTolerant) {
newEntry.setFaultTolerant(true);
}
if (order != null) {
newEntry.setDestOrder(order);
}
@ -444,6 +518,8 @@ public class RouterAdmin extends Configured implements Tool {
newEntry.setMode(aclInfo.getMode());
}
newEntry.validate();
AddMountTableEntryRequest request =
AddMountTableEntryRequest.newInstance(newEntry);
AddMountTableEntryResponse addResponse =
@ -463,6 +539,9 @@ public class RouterAdmin extends Configured implements Tool {
if (readonly) {
existingEntry.setReadOnly(true);
}
if (faultTolerant) {
existingEntry.setFaultTolerant(true);
}
if (order != null) {
existingEntry.setDestOrder(order);
}
@ -480,6 +559,8 @@ public class RouterAdmin extends Configured implements Tool {
existingEntry.setMode(aclInfo.getMode());
}
existingEntry.validate();
UpdateMountTableEntryRequest updateRequest =
UpdateMountTableEntryRequest.newInstance(existingEntry);
UpdateMountTableEntryResponse updateResponse =
@ -501,95 +582,81 @@ public class RouterAdmin extends Configured implements Tool {
* @throws IOException If there is an error.
*/
public boolean updateMount(String[] parameters, int i) throws IOException {
// Mandatory parameters
String mount = parameters[i++];
String[] nss = parameters[i++].split(",");
String dest = parameters[i++];
// Optional parameters
boolean readOnly = false;
String owner = null;
String group = null;
FsPermission mode = null;
DestinationOrder order = null;
while (i < parameters.length) {
if (parameters[i].equals("-readonly")) {
readOnly = true;
} else if (parameters[i].equals("-order")) {
i++;
try {
order = DestinationOrder.valueOf(parameters[i]);
} catch(Exception e) {
System.err.println("Cannot parse order: " + parameters[i]);
}
} else if (parameters[i].equals("-owner")) {
i++;
owner = parameters[i];
} else if (parameters[i].equals("-group")) {
i++;
group = parameters[i];
} else if (parameters[i].equals("-mode")) {
i++;
short modeValue = Short.parseShort(parameters[i], 8);
mode = new FsPermission(modeValue);
} else {
printUsage("-update");
return false;
}
i++;
}
return updateMount(mount, nss, dest, readOnly, order,
new ACLEntity(owner, group, mode));
}
/**
* Update a mount table entry.
*
* @param mount Mount point.
* @param nss Nameservices where this is mounted to.
* @param dest Destination path.
* @param readonly If the mount point is read only.
* @param order Order of the destination locations.
* @param aclInfo the ACL info for mount point.
* @return If the mount point was updated.
* @throws IOException Error updating the mount point.
*/
public boolean updateMount(String mount, String[] nss, String dest,
boolean readonly, DestinationOrder order, ACLEntity aclInfo)
throws IOException {
mount = normalizeFileSystemPath(mount);
MountTableManager mountTable = client.getMountTableManager();
// Create a new entry
Map<String, String> destMap = new LinkedHashMap<>();
for (String ns : nss) {
destMap.put(ns, dest);
MountTable existingEntry = getMountEntry(mount, mountTable);
if (existingEntry == null) {
throw new IOException(mount + " doesn't exist.");
}
MountTable newEntry = MountTable.newInstance(mount, destMap);
// Check if the destination needs to be updated.
newEntry.setReadOnly(readonly);
if (order != null) {
newEntry.setDestOrder(order);
if (!parameters[i].startsWith("-")) {
String[] nss = parameters[i++].split(",");
String dest = parameters[i++];
Map<String, String> destMap = new LinkedHashMap<>();
for (String ns : nss) {
destMap.put(ns, dest);
}
final List<RemoteLocation> locations = new LinkedList<>();
for (Entry<String, String> entry : destMap.entrySet()) {
String nsId = entry.getKey();
String path = normalizeFileSystemPath(entry.getValue());
RemoteLocation location = new RemoteLocation(nsId, path, mount);
locations.add(location);
}
existingEntry.setDestinations(locations);
}
// Update ACL info of mount table entry
if (aclInfo.getOwner() != null) {
newEntry.setOwnerName(aclInfo.getOwner());
try {
while (i < parameters.length) {
switch (parameters[i]) {
case "-readonly":
i++;
existingEntry.setReadOnly(getBooleanValue(parameters[i]));
break;
case "-faulttolerant":
i++;
existingEntry.setFaultTolerant(getBooleanValue(parameters[i]));
break;
case "-order":
i++;
try {
existingEntry.setDestOrder(DestinationOrder.valueOf(parameters[i]));
break;
} catch (Exception e) {
throw new Exception("Cannot parse order: " + parameters[i]);
}
case "-owner":
i++;
existingEntry.setOwnerName(parameters[i]);
break;
case "-group":
i++;
existingEntry.setGroupName(parameters[i]);
break;
case "-mode":
i++;
short modeValue = Short.parseShort(parameters[i], 8);
existingEntry.setMode(new FsPermission(modeValue));
break;
default:
printUsage("-update");
return false;
}
i++;
}
} catch (IllegalArgumentException iae) {
throw iae;
} catch (Exception e) {
String msg = "Unable to parse arguments: " + e.getMessage();
if (e instanceof ArrayIndexOutOfBoundsException) {
msg = "Unable to parse arguments: no value provided for "
+ parameters[i - 1];
}
throw new IOException(msg);
}
if (aclInfo.getGroup() != null) {
newEntry.setGroupName(aclInfo.getGroup());
}
if (aclInfo.getMode() != null) {
newEntry.setMode(aclInfo.getMode());
}
UpdateMountTableEntryRequest updateRequest =
UpdateMountTableEntryRequest.newInstance(newEntry);
UpdateMountTableEntryRequest.newInstance(existingEntry);
UpdateMountTableEntryResponse updateResponse =
mountTable.updateMountTableEntry(updateRequest);
boolean updated = updateResponse.getStatus();
@ -599,6 +666,45 @@ public class RouterAdmin extends Configured implements Tool {
return updated;
}
/**
* Parse a string into a boolean.
* @param value the string to be parsed.
* @return the parsed boolean value.
* @throws Exception if a value other than true or false is provided.
*/
private boolean getBooleanValue(String value) throws Exception {
if (value.equalsIgnoreCase("true")) {
return true;
} else if (value.equalsIgnoreCase("false")) {
return false;
}
throw new IllegalArgumentException("Invalid argument: " + value
+ ". Please specify either true or false.");
}
/**
* Gets the mount table entry.
* @param mount name of the mount entry.
* @param mountTable the mount table.
* @return corresponding mount entry.
* @throws IOException in case of failure to retrieve mount entry.
*/
private MountTable getMountEntry(String mount, MountTableManager mountTable)
throws IOException {
GetMountTableEntriesRequest getRequest =
GetMountTableEntriesRequest.newInstance(mount);
GetMountTableEntriesResponse getResponse =
mountTable.getMountTableEntries(getRequest);
List<MountTable> results = getResponse.getEntries();
MountTable existingEntry = null;
for (MountTable result : results) {
if (mount.equals(result.getSourcePath())) {
existingEntry = result;
}
}
return existingEntry;
}
/**
* Remove mount point.
*
@ -661,6 +767,16 @@ public class RouterAdmin extends Configured implements Tool {
}
}
private void getDestination(String path) throws IOException {
path = normalizeFileSystemPath(path);
MountTableManager mountTable = client.getMountTableManager();
GetDestinationRequest request =
GetDestinationRequest.newInstance(path);
GetDestinationResponse response = mountTable.getDestination(request);
System.out.println("Destination: " +
StringUtils.join(",", response.getDestinations()));
}
/**
* Set quota for a mount table entry.
*
@ -892,15 +1008,73 @@ public class RouterAdmin extends Configured implements Tool {
}
}
public int genericRefresh(String[] argv, int i) throws IOException {
String hostport = argv[i++];
String identifier = argv[i++];
String[] args = Arrays.copyOfRange(argv, i, argv.length);
// Get the current configuration
Configuration conf = getConf();
// for security authorization
// server principal for this call
// should be NN's one.
conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
// Create the client
Class<?> xface = GenericRefreshProtocolPB.class;
InetSocketAddress address = NetUtils.createSocketAddr(hostport);
UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
RPC.setProtocolEngine(conf, xface, ProtobufRpcEngine.class);
GenericRefreshProtocolPB proxy = (GenericRefreshProtocolPB)RPC.getProxy(
xface, RPC.getProtocolVersion(xface), address, ugi, conf,
NetUtils.getDefaultSocketFactory(conf), 0);
Collection<RefreshResponse> responses = null;
try (GenericRefreshProtocolClientSideTranslatorPB xlator =
new GenericRefreshProtocolClientSideTranslatorPB(proxy)) {
// Refresh
responses = xlator.refresh(identifier, args);
int returnCode = 0;
// Print refresh responses
System.out.println("Refresh Responses:\n");
for (RefreshResponse response : responses) {
System.out.println(response.toString());
if (returnCode == 0 && response.getReturnCode() != 0) {
// This is the first non-zero return code, so we should return this
returnCode = response.getReturnCode();
} else if (returnCode != 0 && response.getReturnCode() != 0) {
// We now have multiple non-zero return codes,
// so we merge them into -1
returnCode = -1;
}
}
return returnCode;
} finally {
if (responses == null) {
System.out.println("Failed to get response.\n");
return -1;
}
}
}
/**
* Normalize a path for that filesystem.
*
* @param path Path to normalize.
* @param str Path to normalize. The path doesn't have scheme or authority.
* @return Normalized path.
*/
private static String normalizeFileSystemPath(final String path) {
Path normalizedPath = new Path(path);
return normalizedPath.toString();
public static String normalizeFileSystemPath(final String str) {
String path = SLASHES.matcher(str).replaceAll("/");
if (path.length() > 1 && path.endsWith("/")) {
path = path.substring(0, path.length()-1);
}
return path;
}
/**

View File

@ -45,6 +45,10 @@ message NamenodeMembershipStatsRecordProto {
optional uint32 numOfDecommissioningDatanodes = 22;
optional uint32 numOfDecomActiveDatanodes = 23;
optional uint32 numOfDecomDeadDatanodes = 24;
optional uint32 numOfStaleDatanodes = 25;
optional uint32 numOfInMaintenanceLiveDataNodes = 26;
optional uint32 numOfInMaintenanceDeadDataNodes = 27;
optional uint32 numOfEnteringMaintenanceDataNodes = 28;
}
message NamenodeMembershipRecordProto {
@ -139,6 +143,8 @@ message MountTableRecordProto {
optional int32 mode = 12;
optional QuotaUsageProto quota = 13;
optional bool faultTolerant = 14 [default = false];
}
message AddMountTableEntryRequestProto {
@ -174,6 +180,14 @@ message GetMountTableEntriesResponseProto {
optional uint64 timestamp = 2;
}
message GetDestinationRequestProto {
optional string srcPath = 1;
}
message GetDestinationResponseProto {
repeated string destinations = 1;
}
/////////////////////////////////////////////////
// Routers
@ -193,6 +207,7 @@ message RouterRecordProto {
optional string version = 6;
optional string compileInfo = 7;
optional uint64 dateStarted = 8;
optional string adminAddress = 9;
}
message GetRouterRegistrationRequestProto {
@ -219,6 +234,13 @@ message RouterHeartbeatResponseProto {
optional bool status = 1;
}
message RefreshMountTableEntriesRequestProto {
}
message RefreshMountTableEntriesResponseProto {
optional bool result = 1;
}
/////////////////////////////////////////////////
// Route State
/////////////////////////////////////////////////

View File

@ -74,4 +74,14 @@ service RouterAdminProtocolService {
* Get the list of disabled name services.
*/
rpc getDisabledNameservices(GetDisabledNameservicesRequestProto) returns (GetDisabledNameservicesResponseProto);
/**
* Refresh mount entries
*/
rpc refreshMountTableEntries(RefreshMountTableEntriesRequestProto) returns(RefreshMountTableEntriesResponseProto);
/**
* Get the destination of a file/directory in the federation.
*/
rpc getDestination(GetDestinationRequestProto) returns (GetDestinationResponseProto);
}

View File

@ -19,8 +19,8 @@
-->
<!-- Do not modify this file directly. Instead, copy entries that you -->
<!-- wish to modify from this file into hdfs-site.xml and change them -->
<!-- there. If hdfs-site.xml does not already exist, create it. -->
<!-- wish to modify from this file into hdfs-rbf-site.xml and change them -->
<!-- there. If hdfs-rbf-site.xml does not already exist, create it. -->
<configuration>
<property>
@ -117,6 +117,14 @@
</description>
</property>
<property>
<name>dfs.federation.router.connection.min-active-ratio</name>
<value>0.5f</value>
<description>
Minimum active ratio of connections from the router to namenodes.
</description>
</property>
<property>
<name>dfs.federation.router.connection.clean.ms</name>
<value>10000</value>
@ -143,6 +151,23 @@
</description>
</property>
<property>
<name>dfs.federation.router.dn-report.time-out</name>
<value>1000</value>
<description>
Time out, in milliseconds, for getDatanodeReport.
</description>
</property>
<property>
<name>dfs.federation.router.dn-report.cache-expire</name>
<value>10s</value>
<description>
Expiration time in seconds for the cached datanode report.
</description>
</property>
<property>
<name>dfs.federation.router.metrics.class</name>
<value>org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor</value>
@ -250,7 +275,8 @@
<name>dfs.federation.router.file.resolver.client.class</name>
<value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
<description>
Class to resolve files to subclusters.
Class to resolve files to subclusters. To enable multiple subclusters for a mount point,
set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.
</description>
</property>
@ -345,6 +371,16 @@
</description>
</property>
<property>
<name>dfs.federation.router.namenode.heartbeat.enable</name>
<value>true</value>
<description>
If true, get namenode heartbeats and send them to the State Store.
If not explicitly specified, it takes the same value as
dfs.federation.router.heartbeat.enable.
</description>
</property>
<property>
<name>dfs.federation.router.store.router.expiration</name>
<value>5m</value>
@ -422,7 +458,9 @@
<name>dfs.federation.router.quota.enable</name>
<value>false</value>
<description>
Set to true to enable quota system in Router.
Set to true to enable the quota system in the Router. When it is enabled,
setting or clearing a sub-cluster's quota directly is not recommended since
the Router Admin server will override the sub-cluster's quota with the global quota.
</description>
</property>
@ -465,4 +503,134 @@
</description>
</property>
</configuration>
<property>
<name>dfs.federation.router.client.allow-partial-listing</name>
<value>true</value>
<description>
Whether the Router can return a partial list of files for a multi-destination
mount point when one of the subclusters is unavailable.
If true, a partial list of files may be returned when a subcluster is down.
If false, the request fails when any destination is unavailable.
</description>
</property>
<property>
<name>dfs.federation.router.client.mount-status.time-out</name>
<value>1s</value>
<description>
Set a timeout for the Router when listing folders containing mount
points. In this process, the Router checks the mount table and then it
checks permissions in the subcluster. After the time out, we return the
default values.
</description>
</property>
<property>
<name>dfs.federation.router.connect.max.retries.on.timeouts</name>
<value>0</value>
<description>
Maximum number of retries for the IPC Client when connecting to the
subclusters. By default, it doesn't let the IPC retry and the Router
handles it.
</description>
</property>
<property>
<name>dfs.federation.router.connect.timeout</name>
<value>2s</value>
<description>
Time out for the IPC client connecting to the subclusters. This should be
short as the Router has knowledge of the state of the Routers.
</description>
</property>
<property>
<name>dfs.federation.router.keytab.file</name>
<value></value>
<description>
The keytab file used by the Router to log in as its
service principal. The principal name is configured with
dfs.federation.router.kerberos.principal.
</description>
</property>
<property>
<name>dfs.federation.router.kerberos.principal</name>
<value></value>
<description>
The Router service principal. This is typically set to
router/_HOST@REALM.TLD. Each Router will substitute _HOST with its
own fully qualified hostname at startup. The _HOST placeholder
allows using the same configuration setting on all Routers
in an HA setup.
</description>
</property>
<property>
<name>dfs.federation.router.kerberos.principal.hostname</name>
<value></value>
<description>
Optional. The hostname for the Router containing this
configuration file. Will be different for each machine.
Defaults to current hostname.
</description>
</property>
<property>
<name>dfs.federation.router.kerberos.internal.spnego.principal</name>
<value>${dfs.web.authentication.kerberos.principal}</value>
<description>
The server principal used by the Router for web UI SPNEGO
authentication when Kerberos security is enabled. This is
typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal
begins with the prefix HTTP/ by convention.
If the value is '*', the web server will attempt to log in with
every principal specified in the keytab file
dfs.web.authentication.kerberos.keytab.
</description>
</property>
<property>
<name>dfs.federation.router.mount-table.cache.update</name>
<value>false</value>
<description>Set to true to enable MountTableRefreshService. This service
updates the mount table cache immediately after adding, modifying or
deleting mount table entries. If this service is not enabled, the
mount table cache is refreshed periodically by
StateStoreCacheUpdateService.
</description>
</property>
<property>
<name>dfs.federation.router.mount-table.cache.update.timeout</name>
<value>1m</value>
<description>This property defines how long to wait for all the
admin servers to finish their mount table cache update. This setting
supports multiple time unit suffixes as described in
dfs.federation.router.safemode.extension.
</description>
</property>
<property>
<name>dfs.federation.router.mount-table.cache.update.client.max.time
</name>
<value>5m</value>
<description>Remote router mount table cache is updated through
RouterClient (RPC call). To improve performance, RouterClient
connections are cached, but they should not be kept in the cache forever.
This property defines the max time a connection can be cached. This
setting supports multiple time unit suffixes as described in
dfs.federation.router.safemode.extension.
</description>
</property>
<property>
<name>dfs.federation.router.secret.manager.class</name>
<value>org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl</value>
<description>
Class that implements the delegation token state store.
The default implementation uses ZooKeeper as the backend to store delegation tokens.
</description>
</property>
</configuration>

View File

@ -75,8 +75,8 @@
<!-- Overview -->
<script type="text/x-dust-template" id="tmpl-federationhealth">
<div class="page-header"><h1>Router {#federation}<small>'{HostAndPort}'</small>{/federation}</h1></div>
{#federation}
<div class="page-header"><h1>Router {#router}<small>'{HostAndPort}'</small>{/router}</h1></div>
{#router}
<table class="table table-bordered table-striped">
<tr><th>Started:</th><td>{RouterStarted}</td></tr>
<tr><th>Version:</th><td>{Version}</td></tr>
@ -85,12 +85,12 @@
<tr><th>Block Pool ID:</th><td>{BlockPoolId}</td></tr>
<tr><th>Status:</th><td>{RouterStatus}</td></tr>
</table>
{/federation}
{/router}
<div class="page-header"><h1>Summary</h1></div>
{#federation}
<p>
Security is {#routerstat}{#SecurityEnabled}on{:else}off{/SecurityEnabled}{/routerstat}.</p>
Security is {#router}{#SecurityEnabled}on{:else}off{/SecurityEnabled}{/router}.</p>
<p>{#router}{#Safemode}{.}{:else}Safemode is off.{/Safemode}{/router}</p>
<p>
@ -177,10 +177,10 @@
</div>
</div>
</td>
<td>{numOfFiles}</td>
<td>{numOfBlocks}</td>
<td>{numOfBlocksMissing}</td>
<td>{numOfBlocksUnderReplicated}</td>
<td>{numOfFiles|fmt_human_number}</td>
<td>{numOfBlocks|fmt_human_number}</td>
<td>{numOfBlocksMissing|fmt_human_number}</td>
<td>{numOfBlocksUnderReplicated|fmt_human_number}</td>
<td>{numOfActiveDatanodes}</td>
<td>{numOfDeadDatanodes}</td>
<td>{numOfDecommissioningDatanodes}</td>
@ -244,10 +244,10 @@
</div>
</div>
</td>
<td>{numOfFiles}</td>
<td>{numOfBlocks}</td>
<td>{numOfBlocksMissing}</td>
<td>{numOfBlocksUnderReplicated}</td>
<td>{numOfFiles|fmt_human_number}</td>
<td>{numOfBlocks|fmt_human_number}</td>
<td>{numOfBlocksMissing|fmt_human_number}</td>
<td>{numOfBlocksUnderReplicated|fmt_human_number}</td>
<td>{numOfActiveDatanodes}</td>
<td>{numOfDeadDatanodes}</td>
<td>{numOfDecommissioningDatanodes}</td>
@ -393,6 +393,7 @@
<th>Target path</th>
<th>Order</th>
<th>Read only</th>
<th>Fault tolerant</th>
<th>Owner</th>
<th>Group</th>
<th>Permission</th>
@ -408,7 +409,8 @@
<td>{nameserviceId}</td>
<td>{path}</td>
<td>{order}</td>
<td class="mount-table-icon mount-table-read-only-{readonly}"/>
<td align="center" class="mount-table-icon mount-table-read-only-{readonly}" title="{status}"/>
<td align="center" class="mount-table-icon mount-table-fault-tolerant-{faulttolerant}" title="{ftStatus}"></td>
<td>{ownerName}</td>
<td>{groupName}</td>
<td>{mode}</td>

View File

@ -34,8 +34,7 @@
function load_overview() {
var BEANS = [
{"name": "federation", "url": "/jmx?qry=Hadoop:service=Router,name=FederationState"},
{"name": "routerstat", "url": "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},
{"name": "router", "url": "/jmx?qrt=Hadoop:service=NameNode,name=NameNodeInfo"},
{"name": "router", "url": "/jmx?qry=Hadoop:service=Router,name=Router"},
{"name": "mem", "url": "/jmx?qry=java.lang:type=Memory"}
];
@ -317,14 +316,27 @@
for (var i = 0, e = mountTable.length; i < e; ++i) {
if (mountTable[i].readonly == true) {
mountTable[i].readonly = "true"
mountTable[i].status = "Read only"
} else {
mountTable[i].readonly = "false"
}
}
}
function augment_fault_tolerant(mountTable) {
for (var i = 0, e = mountTable.length; i < e; ++i) {
if (mountTable[i].faulttolerant == true) {
mountTable[i].faulttolerant = "true"
mountTable[i].ftStatus = "Fault tolerant"
} else {
mountTable[i].faulttolerant = "false"
}
}
}
resource.MountTable = JSON.parse(resource.MountTable)
augment_read_only(resource.MountTable)
augment_fault_tolerant(resource.MountTable)
return resource;
}

View File

@ -132,12 +132,11 @@
}
.mount-table-read-only-true:before {
color: #c7254e;
color: #5fa341;
content: "\e033";
}
.mount-table-read-only-false:before {
.mount-table-fault-tolerant-true:before {
color: #5fa341;
content: "\e013";
content: "\e033";
}

View File

@ -45,6 +45,7 @@ This approach has the same architecture as [YARN federation](../../hadoop-yarn/h
### Example flow
The simplest configuration deploys a Router on each NameNode machine.
The Router monitors the local NameNode and its state and heartbeats to the State Store.
The Router monitors the local NameNode and heartbeats the state to the State Store.
When a regular DFS client contacts any of the Routers to access a file in the federated filesystem, the Router checks the Mount Table in the State Store (i.e., the local cache) to find out which subcluster contains the file.
Then it checks the Membership table in the State Store (i.e., the local cache) for the NameNode responsible for the subcluster.
@ -69,6 +70,9 @@ To make sure that changes have been propagated to all Routers, each Router heart
The communications between the Routers and the State Store are cached (with timed expiration for freshness).
This improves the performance of the system.
#### Router heartbeat
The Router periodically heartbeats its state to the State Store.
#### NameNode heartbeat
For this role, the Router periodically checks the state of a NameNode (usually on the same server) and reports its high availability (HA) state and load/space status to the State Store.
Note that this is an optional role, as a Router can be independent of any subcluster.
@ -143,6 +147,8 @@ For performance reasons, the Router caches the quota usage and updates it period
will be used for quota-verification during each WRITE RPC call invoked in RouterRPCServer. See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html)
for the quota detail.
Note: When global quota is enabled, setting or clearing a sub-cluster's quota directly is not recommended since the Router Admin server will override the sub-cluster's quota with the global quota.
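For example, a mount point's global quota is set and cleared through the Router admin tool rather than on the individual subclusters (the path and values here are illustrative):

[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -setQuota /path -nsQuota 100 -ssQuota 1024
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -clrQuota /path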
### State Store
The (logically centralized, but physically distributed) State Store maintains:
@ -167,7 +173,15 @@ It is similar to the mount table in [ViewFs](../hadoop-hdfs/ViewFs.html) where i
### Security
Secure authentication and authorization are not supported yet, so the Router will not proxy to Hadoop clusters with security enabled.
Router supports security similar to [current security model](../hadoop-common/SecureMode.html) in HDFS. This feature is available for both RPC and Web based calls. It has the capability to proxy to underlying secure HDFS clusters.
Similar to the NameNode, support exists for both Kerberos and token-based authentication for clients connecting to routers. The Router internally relies on the existing security-related configs of `core-site.xml` and `hdfs-site.xml` to support this feature. In addition to that, routers need to be configured with their own keytab and principal.
For token-based authentication, the Router issues delegation tokens to upstream clients without communicating with downstream namenodes. The Router uses its own credentials to securely proxy to the downstream namenode on behalf of the upstream real user. The Router principal has to be configured as a superuser in all secure downstream namenodes. Refer [here](../hadoop-common/Superusers.html) to configure proxy user for namenode. Along with that, the user owning the router daemons should be configured with the same identity as the namenode process itself. Refer [here](../hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User) for details.
The Router relies on a state store to distribute tokens across all routers. Apart from the default implementation provided, users can plug in their own state store implementation for token management. The default implementation relies on ZooKeeper for token management. Since a large router/ZooKeeper cluster could potentially hold millions of tokens, the `jute.maxbuffer` system property that ZooKeeper clients rely on should be appropriately configured in the router daemons.
See the Apache JIRA ticket [HDFS-13532](https://issues.apache.org/jira/browse/HDFS-13532) for more information on this feature.
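As a rough client-side sketch (the Router URI and renewer name below are illustrative, not prescribed by this change), a delegation token can be obtained through the regular FileSystem API pointed at a Router:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class RouterTokenExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative Router RPC endpoint; 8888 is the default Router RPC port.
    FileSystem fs = FileSystem.get(new URI("hdfs://router1.example.com:8888"), conf);
    // The Router issues the token itself, without contacting the downstream NameNodes.
    Token<?> token = fs.getDelegationToken("renewer");
    System.out.println("Got token: " + token);
  }
}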
Deployment
@ -230,6 +244,12 @@ Ls command will show below information for each mount table entry:
Source Destinations Owner Group Mode Quota/Usage
/path ns0->/path root supergroup rwxr-xr-x [NsQuota: 50/0, SsQuota: 100 B/0 B]
The mount table cache is refreshed periodically, but it can also be refreshed manually by executing the refresh command:
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -refresh
The above command will refresh the cache of the connected Router. This command is redundant when the mount table refresh service (`dfs.federation.router.mount-table.cache.update`) is enabled, as the service always keeps the cache updated.
#### Multiple subclusters
A mount point also supports mapping multiple subclusters.
For example, to create a mount point that stores files in subclusters `ns1` and `ns2`.
@ -253,7 +273,19 @@ RANDOM can be used for reading and writing data from/into different subclusters.
The common use for this approach is to have the same data in multiple subclusters and balance the reads across subclusters.
For example, if thousands of containers need to read the same data (e.g., a library), one can use RANDOM to read the data from any of the subclusters.
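For instance, such a replicated read-mostly dataset could be mounted across two subclusters with the RANDOM order (the path is illustrative):

[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -add /library ns1,ns2 /library -order RANDOM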
To determine which subcluster contains a file:
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -getDestination /user/user1/file.txt
Note that consistency of the data across subclusters is not guaranteed by the Router.
By default, if one subcluster is unavailable, writes may fail if they target that subcluster.
To allow writing in another subcluster, one can make the mount point fault tolerant:
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -add /data ns1,ns2 /data -order HASH_ALL -faulttolerant
Note that this can lead to a file being written in multiple subclusters or a folder missing in one.
One needs to be aware of the possibility of these inconsistencies and apply this `faulttolerant` approach only to resilient paths.
An example of this is the `/app-logs` folder, which is mostly written once into a subfolder.
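For an existing entry, the flag can also be toggled with the update command (a sketch based on the usage string added in this change; the path is illustrative):

[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -update /data -faulttolerant true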
### Disabling nameservices
@ -266,6 +298,12 @@ For example, one can disable `ns1`, list it and enable it again:
This is useful when decommissioning subclusters or when one subcluster is misbehaving (e.g., low performance or unavailability).
### Router server generic refresh
To trigger a runtime refresh of the resource specified by \<key\> on \<host:ipc\_port\>. For example, to enable white list checking, we just need to send a refresh command rather than restarting the Router server.
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -refreshRouterArgs <host:ipc_port> <key> [arg1..argn]
Client configuration
--------------------
@ -315,7 +353,7 @@ This federated namespace can also be set as the default one at **core-site.xml**
Router configuration
--------------------
One can add the configurations for Router-based federation to **hdfs-site.xml**.
One can add the configurations for Router-based federation to **hdfs-rbf-site.xml**.
The main options are documented in [hdfs-rbf-default.xml](../hadoop-hdfs-rbf/hdfs-rbf-default.xml).
The configuration values are described in this section.
@ -380,6 +418,9 @@ The connection to the State Store and the internal caching at the Router.
| dfs.federation.router.store.connection.test | 60000 | How often to check for the connection to the State Store in milliseconds. |
| dfs.federation.router.cache.ttl | 60000 | How often to refresh the State Store caches in milliseconds. |
| dfs.federation.router.store.membership.expiration | 300000 | Expiration time in milliseconds for a membership record. |
| dfs.federation.router.mount-table.cache.update | false | If true, Mount table cache is updated whenever a mount table entry is added, modified or removed for all the routers. |
| dfs.federation.router.mount-table.cache.update.timeout | 1m | Max time to wait for all the routers to finish their mount table cache update. |
| dfs.federation.router.mount-table.cache.update.client.max.time | 5m | Max time a RouterClient connection can be cached. |
### Routing
@ -387,7 +428,7 @@ Forwarding client requests to the right subcluster.
| Property | Default | Description|
|:---- |:---- |:---- |
| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. |
| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. To enable multiple subclusters for a mount point, set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver. |
| dfs.federation.router.namenode.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver` | Class to resolve the namenode for a subcluster. |
### Namenode monitoring
@ -396,7 +437,8 @@ Monitor the namenodes in the subclusters for forwarding the client requests.
| Property | Default | Description|
|:---- |:---- |:---- |
| dfs.federation.router.heartbeat.enable | `true` | If `true`, the Router heartbeats into the State Store. |
| dfs.federation.router.heartbeat.enable | `true` | If `true`, the Router periodically heartbeats its state to the State Store. |
| dfs.federation.router.namenode.heartbeat.enable | | If `true`, the Router gets namenode heartbeats and sends them to the State Store. If not explicitly specified, it takes the same value as `dfs.federation.router.heartbeat.enable`. |
| dfs.federation.router.heartbeat.interval | 5000 | How often the Router should heartbeat into the State Store in milliseconds. |
| dfs.federation.router.monitor.namenode | | The identifier of the namenodes to monitor and heartbeat. |
| dfs.federation.router.monitor.localnamenode.enable | `true` | If `true`, the Router should monitor the namenode in the local machine. |
@ -412,11 +454,23 @@ Global quota supported in federation.
| Property | Default | Description|
|:---- |:---- |:---- |
| dfs.federation.router.quota.enable | `false` | If `true`, the quota system enabled in the Router. |
| dfs.federation.router.quota.enable | `false` | If `true`, the quota system is enabled in the Router. In that case, setting or clearing a sub-cluster's quota directly is not recommended since the Router Admin server will override the sub-cluster's quota with the global quota. |
| dfs.federation.router.quota-cache.update.interval | 60s | How often the Router updates quota cache. This setting supports multiple time unit suffixes. If no suffix is specified then milliseconds is assumed. |
### Security
Kerberos and delegation tokens are supported in federation.
| Property | Default | Description|
|:---- |:---- |:---- |
| dfs.federation.router.keytab.file | | The keytab file used by router to login as its service principal. The principal name is configured with 'dfs.federation.router.kerberos.principal'.|
| dfs.federation.router.kerberos.principal | | The Router service principal. This is typically set to router/_HOST@REALM.TLD. Each Router will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all Routers in an HA setup. |
| dfs.federation.router.kerberos.principal.hostname | | The hostname for the Router containing this configuration file. Will be different for each machine. Defaults to current hostname. |
| dfs.federation.router.kerberos.internal.spnego.principal | `${dfs.web.authentication.kerberos.principal}` | The server principal used by the Router for web UI SPNEGO authentication when Kerberos security is enabled. This is typically set to HTTP/_HOST@REALM.TLD The SPNEGO server principal begins with the prefix HTTP/ by convention. If the value is '*', the web server will attempt to login with every principal specified in the keytab file 'dfs.web.authentication.kerberos.keytab'. |
| dfs.federation.router.secret.manager.class | `org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl` | Class that implements the delegation token state store. The default implementation uses ZooKeeper as the backend to store delegation tokens. |
Metrics
-------
The Router and State Store statistics are exposed in metrics/JMX. This information is very useful for monitoring.
More metrics info can see [Router RPC Metrics](../../hadoop-project-dist/hadoop-common/Metrics.html#RouterRPCMetrics) and [State Store Metrics](../../hadoop-project-dist/hadoop-common/Metrics.html#StateStoreMetrics).
More metrics information can be found in [RBF Metrics](../../hadoop-project-dist/hadoop-common/Metrics.html#RBFMetrics), [Router RPC Metrics](../../hadoop-project-dist/hadoop-common/Metrics.html#RouterRPCMetrics) and [State Store Metrics](../../hadoop-project-dist/hadoop-common/Metrics.html#StateStoreMetrics).

View File

@ -43,12 +43,22 @@ public class RouterHDFSContract extends HDFSContract {
}
public static void createCluster() throws IOException {
createCluster(null);
}
public static void createCluster(Configuration conf) throws IOException {
createCluster(true, 2, conf);
}
public static void createCluster(
boolean ha, int numNameServices, Configuration conf) throws IOException {
try {
cluster = new MiniRouterDFSCluster(true, 2);
cluster = new MiniRouterDFSCluster(ha, numNameServices, conf);
// Start NNs and DNs and wait until ready
cluster.startCluster();
cluster.startCluster(conf);
cluster.addRouterOverrides(conf);
// Start routers with only an RPC service
cluster.startRouters();
@ -85,6 +95,10 @@ public class RouterHDFSContract extends HDFSContract {
return cluster.getCluster();
}
public static MiniRouterDFSCluster getRouterCluster() {
return cluster;
}
public static FileSystem getFileSystem() throws IOException {
//assumes cluster is not null
Assert.assertNotNull("cluster not created", cluster);

View File

@ -0,0 +1,154 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HTTP_POLICY_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_DATA_TRANSFER_PROTECTION_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_RPC_BIND_HOST_KEY;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS;
import static org.junit.Assert.assertTrue;
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl;
import org.apache.hadoop.hdfs.server.federation.security.MockDelegationTokenSecretManager;
import org.apache.hadoop.http.HttpConfig;
import org.apache.hadoop.minikdc.MiniKdc;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
import org.apache.hadoop.test.GenericTestUtils;
/**
* Test utility to provide a standard routine to initialize the configuration
* for secure RBF HDFS cluster.
*/
public final class SecurityConfUtil {
// SSL keystore
private static String keystoresDir;
private static String sslConfDir;
// State string for mini dfs
private static final String SPNEGO_USER_NAME = "HTTP";
private static final String ROUTER_USER_NAME = "router";
private static String spnegoPrincipal;
private static String routerPrincipal;
private SecurityConfUtil() {
// Utility Class
}
public static Configuration initSecurity() throws Exception {
// delete old test dir
File baseDir = GenericTestUtils.getTestDir(
SecurityConfUtil.class.getSimpleName());
FileUtil.fullyDelete(baseDir);
assertTrue(baseDir.mkdirs());
// start a mini kdc with default conf
Properties kdcConf = MiniKdc.createConf();
MiniKdc kdc = new MiniKdc(kdcConf, baseDir);
kdc.start();
Configuration conf = new HdfsConfiguration();
SecurityUtil.setAuthenticationMethod(
UserGroupInformation.AuthenticationMethod.KERBEROS, conf);
UserGroupInformation.setConfiguration(conf);
assertTrue("Expected configuration to enable security",
UserGroupInformation.isSecurityEnabled());
// Setup the keytab
File keytabFile = new File(baseDir, "test.keytab");
String keytab = keytabFile.getAbsolutePath();
// Windows will not reverse name lookup "127.0.0.1" to "localhost".
String krbInstance = Path.WINDOWS ? "127.0.0.1" : "localhost";
kdc.createPrincipal(keytabFile,
SPNEGO_USER_NAME + "/" + krbInstance,
ROUTER_USER_NAME + "/" + krbInstance);
routerPrincipal =
ROUTER_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
spnegoPrincipal =
SPNEGO_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
// Setup principals and keytabs for dfs
conf.set(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
conf.set(DFS_NAMENODE_KEYTAB_FILE_KEY, keytab);
conf.set(DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
conf.set(DFS_DATANODE_KEYTAB_FILE_KEY, keytab);
conf.set(DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY, spnegoPrincipal);
conf.set(DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY, keytab);
conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY, "localhost:0");
conf.set(DFS_DATANODE_HTTPS_ADDRESS_KEY, "localhost:0");
conf.setBoolean(DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, true);
conf.set(DFS_DATA_TRANSFER_PROTECTION_KEY, "authentication");
conf.set(DFS_HTTP_POLICY_KEY, HttpConfig.Policy.HTTPS_ONLY.name());
// Setup SSL configuration
keystoresDir = baseDir.getAbsolutePath();
sslConfDir = KeyStoreTestUtil.getClasspathDir(
SecurityConfUtil.class);
KeyStoreTestUtil.setupSSLConfig(
keystoresDir, sslConfDir, conf, false);
conf.set(DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY,
KeyStoreTestUtil.getClientSSLConfigFileName());
conf.set(DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY,
KeyStoreTestUtil.getServerSSLConfigFileName());
// Setup principals and keytabs for router
conf.set(DFS_ROUTER_KEYTAB_FILE_KEY, keytab);
conf.set(DFS_ROUTER_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
conf.set(DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY, "*");
// Setup basic state store
conf.setClass(RBFConfigKeys.FEDERATION_STORE_DRIVER_CLASS,
StateStoreFileImpl.class, StateStoreDriver.class);
// We need to specify the host to prevent 0.0.0.0 as the host address
conf.set(DFS_ROUTER_RPC_BIND_HOST_KEY, "localhost");
conf.set(DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
MockDelegationTokenSecretManager.class.getName());
return conf;
}
}

View File

@ -0,0 +1,46 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractAppendTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
/**
* Test secure append operations on the Router-based FS.
*/
public class TestRouterHDFSContractAppendSecure
extends AbstractContractAppendTest {
@BeforeClass
public static void createCluster() throws Exception {
RouterHDFSContract.createCluster(initSecurity());
}
@AfterClass
public static void teardownCluster() throws IOException {
RouterHDFSContract.destroyCluster();
}
@Override
protected AbstractFSContract createContract(Configuration conf) {
return new RouterHDFSContract(conf);
}
}

View File

@ -0,0 +1,51 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.AbstractContractConcatTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.io.IOException;
import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
/**
* Test secure concat operations on the Router-based FS.
*/
public class TestRouterHDFSContractConcatSecure
extends AbstractContractConcatTest {
@BeforeClass
public static void createCluster() throws Exception {
RouterHDFSContract.createCluster(initSecurity());
// perform a simple operation on the cluster to verify it is up
RouterHDFSContract.getFileSystem().getDefaultBlockSize(new Path("/"));
}
@AfterClass
public static void teardownCluster() throws IOException {
RouterHDFSContract.destroyCluster();
}
@Override
protected AbstractFSContract createContract(Configuration conf) {
return new RouterHDFSContract(conf);
}
}

View File

@ -0,0 +1,48 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractCreateTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.io.IOException;
import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
/**
* Test secure create operations on the Router-based FS.
*/
public class TestRouterHDFSContractCreateSecure
extends AbstractContractCreateTest {
@BeforeClass
public static void createCluster() throws Exception {
RouterHDFSContract.createCluster(initSecurity());
}
@AfterClass
public static void teardownCluster() throws IOException {
RouterHDFSContract.destroyCluster();
}
@Override
protected AbstractFSContract createContract(Configuration conf) {
return new RouterHDFSContract(conf);
}
}

View File

@ -0,0 +1,115 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.contract.router;
import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
import static org.apache.hadoop.hdfs.server.federation.metrics.TestRBFMetrics.ROUTER_BEAN;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.apache.hadoop.fs.contract.AbstractFSContractTestBase;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.federation.FederationTestUtils;
import org.apache.hadoop.hdfs.server.federation.metrics.RouterMBean;
import org.apache.hadoop.security.token.SecretManager;
import org.apache.hadoop.security.token.Token;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
/**
 * Test to verify router contracts for delegation token operations.
 */
public class TestRouterHDFSContractDelegationToken
    extends AbstractFSContractTestBase {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(false, 1, initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }

  @Rule
  public ExpectedException exceptionRule = ExpectedException.none();

  @Test
  public void testRouterDelegationToken() throws Exception {
    RouterMBean bean = FederationTestUtils.getBean(
        ROUTER_BEAN, RouterMBean.class);
    // Initially there is no token in memory
    assertEquals(0, bean.getCurrentTokensCount());
    // Generate a delegation token
    Token<DelegationTokenIdentifier> token =
        (Token<DelegationTokenIdentifier>) getFileSystem()
        .getDelegationToken("router");
    assertNotNull(token);
    // Verify properties of the token
    assertEquals("HDFS_DELEGATION_TOKEN", token.getKind().toString());
    DelegationTokenIdentifier identifier = token.decodeIdentifier();
    assertNotNull(identifier);
    String owner = identifier.getOwner().toString();
    // Windows will not reverse name lookup "127.0.0.1" to "localhost".
    String host = Path.WINDOWS ? "127.0.0.1" : "localhost";
    String expectedOwner = "router/" + host + "@EXAMPLE.COM";
    assertEquals(expectedOwner, owner);
    assertEquals("router", identifier.getRenewer().toString());
    int masterKeyId = identifier.getMasterKeyId();
    assertTrue(masterKeyId > 0);
    int sequenceNumber = identifier.getSequenceNumber();
    assertTrue(sequenceNumber > 0);
    long existingMaxTime = token.decodeIdentifier().getMaxDate();
    assertTrue(identifier.getMaxDate() >= identifier.getIssueDate());
    // Exactly one token is expected after the generation
    assertEquals(1, bean.getCurrentTokensCount());
    // Renew the delegation token
    long expiryTime = token.renew(initSecurity());
    assertNotNull(token);
    assertEquals(existingMaxTime, token.decodeIdentifier().getMaxDate());
    // Expiry time after renewal should never exceed the max time of the token.
    assertTrue(expiryTime <= existingMaxTime);
    // Renewal should retain the old master key id and sequence number
    identifier = token.decodeIdentifier();
    assertEquals(identifier.getMasterKeyId(), masterKeyId);
    assertEquals(identifier.getSequenceNumber(), sequenceNumber);
    assertEquals(1, bean.getCurrentTokensCount());
    // Cancel the delegation token
    token.cancel(initSecurity());
    assertEquals(0, bean.getCurrentTokensCount());
    // Renewing a cancelled token must fail
    exceptionRule.expect(SecretManager.InvalidToken.class);
    token.renew(initSecurity());
  }
}
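
For readers following the token flow exercised above, the same lifecycle is available through the public FileSystem API. The snippet below is a minimal sketch and is not part of this change; the Router address (rbf-router.example.com:8888) and the "router" renewer are assumptions chosen purely for illustration, and the caller is assumed to already hold valid Kerberos credentials.

// Hedged sketch of the delegation token lifecycle against a Router endpoint.
// The host/port and renewer are illustrative assumptions, not values from this change.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class RouterTokenLifecycleSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        URI.create("hdfs://rbf-router.example.com:8888/"), conf);

    // Ask the Router for a delegation token, naming "router" as the renewer.
    Token<?> token = fs.getDelegationToken("router");

    // Renew while the token is valid; the returned expiry time never
    // exceeds the token's max date.
    long expiry = token.renew(conf);
    System.out.println("Token renewed until " + expiry);

    // Cancel when finished; a later renew() would fail with
    // SecretManager.InvalidToken, as asserted in the test above.
    token.cancel(conf);
  }
}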


@@ -0,0 +1,46 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractDeleteTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;

/**
 * Test secure delete operations on the Router-based FS.
 */
public class TestRouterHDFSContractDeleteSecure
    extends AbstractContractDeleteTest {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }
}
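
The remaining secure contract tests in this diff repeat the same shape: bring the Router-backed mini cluster up with the Kerberos test configuration from initSecurity() in @BeforeClass, tear it down in @AfterClass, and let the inherited contract test drive the actual file system assertions. As a hedged illustration of how the pattern extends to other operations, a rename variant would look like the sketch below; this class is hypothetical and is not one of the files in this diff.

// Hypothetical example, not part of this change: the same cluster lifecycle
// reused for a secure rename contract test.
package org.apache.hadoop.fs.contract.router;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;

public class TestRouterHDFSContractRenameSecure
    extends AbstractContractRenameTest {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }
}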


@@ -0,0 +1,47 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;

/**
 * Test secure get file status operations on the Router-based FS.
 */
public class TestRouterHDFSContractGetFileStatusSecure
    extends AbstractContractGetFileStatusTest {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }
}


@@ -0,0 +1,48 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractMkdirTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import java.io.IOException;

import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;

/**
 * Test secure dir operations on the Router-based FS.
 */
public class TestRouterHDFSContractMkdirSecure
    extends AbstractContractMkdirTest {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }
}


@@ -0,0 +1,47 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License. See accompanying LICENSE file.
*/
package org.apache.hadoop.fs.contract.router;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractOpenTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.junit.AfterClass;
import org.junit.BeforeClass;

import java.io.IOException;

import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;

/**
 * Test secure open operations on the Router-based FS.
 */
public class TestRouterHDFSContractOpenSecure extends AbstractContractOpenTest {

  @BeforeClass
  public static void createCluster() throws Exception {
    RouterHDFSContract.createCluster(initSecurity());
  }

  @AfterClass
  public static void teardownCluster() throws IOException {
    RouterHDFSContract.destroyCluster();
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new RouterHDFSContract(conf);
  }
}

Some files were not shown because too many files have changed in this diff.