<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-02-27
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Hadoop KMS – Hadoop Key Management Server (KMS) - Documentation Sets</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230227" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr />
</div>
</div>
<div id="breadcrumbs">
<div class="xright"><a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
| Last Published: 2023-02-27
| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr />
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop Key Management Server (KMS) - Documentation Sets</h1>
<ul>
<li><a href="#KMS_Client_Configuration">KMS Client Configuration</a></li>
<li><a href="#KMS">KMS</a>
<ul>
<li><a href="#Start.2FStop_the_KMS">Start/Stop the KMS</a></li>
<li><a href="#KMS_Configuration">KMS Configuration</a></li>
<li><a href="#KMS_HTTP_Configuration">KMS HTTP Configuration</a></li>
<li><a href="#KMS_Cache">KMS Cache</a>
<ul>
<li><a href="#CachingKeyProvider">CachingKeyProvider</a></li>
<li><a href="#KeyProvider">KeyProvider</a></li></ul></li>
<li><a href="#KMS_Aggregated_Audit_logs">KMS Aggregated Audit logs</a></li>
<li><a href="#KMS_Security_Configuration">KMS Security Configuration</a>
<ul>
<li><a href="#Enabling_Kerberos_HTTP_SPNEGO_Authentication">Enabling Kerberos HTTP SPNEGO Authentication</a></li>
<li><a href="#KMS_Proxyuser_Configuration">KMS Proxyuser Configuration</a></li>
<li><a href="#KMS_over_HTTPS_.28SSL.29">KMS over HTTPS (SSL)</a></li>
<li><a href="#ACLs_.28Access_Control_Lists.29">ACLs (Access Control Lists)</a></li></ul></li>
<li><a href="#KMS_Delegation_Token_Configuration">KMS Delegation Token Configuration</a></li>
<li><a href="#High_Availability">High Availability</a>
<ul>
<li><a href="#Behind_a_Load-Balancer_or_VIP">Behind a Load-Balancer or VIP</a></li>
<li><a href="#Using_LoadBalancingKMSClientProvider">Using LoadBalancingKMSClientProvider</a></li>
<li><a href="#HTTP_Kerberos_Principals_Configuration">HTTP Kerberos Principals Configuration</a></li>
<li><a href="#HTTP_Authentication_Signature">HTTP Authentication Signature</a></li>
<li><a href="#Delegation_Tokens">Delegation Tokens</a></li></ul></li>
<li><a href="#KMS_HTTP_REST_API">KMS HTTP REST API</a>
<ul>
<li><a href="#Create_a_Key">Create a Key</a></li>
<li><a href="#Rollover_Key">Rollover Key</a></li>
<li><a href="#Invalidate_Cache_of_a_Key">Invalidate Cache of a Key</a></li>
<li><a href="#Delete_Key">Delete Key</a></li>
<li><a href="#Get_Key_Metadata">Get Key Metadata</a></li>
<li><a href="#Get_Current_Key">Get Current Key</a></li>
<li><a href="#Generate_Encrypted_Key_for_Current_KeyVersion">Generate Encrypted Key for Current KeyVersion</a></li>
<li><a href="#Decrypt_Encrypted_Key">Decrypt Encrypted Key</a></li>
<li><a href="#Re-encrypt_Encrypted_Key_With_The_Latest_KeyVersion">Re-encrypt Encrypted Key With The Latest KeyVersion</a></li>
<li><a href="#Batch_Re-encrypt_Encrypted_Keys_With_The_Latest_KeyVersion">Batch Re-encrypt Encrypted Keys With The Latest KeyVersion</a></li>
<li><a href="#Get_Key_Version">Get Key Version</a></li>
<li><a href="#Get_Key_Versions">Get Key Versions</a></li>
<li><a href="#Get_Key_Names">Get Key Names</a></li>
<li><a href="#Get_Keys_Metadata">Get Keys Metadata</a></li></ul></li>
<li><a href="#Deprecated_Environment_Variables">Deprecated Environment Variables</a></li>
<li><a href="#Default_HTTP_Services">Default HTTP Services</a></li></ul></li></ul>
<p>Hadoop KMS is a cryptographic key management server based on Hadoop’s <b>KeyProvider</b> API.</p>
<p>It provides client and server components which communicate over HTTP using a REST API.</p>
<p>The client is a <code>KeyProvider</code> implementation that interacts with the KMS using the KMS HTTP REST API.</p>
<p>KMS and its client have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS secure transport.</p>
<p>KMS is a Java Jetty web application.</p><section>
<h2><a name="KMS_Client_Configuration"></a>KMS Client Configuration</h2>
<p>The KMS client <code>KeyProvider</code> uses the <b>kms</b> scheme, and the embedded URL must be the URL of the KMS. For example, for a KMS running on <code>http://localhost:9600/kms</code>, the KeyProvider URI is <code>kms://http@localhost:9600/kms</code>. And, for a KMS running on <code>https://localhost:9600/kms</code>, the KeyProvider URI is <code>kms://https@localhost:9600/kms</code>.</p>
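<p>The translation from a <code>kms://</code> KeyProvider URI to the HTTP(S) endpoint it denotes can be sketched as follows. This is only an illustration of the naming convention described above, not Hadoop’s client code; the function name is made up.</p>

```python
# Illustrative sketch of the kms:// URI convention; not Hadoop's implementation.
def kms_uri_to_url(kms_uri: str) -> str:
    """Translate a KeyProvider URI such as kms://http@localhost:9600/kms
    into the HTTP(S) URL of the KMS it points at."""
    scheme, rest = kms_uri.split("://", 1)        # "kms", "http@localhost:9600/kms"
    if scheme != "kms":
        raise ValueError("not a kms:// URI: " + kms_uri)
    transport, endpoint = rest.split("@", 1)      # "http", "localhost:9600/kms"
    if transport not in ("http", "https"):
        raise ValueError("unsupported transport: " + transport)
    return f"{transport}://{endpoint}"

assert kms_uri_to_url("kms://http@localhost:9600/kms") == "http://localhost:9600/kms"
assert kms_uri_to_url("kms://https@localhost:9600/kms") == "https://localhost:9600/kms"
```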
<p>The following is an example of configuring the HDFS NameNode as a KMS client in <code>core-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.key.provider.path&lt;/name&gt;
  &lt;value&gt;kms://http@localhost:9600/kms&lt;/value&gt;
  &lt;description&gt;
    The KeyProvider to use when interacting with encryption keys used
    when reading and writing to an encryption zone.
  &lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>
<h2><a name="KMS"></a>KMS</h2><section>
<h3><a name="Start.2FStop_the_KMS"></a>Start/Stop the KMS</h3>
<p>To start/stop KMS, use <code>hadoop --daemon start|stop kms</code>. For example:</p>
<div class="source">
<div class="source">
<pre>hadoop-3.4.0-SNAPSHOT $ hadoop --daemon start kms
</pre></div></div>
<p>NOTE: The script <code>kms.sh</code> is deprecated. It is now just a wrapper of <code>hadoop kms</code>.</p></section><section>
<h3><a name="KMS_Configuration"></a>KMS Configuration</h3>
<p>Configure the KMS backing KeyProvider properties in the <code>etc/hadoop/kms-site.xml</code> configuration file:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.kms.key.provider.uri&lt;/name&gt;
  &lt;value&gt;jceks://file@/${user.home}/kms.keystore&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.keystore.java-keystore-provider.password-file&lt;/name&gt;
  &lt;value&gt;kms.keystore.password&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>The password file is looked up in Hadoop’s configuration directory via the classpath.</p>
<p>NOTE: You need to restart the KMS for the configuration changes to take effect.</p>
<p>NOTE: The KMS server can choose any <code>KeyProvider</code> implementation as the backing provider. The example here uses a JavaKeyStoreProvider, which should only be used for experimental purposes and never in production. For detailed usage and caveats of JavaKeyStoreProvider, please see the <a href="../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Keystore_Passwords">Keystore Passwords section of the Credential Provider API</a>.</p></section><section>
<h3><a name="KMS_HTTP_Configuration"></a>KMS HTTP Configuration</h3>
<p>KMS pre-configures the HTTP port to 9600.</p>
<p>KMS supports the following HTTP <a href="./kms-default.html">configuration properties</a> in <code>etc/hadoop/kms-site.xml</code>.</p>
<p>NOTE: You need to restart the KMS for the configuration changes to take effect.</p></section><section>
<h3><a name="KMS_Cache"></a>KMS Cache</h3>
<p>KMS has two kinds of caching: a CachingKeyProvider for caching encryption keys, and a KeyProvider cache for caching EEKs.</p><section>
<h4><a name="CachingKeyProvider"></a>CachingKeyProvider</h4>
<p>KMS caches encryption keys for a short period of time to avoid excessive hits to the underlying KeyProvider.</p>
<p>This cache is enabled by default (it can be disabled by setting the <code>hadoop.kms.cache.enable</code> boolean property to false).</p>
<p>This cache is used with the following three methods only: <code>getCurrentKey()</code>, <code>getKeyVersion()</code> and <code>getMetadata()</code>.</p>
<p>For the <code>getCurrentKey()</code> method, cached entries are kept for a maximum of 30000 milliseconds regardless of the number of times the key is accessed (to avoid stale keys being considered current).</p>
<p>For the <code>getKeyVersion()</code> and <code>getMetadata()</code> methods, cached entries are kept with a default inactivity timeout of 600000 milliseconds (10 minutes).</p>
<p>The cache is invalidated when the key is deleted by <code>deleteKey()</code> or when <code>invalidateCache()</code> is called.</p>
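<p>The two expiry policies above can be contrasted with a small sketch. This is illustrative only: the real KMS uses Guava caches internally, and the class names here are made up.</p>

```python
# Illustrative contrast (not KMS code) of the two expiry policies described
# above: a hard TTL from insertion time for the current key, and an
# inactivity (expire-after-access) timeout for key versions and metadata.
import time

class TtlCache:
    """Expire entries a fixed interval after they were written,
    no matter how often they are read (like the current-key cache)."""
    def __init__(self, ttl_s):
        self.ttl_s, self._d = ttl_s, {}
    def put(self, k, v):
        self._d[k] = (v, time.monotonic() + self.ttl_s)
    def get(self, k):
        v, deadline = self._d.get(k, (None, 0.0))
        return v if time.monotonic() < deadline else None

class InactivityCache:
    """Expire entries only after a period with no reads
    (like the key-version/metadata cache)."""
    def __init__(self, timeout_s):
        self.timeout_s, self._d = timeout_s, {}
    def put(self, k, v):
        self._d[k] = (v, time.monotonic())
    def get(self, k):
        if k not in self._d:
            return None
        v, last = self._d[k]
        if time.monotonic() - last >= self.timeout_s:
            del self._d[k]
            return None
        self._d[k] = (v, time.monotonic())  # each read pushes the expiry out
        return v
```

In KMS terms, the hard TTL corresponds to <code>hadoop.kms.current.key.cache.timeout.ms</code> (30000 ms by default) and the inactivity timeout to <code>hadoop.kms.cache.timeout.ms</code> (600000 ms by default).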
<p>These configurations can be changed via the following properties in the <code>etc/hadoop/kms-site.xml</code> configuration file:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.kms.cache.enable&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.kms.cache.timeout.ms&lt;/name&gt;
  &lt;value&gt;600000&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.kms.current.key.cache.timeout.ms&lt;/name&gt;
  &lt;value&gt;30000&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>
<h4><a name="KeyProvider"></a>KeyProvider</h4>
<p>Architecturally, both the server side (e.g. KMS) and the client side (e.g. NameNode) have a cache for EEKs. The following are configurable on the cache:</p>
<ul>
<li>The size of the cache. This is the maximum number of EEKs that can be cached under each key name.</li>
<li>A low watermark on the cache. For each key name, if after a get call the number of cached EEKs is less than (size * low watermark), the cache under this key name is filled asynchronously. For each key name, only one thread may run the asynchronous filling.</li>
<li>The maximum number of asynchronous threads overall, across key names, allowed to fill the queue in a cache.</li>
<li>The cache expiry time, in milliseconds. Internally a Guava cache is used as the cache implementation. The expiry approach is <a class="externalLink" href="https://code.google.com/p/guava-libraries/wiki/CachesExplained">expireAfterAccess</a>.</li>
</ul>
<p>Note that due to the asynchronous filling mechanism, it is possible that after <code>rollNewVersion()</code> the caller still gets old EEKs. In the worst case, the caller may get up to (server-side cache size + client-side cache size) old EEKs, or keep getting them until both caches expire. This behavior is a trade-off to avoid locking on the cache, and is acceptable since old-version EEKs can still be used to decrypt.</p>
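<p>The low-watermark refill policy described in the list above can be sketched as follows. This is a synchronous, illustrative stand-in: the real cache refills asynchronously in background threads, and all names here are hypothetical.</p>

```python
# Minimal sketch of the low-watermark refill policy described above.
# The real Hadoop cache refills asynchronously; here the refill is
# synchronous so the policy is easy to follow.
class EEKCache:
    def __init__(self, size, low_watermark, generate_eek):
        self.size = size                  # max EEKs cached per key name
        self.low = low_watermark          # refill threshold, as a fraction of size
        self.generate_eek = generate_eek  # stand-in for a KMS generate-EEK call
        self.queues = {}                  # key name -> list of cached EEKs

    def get(self, key_name):
        q = self.queues.setdefault(key_name, [])
        if not q:  # empty: fill before serving
            q.extend(self.generate_eek(key_name) for _ in range(self.size))
        eek = q.pop()
        # if the queue dropped below size * low watermark, top it back up
        if len(q) < self.size * self.low:
            q.extend(self.generate_eek(key_name)
                     for _ in range(self.size - len(q)))
        return eek

counter = 0
def gen(key):
    global counter
    counter += 1
    return f"{key}-eek-{counter}"

cache = EEKCache(size=10, low_watermark=0.3, generate_eek=gen)
cache.get("mykey")  # first get triggers an initial fill of 10 EEKs
```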
<p>Below are the configurations and their default values.</p>
<p>The server side can be changed via the following properties in the <code>etc/hadoop/kms-site.xml</code> configuration file:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.encrypted.key.cache.size&lt;/name&gt;
  &lt;value&gt;500&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.encrypted.key.cache.low.watermark&lt;/name&gt;
  &lt;value&gt;0.3&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.encrypted.key.cache.num.fill.threads&lt;/name&gt;
  &lt;value&gt;2&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.encrypted.key.cache.expiry&lt;/name&gt;
  &lt;value&gt;43200000&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>The client side can be changed via the following properties in the <code>etc/hadoop/core-site.xml</code> configuration file:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.client.encrypted.key.cache.size&lt;/name&gt;
  &lt;value&gt;500&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.client.encrypted.key.cache.low-watermark&lt;/name&gt;
  &lt;value&gt;0.3&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.client.encrypted.key.cache.num.refill.threads&lt;/name&gt;
  &lt;value&gt;2&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;hadoop.security.kms.client.encrypted.key.cache.expiry&lt;/name&gt;
  &lt;value&gt;43200000&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
< / section > < / section > < section >
< h3 > < a name = "KMS_Aggregated_Audit_logs" > < / a > KMS Aggregated Audit logs< / h3 >
< p > Audit logs are aggregated for API accesses to the GET_KEY_VERSION, GET_CURRENT_KEY, DECRYPT_EEK, GENERATE_EEK, REENCRYPT_EEK operations.< / p >
< p > Entries are grouped by the (user, key, operation) combined key for a configurable aggregation interval, after which the number of accesses to the specified end-point by the user for the given key is flushed to the audit log.< / p >
< p > The aggregation interval is configured via the following property:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.aggregation.delay.ms< /name>
< value> 10000< /value>
< /property>
< / pre > < / div > < / div >
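The grouping-and-flush behavior can be sketched as follows. This is a simplified model, not the actual KMS audit code; `AuditAggregatorSketch` and the log-line format are illustrative, and a fake clock stands in for real time.

```python
import time

class AuditAggregatorSketch:
    """Toy model of KMS aggregated audit logging (illustrative only)."""

    def __init__(self, delay_ms, now=time.monotonic):
        self.delay = delay_ms / 1000.0
        self.now = now
        self.pending = {}   # (user, key, op) -> [count, first_seen]
        self.flushed = []   # lines that would go to the audit log

    def record(self, user, key, op):
        entry = self.pending.setdefault((user, key, op), [0, self.now()])
        entry[0] += 1
        self.maybe_flush()

    def maybe_flush(self):
        t = self.now()
        for k, (count, first) in list(self.pending.items()):
            if t - first >= self.delay:
                user, key, op = k
                self.flushed.append(f"{op} user={user} key={key} accessCount={count}")
                del self.pending[k]

fake_clock = [0.0]
agg = AuditAggregatorSketch(delay_ms=10000, now=lambda: fake_clock[0])
agg.record("alice", "key1", "DECRYPT_EEK")
agg.record("alice", "key1", "DECRYPT_EEK")
fake_clock[0] = 11.0           # advance past the 10s aggregation interval
agg.record("alice", "key1", "DECRYPT_EEK")
```

After the interval elapses, the three accesses by the same (user, key, operation) triple are flushed as a single aggregated entry.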
< / section > < section >
< h3 > < a name = "KMS_Security_Configuration" > < / a > KMS Security Configuration< / h3 > < section >
< h4 > < a name = "Enabling_Kerberos_HTTP_SPNEGO_Authentication" > < / a > Enabling Kerberos HTTP SPNEGO Authentication< / h4 >
< p > Configure the Kerberos < code > etc/krb5.conf< / code > file with the information of your KDC server.< / p >
< p > Create a service principal and its keytab for the KMS; it must be an < code > HTTP< / code > service principal.< / p >
< p > Configure KMS < code > etc/hadoop/kms-site.xml< / code > with the correct security values, for example:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.authentication.type< /name>
< value> kerberos< /value>
< /property>
< property>
< name> hadoop.kms.authentication.kerberos.keytab< /name>
< value> ${user.home}/kms.keytab< /value>
< /property>
< property>
< name> hadoop.kms.authentication.kerberos.principal< /name>
< value> HTTP/localhost< /value>
< /property>
< property>
< name> hadoop.kms.authentication.kerberos.name.rules< /name>
< value> DEFAULT< /value>
< /property>
< / pre > < / div > < / div >
< p > NOTE: You need to restart the KMS for the configuration changes to take effect.< / p > < / section > < section >
< h4 > < a name = "KMS_Proxyuser_Configuration" > < / a > KMS Proxyuser Configuration< / h4 >
< p > Each proxyuser must be configured in < code > etc/hadoop/kms-site.xml< / code > using the following properties:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.proxyuser.#USER#.users< /name>
< value> *< /value>
< /property>
< property>
< name> hadoop.kms.proxyuser.#USER#.groups< /name>
< value> *< /value>
< /property>
< property>
< name> hadoop.kms.proxyuser.#USER#.hosts< /name>
< value> *< /value>
< /property>
< / pre > < / div > < / div >
< p > < code > #USER#< / code > is the username of the proxyuser to configure.< / p >
< p > The < code > users< / code > property indicates the users that can be impersonated.< / p >
< p > The < code > groups< / code > property indicates the groups users being impersonated must belong to.< / p >
< p > At least one of the < code > users< / code > or < code > groups< / code > properties must be defined. If both are specified, then the configured proxyuser will be able to impersonate any user in the < code > users< / code > list and any user belonging to one of the groups in the < code > groups< / code > list.< / p >
< p > The < code > hosts< / code > property indicates from which host the proxyuser can make impersonation requests.< / p >
< p > If < code > users< / code > , < code > groups< / code > or < code > hosts< / code > has a < code > *< / code > , it means there are no restrictions for the proxyuser regarding users, groups or hosts.< / p > < / section > < section >
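The proxyuser rules above can be sketched as a small check. The property names follow the `kms-site.xml` format shown earlier; the evaluation logic is a simplified illustration (the real Hadoop implementation also resolves group membership and host addresses), and the function name is hypothetical.

```python
def proxyuser_allowed(conf, proxyuser, target_user, target_groups, host):
    """Sketch: may `proxyuser` impersonate `target_user` from `host`?"""
    def prop(suffix):
        raw = conf.get(f"hadoop.kms.proxyuser.{proxyuser}.{suffix}", "")
        return {v.strip() for v in raw.split(",") if v.strip()}

    users, groups, hosts = prop("users"), prop("groups"), prop("hosts")
    # '*' means no restriction for that dimension.
    host_ok = "*" in hosts or host in hosts
    user_ok = "*" in users or target_user in users
    group_ok = "*" in groups or bool(groups & set(target_groups))
    # The host must match, and the user must match either list.
    return host_ok and (user_ok or group_ok)

conf = {
    "hadoop.kms.proxyuser.oozie.users": "alice,bob",
    "hadoop.kms.proxyuser.oozie.hosts": "*",
}
```

With this configuration, `oozie` may impersonate `alice` from any host, but not an unlisted user such as `carol`.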
< h4 > < a name = "KMS_over_HTTPS_.28SSL.29" > < / a > KMS over HTTPS (SSL)< / h4 >
< p > Enable SSL in < code > etc/hadoop/kms-site.xml< / code > :< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.ssl.enabled< /name>
< value> true< /value>
< description>
Whether SSL is enabled. Default is false, i.e. disabled.
< /description>
< /property>
< / pre > < / div > < / div >
< p > Configure < code > etc/hadoop/ssl-server.xml< / code > with proper values, for example:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> ssl.server.keystore.location< /name>
< value> ${user.home}/.keystore< /value>
< description> Keystore to be used. Must be specified.< /description>
< /property>
< property>
< name> ssl.server.keystore.password< /name>
< value> < /value>
< description> Must be specified.< /description>
< /property>
< property>
< name> ssl.server.keystore.keypassword< /name>
< value> < /value>
< description> Must be specified.< /description>
< /property>
< / pre > < / div > < / div >
< p > The SSL passwords can be secured by a credential provider. See < a href = "../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html" > Credential Provider API< / a > .< / p >
< p > You need to create an SSL certificate for the KMS. As the < code > kms< / code > Unix user, use the Java < code > keytool< / code > command to create the SSL certificate:< / p >
< div class = "source" >
< div class = "source" >
< pre > $ keytool -genkey -alias jetty -keyalg RSA
< / pre > < / div > < / div >
< p > You will be asked a series of questions in an interactive prompt. It will create the keystore file, which will be named < b > .keystore< / b > and located in the user's home directory.< / p >
< p > The password you enter for "keystore password" must match the value of the property < code > ssl.server.keystore.password< / code > set in the < code > ssl-server.xml< / code > in the configuration directory.< / p >
< p > The answer to "What is your first and last name?" (i.e. "CN") must be the hostname of the machine where the KMS will be running.< / p >
< p > NOTE: You need to restart the KMS for the configuration changes to take effect.< / p >
< p > NOTE: Some old SSL clients may use weak ciphers that are not supported by the KMS server. It is recommended to upgrade the SSL client.< / p > < / section > < section >
< h4 > < a name = "ACLs_.28Access_Control_Lists.29" > < / a > ACLs (Access Control Lists)< / h4 >
< p > KMS supports ACLs (Access Control Lists) for fine-grained permission control.< / p >
< p > Two levels of ACLs exist in KMS: KMS ACLs and Key ACLs. KMS ACLs control access at the KMS operation level, and take precedence over Key ACLs. In particular, the permission check against Key ACLs is performed only if permission is granted at the KMS ACLs level.< / p >
< p > The configuration and usage of KMS ACLs and Key ACLs are described in the sections below.< / p > < section >
< h5 > < a name = "KMS_ACLs" > < / a > KMS ACLs< / h5 >
< p > The KMS ACL configuration is defined in the KMS < code > etc/hadoop/kms-acls.xml< / code > configuration file. This file is hot-reloaded when it changes.< / p >
< p > KMS supports both fine-grained access control and a blacklist for KMS operations via a set of ACL configuration properties.< / p >
< p > A user accessing KMS is first checked for inclusion in the Access Control List for the requested operation, and then checked for exclusion in the blacklist for the operation, before access is granted.< / p >
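This two-step check (ACL inclusion, then blacklist exclusion) can be sketched as below. The property names match the `kms-acls.xml` sample that follows; the function itself is a simplified illustration, not the real Hadoop `AccessControlList` implementation.

```python
def kms_access_allowed(conf, op, user, groups=()):
    """Sketch: operation-level KMS check. The user must be in the ACL for
    the operation AND must not be in its blacklist."""
    def members(prop):
        raw = conf.get(prop, "")
        return {v.strip() for v in raw.split(",") if v.strip()}

    acl = members(f"hadoop.kms.acl.{op}")
    blacklist = members(f"hadoop.kms.blacklist.{op}")
    in_acl = "*" in acl or user in acl or bool(acl & set(groups))
    in_blacklist = user in blacklist or bool(blacklist & set(groups))
    return in_acl and not in_blacklist

conf = {
    "hadoop.kms.acl.CREATE": "*",
    "hadoop.kms.blacklist.CREATE": "hdfs,foo",
}
```

With the sample values, every user may create keys except the blacklisted `hdfs` and `foo`.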
< div class = "source" >
< div class = "source" >
< pre > < configuration>
< property>
< name> hadoop.kms.acl.CREATE< /name>
< value> *< /value>
< description>
ACL for create-key operations.
If the user is not in the GET ACL, the key material is not returned
as part of the response.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.CREATE< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for create-key operations.
If the user is in the Blacklist, the key material is not returned
as part of the response.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.DELETE< /name>
< value> *< /value>
< description>
ACL for delete-key operations.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.DELETE< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for delete-key operations.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.ROLLOVER< /name>
< value> *< /value>
< description>
ACL for rollover-key operations.
If the user is not in the GET ACL, the key material is not returned
as part of the response.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.ROLLOVER< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for rollover-key operations.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.GET< /name>
< value> *< /value>
< description>
ACL for get-key-version and get-current-key operations.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.GET< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for get-key-version and get-current-key operations.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.GET_KEYS< /name>
< value> *< /value>
< description>
ACL for get-keys operation.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.GET_KEYS< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for get-keys operation.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.GET_METADATA< /name>
< value> *< /value>
< description>
ACL for get-key-metadata and get-keys-metadata operations.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.GET_METADATA< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for get-key-metadata and get-keys-metadata operations.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.SET_KEY_MATERIAL< /name>
< value> *< /value>
< description>
Complementary ACL for CREATE and ROLLOVER operations to allow the client
to provide the key material when creating or rolling a key.
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.SET_KEY_MATERIAL< /name>
< value> hdfs,foo< /value>
< description>
Complementary blacklist for CREATE and ROLLOVER operations to prevent the
listed users from providing the key material when creating or rolling a key.
< /description>
< /property>
< property>
< name> hadoop.kms.acl.GENERATE_EEK< /name>
< value> *< /value>
< description>
ACL for generateEncryptedKey
CryptoExtension operations
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.GENERATE_EEK< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for generateEncryptedKey
CryptoExtension operations
< /description>
< /property>
< property>
< name> hadoop.kms.acl.DECRYPT_EEK< /name>
< value> *< /value>
< description>
ACL for decrypt EncryptedKey
CryptoExtension operations
< /description>
< /property>
< property>
< name> hadoop.kms.blacklist.DECRYPT_EEK< /name>
< value> hdfs,foo< /value>
< description>
Blacklist for decrypt EncryptedKey
CryptoExtension operations
< /description>
< /property>
< /configuration>
< / pre > < / div > < / div >
< / section > < section >
< h5 > < a name = "Key_ACLs" > < / a > Key ACLs< / h5 >
< p > KMS supports access control for all non-read operations at the Key level. All Key Access operations are classified as follows:< / p >
< ul >
< li > MANAGEMENT - createKey, deleteKey, rolloverNewVersion< / li >
< li > GENERATE_EEK - generateEncryptedKey, reencryptEncryptedKey, reencryptEncryptedKeys, warmUpEncryptedKeys< / li >
< li > DECRYPT_EEK - decryptEncryptedKey< / li >
< li > READ - getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata, getCurrentKey< / li >
< li > ALL - all of the above< / li >
< / ul >
< p > These can be defined in the KMS < code > etc/hadoop/kms-acls.xml< / code > as follows:< / p >
< p > For all keys for which a key ACL has not been explicitly configured, it is possible to configure a default key ACL for a subset of the operation types.< / p >
< p > It is also possible to configure a “ whitelist” key ACL for a subset of the operation types. The whitelist key ACL grants access to the key, in addition to the explicit or default per-key ACL. That is, if no per-key ACL is explicitly set, a user will be granted access if they are present in the default per-key ACL or the whitelist key ACL. If a per-key ACL is explicitly set, a user will be granted access if they are present in the per-key ACL or the whitelist key ACL.< / p >
< p > If no ACL is configured for a specific key AND no default ACL is configured AND no whitelist key ACL is configured for the requested operation, then access will be DENIED.< / p >
< p > < b > NOTE:< / b > The default and whitelist key ACLs do not support the < code > ALL< / code > operation qualifier.< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> key.acl.testKey1.MANAGEMENT< /name>
< value> *< /value>
< description>
ACL for createKey, deleteKey and rolloverNewVersion operations.
< /description>
< /property>
< property>
< name> key.acl.testKey2.GENERATE_EEK< /name>
< value> *< /value>
< description>
ACL for generateEncryptedKey operations.
< /description>
< /property>
< property>
< name> key.acl.testKey3.DECRYPT_EEK< /name>
< value> admink3< /value>
< description>
ACL for decryptEncryptedKey operations.
< /description>
< /property>
< property>
< name> key.acl.testKey4.READ< /name>
< value> *< /value>
< description>
ACL for getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata,
getCurrentKey operations
< /description>
< /property>
< property>
< name> key.acl.testKey5.ALL< /name>
< value> *< /value>
< description>
ACL for ALL operations.
< /description>
< /property>
< property>
< name> whitelist.key.acl.MANAGEMENT< /name>
< value> admin1< /value>
< description>
whitelist ACL for MANAGEMENT operations for all keys.
< /description>
< /property>
< !--
'testKey3' key ACL is defined. Since a 'whitelist'
key is also defined for DECRYPT_EEK, in addition to
admink3, admin1 can also perform DECRYPT_EEK operations
on 'testKey3'
-->
< property>
< name> whitelist.key.acl.DECRYPT_EEK< /name>
< value> admin1< /value>
< description>
whitelist ACL for DECRYPT_EEK operations for all keys.
< /description>
< /property>
< property>
< name> default.key.acl.MANAGEMENT< /name>
< value> user1,user2< /value>
< description>
default ACL for MANAGEMENT operations for all keys that are not
explicitly defined.
< /description>
< /property>
< property>
< name> default.key.acl.GENERATE_EEK< /name>
< value> user1,user2< /value>
< description>
default ACL for GENERATE_EEK operations for all keys that are not
explicitly defined.
< /description>
< /property>
< property>
< name> default.key.acl.DECRYPT_EEK< /name>
< value> user1,user2< /value>
< description>
default ACL for DECRYPT_EEK operations for all keys that are not
explicitly defined.
< /description>
< /property>
< property>
< name> default.key.acl.READ< /name>
< value> user1,user2< /value>
< description>
default ACL for READ operations for all keys that are not
explicitly defined.
< /description>
< /property>
< / pre > < / div > < / div >
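The resolution order described above (an explicit per-key ACL overrides the default ACL, and the whitelist ACL is additive to either) can be sketched as below. This is a simplified, single-user illustration using the property names from the sample configuration, not the actual `KeyAuthorizationKeyProvider` logic.

```python
def key_access_allowed(conf, key, op, user):
    """Sketch: key-level ACL resolution for one user and one operation."""
    def members(prop):
        raw = conf.get(prop)
        if raw is None:
            return None          # property not configured at all
        return {v.strip() for v in raw.split(",") if v.strip()}

    def matches(acl):
        return acl is not None and ("*" in acl or user in acl)

    per_key = members(f"key.acl.{key}.{op}")
    default = members(f"default.key.acl.{op}")
    whitelist = members(f"whitelist.key.acl.{op}")
    if per_key is not None:
        # Explicit per-key ACL set: default is ignored, whitelist still adds.
        return matches(per_key) or matches(whitelist)
    # No explicit per-key ACL: fall back to default, plus the whitelist.
    return matches(default) or matches(whitelist)

conf = {
    "key.acl.testKey3.DECRYPT_EEK": "admink3",
    "whitelist.key.acl.DECRYPT_EEK": "admin1",
    "default.key.acl.DECRYPT_EEK": "user1,user2",
}
```

With this configuration, `admink3` (per-key) and `admin1` (whitelist) may decrypt with `testKey3`, `user1` may not; for any other key, the default ACL lets `user1` decrypt.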
< / section > < / section > < / section > < section >
< h3 > < a name = "KMS_Delegation_Token_Configuration" > < / a > KMS Delegation Token Configuration< / h3 >
< p > KMS supports delegation tokens to authenticate to the key providers from processes without Kerberos credentials.< / p >
< p > KMS delegation token authentication extends the default Hadoop authentication. As with Hadoop authentication, KMS delegation tokens must not be fetched or renewed using delegation token authentication. See the < a href = "../hadoop-auth/index.html" > Hadoop Auth< / a > page for more details.< / p >
< p > Additionally, KMS delegation token secret manager can be configured with the following properties:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.authentication.delegation-token.update-interval.sec< /name>
< value> 86400< /value>
< description>
How often the master key is rotated, in seconds. Default value 1 day.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.delegation-token.max-lifetime.sec< /name>
< value> 604800< /value>
< description>
Maximum lifetime of a delegation token, in seconds. Default value 7 days.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.delegation-token.renew-interval.sec< /name>
< value> 86400< /value>
< description>
Renewal interval of a delegation token, in seconds. Default value 1 day.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.delegation-token.removal-scan-interval.sec< /name>
< value> 3600< /value>
< description>
Scan interval to remove expired delegation tokens.
< /description>
< /property>
< / pre > < / div > < / div >
< / section > < section >
< h3 > < a name = "High_Availability" > < / a > High Availability< / h3 >
< p > Multiple KMS instances may be used to provide high availability and scalability. Currently there are two approaches to supporting multiple KMS instances: running KMS instances behind a load-balancer/VIP, or using LoadBalancingKMSClientProvider.< / p >
< p > In both approaches, KMS instances must be specially configured to work properly as a single logical service, because requests from the same client may be handled by different KMS instances. In particular, Kerberos Principals Configuration, HTTP Authentication Signature and Delegation Tokens require special attention.< / p > < section >
< h4 > < a name = "Behind_a_Load-Balancer_or_VIP" > < / a > Behind a Load-Balancer or VIP< / h4 >
< p > Because KMS clients and servers communicate via a REST API over HTTP, a load-balancer or VIP may be used to distribute incoming traffic to achieve scalability and HA. In this mode, clients are unaware of the multiple KMS instances on the server side.< / p > < / section > < section >
< h4 > < a name = "Using_LoadBalancingKMSClientProvider" > < / a > Using LoadBalancingKMSClientProvider< / h4 >
< p > An alternative to running multiple KMS instances behind a load-balancer or VIP is to use LoadBalancingKMSClientProvider. Using this approach, a KMS client (for example, an HDFS NameNode) is aware of multiple KMS instances, and it sends requests to them in a round-robin fashion. LoadBalancingKMSClientProvider is implicitly used when more than one URI is specified in < code > hadoop.security.key.provider.path< / code > .< / p >
< p > The following example in < code > core-site.xml< / code > configures two KMS instances, < code > kms01.example.com< / code > and < code > kms02.example.com< / code > . The hostnames are separated by semi-colons, and all KMS instances must run on the same port.< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.security.key.provider.path< /name>
< value> kms://https@kms01.example.com;kms02.example.com:9600/kms< /value>
< description>
The KeyProvider to use when interacting with encryption keys used
when reading and writing to an encryption zone.
< /description>
< /property>
< / pre > < / div > < / div >
< p > If a request to a KMS instance fails, clients retry with the next instance. A request is reported as failed only if all instances fail.< / p > < / section > < section >
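The round-robin-with-failover behavior can be sketched as below. This is a toy model, not the real LoadBalancingKMSClientProvider (which also applies retry policies and warm-up); the class and provider names are illustrative.

```python
import itertools

class LoadBalancingClientSketch:
    """Toy model of round-robin failover across KMS instances:
    try each instance in turn, fail only when all instances fail."""

    def __init__(self, providers):
        self.providers = providers
        self._next = itertools.cycle(range(len(providers)))

    def call(self, request):
        start = next(self._next)  # round-robin starting point
        errors = []
        for i in range(len(self.providers)):
            provider = self.providers[(start + i) % len(self.providers)]
            try:
                return provider(request)
            except Exception as e:  # sketch only: real code is more selective
                errors.append(e)
        raise RuntimeError(f"all {len(self.providers)} KMS instances failed: {errors}")

def kms01(req):                 # stands in for kms01.example.com, which is down
    raise ConnectionError("kms01 down")

def kms02(req):                 # stands in for a healthy kms02.example.com
    return f"ok:{req}"

client = LoadBalancingClientSketch([kms01, kms02])
```

A call that starts at the failed instance transparently succeeds against the next one; only if every instance raised would the client report a failure.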
< h4 > < a name = "HTTP_Kerberos_Principals_Configuration" > < / a > HTTP Kerberos Principals Configuration< / h4 >
< p > When KMS instances are behind a load-balancer or VIP, clients will use the hostname of the VIP. For Kerberos SPNEGO authentication, the hostname of the URL is used to construct the Kerberos service name of the server, < code > HTTP/#HOSTNAME#< / code > . This means that all KMS instances must have a Kerberos service name with the load-balancer or VIP hostname.< / p >
< p > In order to access a specific KMS instance directly, the KMS instance must also have a Kerberos service name with its own hostname. This is required for monitoring and administration purposes.< / p >
< p > Both Kerberos service principal credentials (for the load-balancer/VIP hostname and for the actual KMS instance hostname) must be in the keytab file configured for authentication, and the principal name specified in the configuration must be < code > *< / code > . For example:< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.authentication.kerberos.principal< /name>
< value> *< /value>
< /property>
< / pre > < / div > < / div >
< p > < b > NOTE:< / b > If using HTTPS, the SSL certificate used by the KMS instance must be configured to support multiple hostnames (see Java 7 < code > keytool< / code > SAN extension support for details on how to do this).< / p > < / section > < section >
< h4 > < a name = "HTTP_Authentication_Signature" > < / a > HTTP Authentication Signature< / h4 >
< p > KMS uses Hadoop Authentication for HTTP authentication. Hadoop Authentication issues a signed HTTP Cookie once the client has authenticated successfully. This HTTP Cookie has an expiration time, after which it will trigger a new authentication sequence. This is done to avoid triggering the authentication on every HTTP request of a client.< / p >
< p > A KMS instance must verify the HTTP Cookie signatures signed by other KMS instances. To do this, all KMS instances must share the signing secret. Please see < a href = "../hadoop-auth/Configuration.html#SignerSecretProvider_Configuration" > SignerSecretProvider Configuration< / a > for detailed description and configuration examples. Note that KMS configurations need to be prefixed with < code > hadoop.kms.authentication< / code > , as shown in the example below.< / p >
< p > This secret sharing can be done using a ZooKeeper service, which is configured in KMS with the following properties in < code > kms-site.xml< / code > :< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.authentication.signer.secret.provider< /name>
< value> zookeeper< /value>
< description>
Indicates how the secret to sign the authentication cookies will be
stored. Options are 'random' (default), 'file' and 'zookeeper'.
If using a setup with multiple KMS instances, 'zookeeper' should be used.
If using file, signature.secret.file should be configured and point to the secret file.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.signer.secret.provider.zookeeper.path< /name>
< value> /hadoop-kms/hadoop-auth-signature-secret< /value>
< description>
The Zookeeper ZNode path where the KMS instances will store and retrieve
the secret from. All KMS instances that need to coordinate should point to the same path.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string< /name>
< value> #HOSTNAME#:#PORT#,...< /value>
< description>
The ZooKeeper connection string, a comma-separated list of hostnames
and ports.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type< /name>
< value> sasl< /value>
< description>
The Zookeeper authentication type, 'none' (default) or 'sasl' (Kerberos).
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab< /name>
< value> /etc/hadoop/conf/kms.keytab< /value>
< description>
The absolute path for the Kerberos keytab with the credentials to
connect to Zookeeper.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal< /name>
< value> kms/#HOSTNAME#< /value>
< description>
The Kerberos service principal used to connect to Zookeeper.
< /description>
< /property>
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Delegation_Tokens" > < / a > Delegation Tokens< / h4 >
< p > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation tokens too. Under HA, every KMS instance must verify the delegation token given by another KMS instance. To do this, all the KMS instances must use ZKDelegationTokenSecretManager to retrieve the TokenIdentifiers and DelegationKeys from ZooKeeper.< / p >
< p > Sample configuration in < code > etc/hadoop/kms-site.xml< / code > :< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.enable< /name>
< value> true< /value>
< description>
If true, Hadoop KMS uses ZKDelegationTokenSecretManager to persist
TokenIdentifiers and DelegationKeys in ZooKeeper.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString< /name>
< value> #HOSTNAME#:#PORT#,...< /value>
< description>
The ZooKeeper connection string, a comma-separated list of hostnames and ports.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath< /name>
< value> /hadoop-kms/zkdtsm< /value>
< description>
The ZooKeeper znode path where the KMS instances will store and retrieve
the secret from. All the KMS instances that need to coordinate should point to the same path.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType< /name>
< value> sasl< /value>
< description>
The ZooKeeper authentication type, 'none' (default) or 'sasl' (Kerberos).
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab< /name>
< value> /etc/hadoop/conf/kms.keytab< /value>
< description>
The absolute path for the Kerberos keytab with the credentials to
connect to ZooKeeper. This parameter is effective only when
hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType is set to 'sasl'.
< /description>
< /property>
< property>
< name> hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal< /name>
< value> kms/#HOSTNAME#< /value>
< description>
The Kerberos service principal used to connect to ZooKeeper.
This parameter is effective only when
hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType is set to 'sasl'.
< /description>
< /property>
< / pre > < / div > < / div >
< / section > < / section > < section >
< h3 > < a name = "KMS_HTTP_REST_API" > < / a > KMS HTTP REST API< / h3 > < section >
< h4 > < a name = "Create_a_Key" > < / a > Create a Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/keys
Content-Type: application/json
{
" name" : " < key-name> " ,
" cipher" : " < cipher> " ,
" length" : < length> , //int
" material" : " < material> " , //base64
" description" : " < description> "
}
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 201 CREATED
LOCATION: http://HOST:PORT/kms/v1/key/< key-name>
Content-Type: application/json
{
" name" : " versionName" ,
" material" : " < material> " , //base64, not present without GET ACL
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Rollover_Key" > < / a > Rollover Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/key/< key-name>
Content-Type: application/json
{
" material" : " < material> " ,
}
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" name" : " versionName" ,
" material" : " < material> " , //base64, not present without GET ACL
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Invalidate_Cache_of_a_Key" > < / a > Invalidate Cache of a Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/key/< key-name> /_invalidatecache
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Delete_Key" > < / a > Delete Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > DELETE http://HOST:PORT/kms/v1/key/< key-name>
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Get_Key_Metadata" > < / a > Get Key Metadata< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/key/< key-name> /_metadata
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" name" : " < key-name> " ,
" cipher" : " < cipher> " ,
" length" : < length> , //int
" description" : " < description> " ,
" created" : < millis-epoc> , //long
" versions" : < versions> //int
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Get_Current_Key" > < / a > Get Current Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/key/< key-name> /_currentversion
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" name" : " versionName" ,
" material" : " < material> " , //base64
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Generate_Encrypted_Key_for_Current_KeyVersion" > < / a > Generate Encrypted Key for Current KeyVersion< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/key/< key-name> /_eek?eek_op=generate& num_keys=< number-of-keys-to-generate>
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
[
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " , //base64
}
},
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " , //base64
}
},
...
]
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Decrypt_Encrypted_Key" > < / a > Decrypt Encrypted Key< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/keyversion/< version-name> /_eek?eek_op=decrypt
Content-Type: application/json
{
" name" : " < key-name> " ,
" iv" : " < iv> " , //base64
" material" : " < material> " , //base64
}
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" name" : " EK" ,
" material" : " < material> " , //base64
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Re-encrypt_Encrypted_Key_With_The_Latest_KeyVersion" > < / a > Re-encrypt Encrypted Key With The Latest KeyVersion< / h4 >
< p > This command takes a previously generated encrypted key, and re-encrypts it using the latest KeyVersion encryption key in the KeyProvider. If the latest KeyVersion is the same as the one used to generate the encrypted key, the same encrypted key is returned.< / p >
< p > This is usually useful after a < a href = "#Rollover_Key" > Rollover< / a > of an encryption key. Re-encrypting the encrypted key will allow it to be encrypted using the latest version of the encryption key, but still with the same key material and initialization vector.< / p >
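The re-encrypt flow (decrypt the EEK with the old key version, then re-encrypt the same underlying key material with the latest version) can be sketched with a toy XOR "cipher". This stands in for the real AES-based crypto only to show the data flow; `KeyProviderSketch` and `xor` are illustrative names, not Hadoop APIs.

```python
import secrets

def xor(data, key):
    """Toy symmetric 'cipher' (XOR), standing in for the real cipher."""
    return bytes(d ^ k for d, k in zip(data, key))

class KeyProviderSketch:
    def __init__(self):
        self.versions = [secrets.token_bytes(16)]  # index = key version

    def roll(self):
        self.versions.append(secrets.token_bytes(16))

    def reencrypt(self, eek, version):
        # If the EEK was already made with the latest version, return it as-is.
        latest = len(self.versions) - 1
        if version == latest:
            return eek, version
        dek = xor(eek, self.versions[version])          # decrypt with old EK
        return xor(dek, self.versions[latest]), latest  # encrypt with new EK

kp = KeyProviderSketch()
dek = secrets.token_bytes(16)          # the key material being protected
eek0 = xor(dek, kp.versions[0])        # EEK under key version 0
kp.roll()                              # rollover: latest version is now 1
eek1, v1 = kp.reencrypt(eek0, 0)       # re-protect under the latest version
```

After the round trip the protected key material is unchanged; only the encryption key version wrapping it has moved to the latest.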
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/keyversion/< version-name> /_eek?eek_op=reencrypt
Content-Type: application/json
{
" name" : " < key-name> " ,
" iv" : " < iv> " , //base64
" material" : " < material> " //base64
}
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " //base64
}
}
< / pre > < / div > < / div >
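< p > The invariant described above (a new encrypted form, same underlying key material) can be illustrated with a toy XOR cipher in Python; this is only a sketch of the property, not the cipher the KMS actually uses:< / p >

```python
# Toy sketch (not the KMS cipher): re-encrypting an EEK under a newer key
# version changes the EEK bytes but leaves the underlying EK unchanged.
def xor(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

ek = bytes(range(16))            # underlying encryption key (EK)
kv1 = b"\x11" * 16               # old key version material
kv2 = b"\x22" * 16               # new key version material (after rollover)

eek_v1 = xor(ek, kv1)                 # EEK under the old version
eek_v2 = xor(xor(eek_v1, kv1), kv2)   # re-encrypt: decrypt with old, encrypt with new

assert eek_v1 != eek_v2          # the encrypted form changed...
assert xor(eek_v2, kv2) == ek    # ...but the EK it protects did not
```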
< / section > < section >
< h4 > < a name = "Batch_Re-encrypt_Encrypted_Keys_With_The_Latest_KeyVersion" > < / a > Batch Re-encrypt Encrypted Keys With The Latest KeyVersion< / h4 >
< p > Batched version of the above re-encrypt Encrypted Key command. It takes a list of previously generated encrypted keys, re-encrypts each one using the latest KeyVersion encryption key in the KeyProvider, and returns the re-encrypted keys in the same order. For each encrypted key, if the latest KeyVersion is the same as the one used to generate it, no action is taken and the same encrypted key is returned.< / p >
< p > This is usually useful after a < a href = "#Rollover_Key" > Rollover< / a > of an encryption key. Re-encryption protects the encrypted keys with the latest version of the encryption key, while the underlying key material and initialization vectors remain the same.< / p >
< p > All encrypted keys in a batch request must be under the same encryption key name, but may be under different versions of that encryption key.< / p >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > POST http://HOST:PORT/kms/v1/key/< key-name> /_reencryptbatch
Content-Type: application/json
[
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " //base64
}
},
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " //base64
}
},
...
]
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
[
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " //base64
}
},
{
" versionName" : " < encryptionVersionName> " ,
" iv" : " < iv> " , //base64
" encryptedKeyVersion" : {
" versionName" : " EEK" ,
" material" : " < material> " //base64
}
},
...
]
< / pre > < / div > < / div >
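< p > The same-key-name precondition for a batch can be sketched in Python. This assumes key version names of the form key-name@version (e.g. mykey@0), which is how Hadoop names key versions; the key names below are placeholders:< / p >

```python
# Sketch of the batch precondition: every entry must belong to the same
# encryption key, though entries may reference different key versions.
def check_same_key(eeks):
    """Raise if the EEK list spans more than one encryption key name."""
    names = {e["versionName"].split("@")[0] for e in eeks}
    if len(names) != 1:
        raise ValueError(f"batch spans multiple keys: {sorted(names)}")

# Two versions of the same key: allowed.
check_same_key([
    {"versionName": "mykey@0"},
    {"versionName": "mykey@1"},
])
```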
< / section > < section >
< h4 > < a name = "Get_Key_Version" > < / a > Get Key Version< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/keyversion/< version-name>
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
{
" name" : " < name> " ,
" versionName" : " < version> " ,
" material" : " < material> " //base64
}
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Get_Key_Versions" > < / a > Get Key Versions< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/key/< key-name> /_versions
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
[
{
" name" : " < name> " ,
" versionName" : " < version> " ,
" material" : " < material> " //base64
},
{
" name" : " < name> " ,
" versionName" : " < version> " ,
" material" : " < material> " //base64
},
...
]
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Get_Key_Names" > < / a > Get Key Names< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/keys/names
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
[
" < key-name> " ,
" < key-name> " ,
...
]
< / pre > < / div > < / div >
< / section > < section >
< h4 > < a name = "Get_Keys_Metadata" > < / a > Get Keys Metadata< / h4 >
< p > < i > REQUEST:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > GET http://HOST:PORT/kms/v1/keys/metadata?key=< key-name> &key=< key-name> ,...
< / pre > < / div > < / div >
< p > < i > RESPONSE:< / i > < / p >
< div class = "source" >
< div class = "source" >
< pre > 200 OK
Content-Type: application/json
[
{
" name" : " < key-name> " ,
" cipher" : " < cipher> " ,
" length" : < length> , //int
" description" : " < description> " ,
" created" : < millis-epoch> , //long
" versions" : < versions> //int
},
{
" name" : " < key-name> " ,
" cipher" : " < cipher> " ,
" length" : < length> , //int
" description" : " < description> " ,
" created" : < millis-epoch> , //long
" versions" : < versions> //int
},
...
]
< / pre > < / div > < / div >
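< p > Note that the key parameter repeats once per requested key name. A small Python sketch of building the query string; the host and key names are placeholders:< / p >

```python
from urllib.parse import urlencode

def keys_metadata_url(base, key_names):
    """URL for GET /kms/v1/keys/metadata, one key= parameter per name."""
    query = urlencode([("key", name) for name in key_names])
    return f"{base}/kms/v1/keys/metadata?{query}"

# Placeholder host and key names for illustration only.
url = keys_metadata_url("http://kms.example.com:9600", ["key1", "key2"])
print(url)  # ...?key=key1&key=key2
```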
< / section > < / section > < section >
< h3 > < a name = "Deprecated_Environment_Variables" > < / a > Deprecated Environment Variables< / h3 >
< p > The following environment variables are deprecated. Set the corresponding configuration properties instead.< / p >
< table border = "0" class = "bodyTable" >
< thead >
< tr class = "a" >
< th > Environment Variable < / th >
< th > Configuration Property < / th >
< th > Configuration File< / th > < / tr >
< / thead > < tbody >
< tr class = "b" >
< td > KMS_TEMP < / td >
< td > hadoop.http.temp.dir < / td >
< td > kms-site.xml< / td > < / tr >
< tr class = "a" >
< td > KMS_HTTP_PORT < / td >
< td > hadoop.kms.http.port < / td >
< td > kms-site.xml< / td > < / tr >
< tr class = "b" >
< td > KMS_MAX_HTTP_HEADER_SIZE < / td >
< td > hadoop.http.max.request.header.size and hadoop.http.max.response.header.size < / td >
< td > kms-site.xml< / td > < / tr >
< tr class = "a" >
< td > KMS_MAX_THREADS < / td >
< td > hadoop.http.max.threads < / td >
< td > kms-site.xml< / td > < / tr >
< tr class = "b" >
< td > KMS_SSL_ENABLED < / td >
< td > hadoop.kms.ssl.enabled < / td >
< td > kms-site.xml< / td > < / tr >
< tr class = "a" >
< td > KMS_SSL_KEYSTORE_FILE < / td >
< td > ssl.server.keystore.location < / td >
< td > ssl-server.xml< / td > < / tr >
< tr class = "b" >
< td > KMS_SSL_KEYSTORE_PASS < / td >
< td > ssl.server.keystore.password < / td >
< td > ssl-server.xml< / td > < / tr >
< / tbody >
< / table > < / section > < section >
< h3 > < a name = "Default_HTTP_Services" > < / a > Default HTTP Services< / h3 >
< table border = "0" class = "bodyTable" >
< thead >
< tr class = "a" >
< th > Name < / th >
< th > Description< / th > < / tr >
< / thead > < tbody >
< tr class = "b" >
< td > /conf < / td >
< td > Display configuration properties< / td > < / tr >
< tr class = "a" >
< td > /jmx < / td >
< td > Java JMX management interface< / td > < / tr >
< tr class = "b" >
< td > /logLevel < / td >
< td > Get or set log level per class< / td > < / tr >
< tr class = "a" >
< td > /logs < / td >
< td > Display log files< / td > < / tr >
< tr class = "b" >
< td > /stacks < / td >
< td > Display JVM stacks< / td > < / tr >
< tr class = "a" >
< td > /static/index.html < / td >
< td > The static home page< / td > < / tr >
< tr class = "b" >
< td > /prof < / td >
< td > Async Profiler endpoint< / td > < / tr >
< / tbody >
< / table >
< p > To control access to the < code > /conf< / code > , < code > /jmx< / code > , < code > /logLevel< / code > , < code > /logs< / code > , < code > /stacks< / code > and < code > /prof< / code > servlets, configure the following properties in < code > kms-site.xml< / code > :< / p >
< div class = "source" >
< div class = "source" >
< pre > < property>
< name> hadoop.security.authorization< /name>
< value> true< /value>
< description> Is service-level authorization enabled?< /description>
< /property>
< property>
< name> hadoop.security.instrumentation.requires.admin< /name>
< value> true< /value>
< description>
Indicates if administrator ACLs are required to access
instrumentation servlets (JMX, METRICS, CONF, STACKS, PROF).
< /description>
< /property>
< property>
< name> hadoop.kms.http.administrators< /name>
< value> < /value>
< description> ACL for the admins, this configuration is used to control
who can access the default KMS servlets. The value should be a comma
separated list of users and groups. The user list comes first and is
separated by a space followed by the group list,
e.g. " user1,user2 group1,group2" . Both users and groups are optional,
so " user1" , " group1" , " " , " user1 group1" , " user1,user2 group1,group2"
are all valid (note the leading space in " group1" ). '*' grants access
to all users and groups, e.g. '*', '* ' and ' *' are all valid.
< /description>
< /property>
< / pre > < / div > < / div > < / section > < / section >
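< p > The ACL value format described above can be sketched with a small parser: an optional comma-separated user list, a space, then an optional comma-separated group list, with * matching everyone. This is illustrative only; the authoritative behavior is Hadoop's AccessControlList class.< / p >

```python
# Illustrative parse of the ACL format above: optional comma-separated users,
# a space, then optional comma-separated groups; "*" matches everyone.
def parse_acl(value):
    users_part, _, groups_part = value.partition(" ")
    users = [u for u in users_part.split(",") if u]
    groups = [g for g in groups_part.split(",") if g]
    return users, groups

assert parse_acl("user1,user2 group1,group2") == (["user1", "user2"], ["group1", "group2"])
assert parse_acl(" group1") == ([], ["group1"])  # leading space: groups only
assert parse_acl("*") == (["*"], [])             # wildcard grants all access
```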
< / div >
< / div >
< div class = "clear" >
< hr / >
< / div >
< div id = "footer" >
< div class = "xright" >
© 2008-2023
Apache Software Foundation
- < a href = "http://maven.apache.org/privacy-policy.html" > Privacy Policy< / a > .
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
< / div >
< div class = "clear" >
< hr / >
< / div >
< / div >
< / body >
< / html >