

---
title: Running concurrently with HDFS
linktitle: Running with HDFS
weight: 1
summary: Ozone is designed to run concurrently with HDFS. This page explains how to deploy Ozone in an existing HDFS cluster.
---

Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster.

The container manager part of Ozone can run inside DataNodes as a pluggable module or as a standalone component. This document describes how to start it as an HDFS DataNode plugin.

To activate Ozone, you should define the service plugin implementation class.

Important: This property must be added to hdfs-site.xml, because the plugin is activated as part of the normal HDFS DataNode bootstrap.

{{< highlight xml >}}
<property>
   <name>dfs.datanode.plugins</name>
   <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
</property>
{{< /highlight >}}

You also need to add the ozone-datanode-plugin jar file to the classpath:

{{< highlight bash >}}
export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
{{< /highlight >}}

To start Ozone with HDFS you should start the following components:

  1. HDFS Namenode (from Hadoop distribution)
  2. HDFS Datanode (from the Hadoop distribution with the plugin on the classpath from the Ozone distribution)
  3. Ozone Manager (from the Ozone distribution)
  4. Storage Container Manager (from the Ozone distribution)
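
The startup sequence above can be sketched with the standard launcher scripts that ship with the Hadoop and Ozone distributions. This is a sketch, not a definitive procedure: the jar path is the illustrative one used earlier on this page, and the scripts assume `HADOOP_HOME`/`OZONE_HOME` style installations with daemons configured for your cluster.

{{< highlight bash >}}
# 1. HDFS NameNode (from the Hadoop distribution)
hdfs --daemon start namenode

# 2. HDFS DataNode, with the Ozone plugin jar on the classpath
#    (path is illustrative; adjust for your installation)
export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
hdfs --daemon start datanode

# 3. Ozone Manager (from the Ozone distribution)
ozone --daemon start om

# 4. Storage Container Manager (from the Ozone distribution)
ozone --daemon start scm
{{< /highlight >}}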

Check the DataNode log to verify that the HDDS/Ozone plugin has started. The log should contain an entry like this:

{{< highlight text >}}
2018-09-17 16:19:24 INFO  HddsDatanodeService:158 - Started plug-in org.apache.hadoop.ozone.web.OzoneHddsDatanodeService@6f94fb9d
{{< /highlight >}}

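A quick way to check for this entry without scrolling through the whole log is a grep along these lines. The log file name pattern is an assumption; adjust `HADOOP_LOG_DIR` and the pattern for your installation.

{{< highlight bash >}}
# Assumes the default Hadoop log naming scheme; adjust as needed.
grep "Started plug-in" "$HADOOP_LOG_DIR"/hadoop-*-datanode-*.log
{{< /highlight >}}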
Note: The current version of Ozone is tested with Hadoop 3.1.