Ozone Distribution

This folder contains the project to create the binary Ozone distribution and provides all the helper scripts and docker files to start it locally or in a cluster.
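
The testing workflows below assume a full distribution build has already been produced. From the project root this typically looks something like the following (the exact profiles and flags may vary between branches, so check the top-level build documentation for the authoritative command):

mvn clean package -DskipTests -Pdist -Dtar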

Testing with local docker based cluster

After a full dist build you can find multiple docker-compose based cluster definitions in the target/ozone-*/compose folder.

Please check the README files there.

Usually you can start the cluster with:

cd compose/ozone
docker-compose up -d
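
A few follow-up commands are often useful once the cluster is up (all are standard docker-compose subcommands):

docker-compose ps        # list the running containers
docker-compose logs -f   # follow the logs of all services
docker-compose down      # stop and remove the cluster when you are done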

Testing on Kubernetes

You can also test the ozone cluster in kubernetes. If you have no active kubernetes cluster, you can start a local one with minikube:

minikube start
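
Before deploying anything you can check that the local cluster is ready (minikube configures kubectl automatically):

kubectl get nodes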

For testing in kubernetes you need to:

  1. Create a docker image with the new build
  2. Upload it to a docker registry
  3. Deploy the cluster by applying the kubernetes resources

The easiest way to do all these steps is to use the skaffold tool. After installing skaffold, you can execute

skaffold run

in this (hadoop-ozone/dist) folder.
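
If you prefer to do the three steps manually, a rough equivalent is sketched below (the registry, image name and tag are placeholders; make sure the kubernetes resources reference the image you push):

# build the dev image from the Dockerfile in this folder
docker build -t registry.example.com/ozone:dev .
# push it to a registry that your kubernetes cluster can reach
docker push registry.example.com/ozone:dev
# apply the kubernetes resources
kubectl apply -f src/main/k8s/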

The default kubernetes resource set (src/main/k8s/) contains NodePort based service definitions for the Ozone Manager, the Storage Container Manager and the S3 gateway.
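
Once the resources are applied you can list the exposed services and their node ports with:

kubectl get services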

With minikube you can access the services with:

minikube service s3g-public
minikube service om-public
minikube service scm-public
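
If you only need the endpoint address (for example to point curl or an S3 client at the gateway), minikube can print the URL instead of opening a browser:

minikube service s3g-public --url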

Monitoring

Apache Hadoop Ozone supports Prometheus out of the box: it contains a Prometheus-compatible exporter servlet. To start the monitoring you need to deploy Prometheus in your kubernetes cluster:

cd src/main/k8s/prometheus
kubectl apply -f .
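
You can verify that the Prometheus pod is running before opening the UI:

kubectl get pods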

The Prometheus UI can also be accessed via a NodePort service:

minikube service prometheus-public

Notes on the Kubernetes setup

Please note that the provided kubernetes resources are not suitable for production:

  1. There is no security setup
  2. The datanodes are started in a StatefulSet instead of a DaemonSet, to make it possible to scale them up on a one-node minikube cluster (see the scaling example below)
  3. All the UI pages are published with NodePort services
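
For example, on a one-node minikube cluster you can scale the datanodes up like this (the StatefulSet name is assumed here; check the resource files for the actual name):

kubectl scale statefulset datanode --replicas=3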