---
id: k8s-jobs
title: "MM-less Druid in K8s"
---

Apache Druid extension to enable using Kubernetes for launching and managing tasks instead of Middle Managers. This extension lets you launch tasks as Kubernetes Jobs, removing the need for a Middle Manager.

Consider this an EXPERIMENTAL feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.

## How it works

The K8s extension takes the `podSpec` of your Overlord pod and creates a Kubernetes Job from it. This means that if you have sidecars such as Splunk or Istio, it can optionally launch a task as a K8s Job with the same sidecars. All jobs are natively restorable: they are decoupled from the Druid deployment, so restarting pods or doing upgrades has no effect on tasks in flight. The tasks continue to run, and when the Overlord comes back up it starts tracking them again.

## Configuration

To use this extension, make sure to include `druid-kubernetes-overlord-extensions` in the extensions load list for your Overlord process.

The extension uses the task queue to limit how many concurrent tasks (K8s Jobs) are in flight, so it is required that you set `druid.indexer.queue.maxSize` to a reasonable value. Additionally, set `druid.indexer.runner.namespace` to the namespace in which you are running Druid.

Other required configurations are `druid.indexer.runner.type: k8s` and `druid.indexer.task.encapsulatedTask: true`.

You can add optional labels to your K8s Jobs/pods if you need them by using `druid.indexer.runner.labels: '{"key":"value"}'`. Annotations work the same way with `druid.indexer.runner.annotations: '{"key":"value"}'`.

All other configurations you had for Middle Manager tasks must be moved under the Overlord, with one caveat: you must specify the Java options as an array via `druid.indexer.runner.javaOptsArray`; `druid.indexer.runner.javaOpts` is no longer supported.

If you are running without a Middle Manager, you also need to set `druid.processing.intermediaryData.storage.type=deepstore`.
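
Putting these together, a minimal sketch of the relevant Overlord `runtime.properties` might look like the following. The namespace, queue size, and heap settings are illustrative placeholders, and your extensions load list would normally contain your other extensions as well:

```properties
# Load the extension on the Overlord (alongside whatever you already load)
druid.extensions.loadList=["druid-kubernetes-overlord-extensions"]

# Launch tasks as K8s Jobs instead of forking peons on Middle Managers
druid.indexer.runner.type=k8s
druid.indexer.task.encapsulatedTask=true

# Namespace in which Druid is running (placeholder value)
druid.indexer.runner.namespace=druid

# Caps how many tasks, and therefore K8s Jobs, can be in flight (illustrative value)
druid.indexer.queue.maxSize=10

# Peon JVM options must be supplied as an array
druid.indexer.runner.javaOptsArray=["-Xmx1g", "-Xms1g"]

# Required when running without Middle Managers
druid.processing.intermediaryData.storage.type=deepstore
```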

## Additional Configuration

### Properties

|Property|Possible Values|Description|Default|Required|
|--------|---------------|-----------|-------|--------|
|`druid.indexer.runner.debugJobs`|`boolean`|Clean up K8s jobs after tasks complete.|False|No|
|`druid.indexer.runner.sidecarSupport`|`boolean`|If your Overlord pod has sidecars, this will attempt to start the task with the same sidecars as the Overlord pod.|False|No|
|`druid.indexer.runner.kubexitImage`|`String`|Image from the kubexit project, used to help shut down sidecars when the main pod completes. Otherwise jobs with sidecars never terminate.|karlkfi/kubexit:latest|No|
|`druid.indexer.runner.disableClientProxy`|`boolean`|Use this if you have a global http(s) proxy and you wish to bypass it.|false|No|
|`druid.indexer.runner.maxTaskDuration`|`Duration`|Max time a task is allowed to run before getting killed.|PT4H|No|
|`druid.indexer.runner.taskCleanupDelay`|`Duration`|How long jobs stay around before getting reaped from K8s.|P2D|No|
|`druid.indexer.runner.taskCleanupInterval`|`Duration`|How often to check for jobs to be reaped.|PT10M|No|
|`druid.indexer.runner.K8sjobLaunchTimeout`|`Duration`|How long to wait to launch a K8s task before marking it as failed; on a resource-constrained cluster it may take some time.|PT1H|No|
|`druid.indexer.runner.javaOptsArray`|`JsonArray`|Java options for the task.|`-Xmx1g`|No|
|`druid.indexer.runner.labels`|`JsonObject`|Additional labels you want to add to the peon pods.|`{}`|No|
|`druid.indexer.runner.annotations`|`JsonObject`|Additional annotations you want to add to the peon pods.|`{}`|No|
|`druid.indexer.runner.graceTerminationPeriodSeconds`|`Long`|Number of seconds to wait after a SIGTERM for container lifecycle hooks to complete. Keep at a smaller value if you want tasks to hold locks for shorter periods.|PT30S (K8s default)|No|
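
As a further illustration, a few of these optional settings might look like the following in the Overlord `runtime.properties`; the durations, label, and annotation values are made up for the example:

```properties
# Reap finished jobs after one day instead of the default two
druid.indexer.runner.taskCleanupDelay=P1D

# Kill tasks that run longer than six hours
druid.indexer.runner.maxTaskDuration=PT6H

# Example labels and annotations stamped onto the peon pods
druid.indexer.runner.labels={"team":"data-platform"}
druid.indexer.runner.annotations={"contact":"druid-oncall"}
```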

## Gotchas

- Your role must grant the ability to launch Jobs.
- All Druid pods belonging to one Druid cluster must be inside the same Kubernetes namespace.
- For sidecar support to work, your entry point / command in Docker must be explicitly defined in your spec.

You can't have something like this in your Dockerfile: `ENTRYPOINT ["foo.sh"]`

and in your sidecar specs:

        name: foo
        args: 
           - arg1
           - arg2 

That will not work, because the extension cannot decipher what your command is; it needs to know it explicitly. Even for sidecars like Istio, which are dynamically injected by the service mesh, this needs to happen.

Instead, do the following: you can keep your Dockerfile the same, but you must have a sidecar spec like so:

        name: foo
        command:
          - foo.sh
        args:
          - arg1
          - arg2

The following roles must also be accessible. An example spec could be:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: druid-cluster
    rules:
    - apiGroups:
      - ""
      - batch          # Jobs live in the "batch" API group
      resources:
      - pods
      - configmaps
      - jobs
      verbs:
      - '*'
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: druid-cluster
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: default   # set to the namespace in which Druid is running
    roleRef:
      kind: Role
      name: druid-cluster
      apiGroup: rbac.authorization.k8s.io