Containers solved the problem of moving software from one environment to another by encapsulating all of an application's dependencies. However, an orchestration platform is needed to manage containers at scale. [Kubernetes](https://kubernetes.io/) is a popular open-source solution that uses declarative configuration to specify the desired state of the application. Configuring and deploying an application on Kubernetes is often accomplished with YAML files that define the state and command-line tools that manage and control the Kubernetes API. This article demonstrates how to use infrastructure as code to create [basic Kubernetes objects](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects) and higher-level abstractions that build upon the basic objects.
This is the first article of a multi-part series on deploying Kubernetes and applications using infrastructure as code. We'll take a top-down approach, starting from clusters and working up to high-level abstractions such as deployments. This article aims to provide a step-by-step example of deploying an application in Kubernetes using infrastructure as code. For this example, we'll use TypeScript as the programming language and provide sample code for AWS, Azure, GCP, and Kubernetes.
## Clusters and Nodes
A cluster is formed with a [control plane](https://kubernetes.io/docs/concepts/#kubernetes-control-plane) and a collection of [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/), the smallest unit of computing in Kubernetes. A node can be either a physical or virtual machine that contains the necessary [components](https://kubernetes.io/docs/concepts/overview/components/#node-components) to run containers. A cluster has one or more nodes designated to the control plane, which controls the worker nodes where the application containers are deployed. A control plane node runs:
- kube-apiserver, which exposes the Kubernetes API and handles all communication with the cluster
- kube-controller-manager, which runs the controller processes that govern the cluster
- etcd, the key-value database that stores the cluster state
- kube-scheduler, which watches for newly created pods and assigns them to worker nodes
To create a Kubernetes cluster on a cloud provider, we need a virtual network (a VPC on AWS) to host the nodes that make up the cluster. Each cloud provider requires configuration specific to its implementation, and we'll cover each provider's particular requirements in turn.
On AWS, we declare a VPC to host our Kubernetes cluster and specify a public subnet, which is the gateway to the kube-apiserver. We create the Kubernetes cluster with the VPC we declared and the VPC's default public subnets. The `desiredCapacity` parameter sets the desired number of EC2 `t2.medium` nodes. We also export the [*kubeconfig*](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file, which we can use with *kubectl* to communicate with our Kubernetes cluster.
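The code for this step isn't shown above, so here is a minimal sketch of what it might look like, assuming the `@pulumi/awsx` and `@pulumi/eks` packages (resource names such as `eks-vpc` and `eks-cluster` are illustrative, and property names may vary across package versions):

```typescript
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// VPC with public subnets; the public subnet serves as the
// gateway to the kube-apiserver endpoint.
const vpc = new awsx.ec2.Vpc("eks-vpc", {
    subnets: [{ type: "public" }],
});

// EKS cluster placed in the VPC's public subnets, scaled to
// two t2.medium worker nodes via desiredCapacity.
const cluster = new eks.Cluster("eks-cluster", {
    vpcId: vpc.id,
    subnetIds: vpc.publicSubnetIds,
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
    instanceType: "t2.medium",
});

// Kubeconfig that kubectl can use to talk to the cluster.
export const kubeconfig = cluster.kubeconfig;
```

Once `pulumi up` finishes, the kubeconfig can be captured with `pulumi stack output kubeconfig > kubeconfig.yml` and passed to *kubectl* via its `--kubeconfig` flag.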
In Azure, we have to create a *service principal*, which is an identity for accessing Azure resources. Note that in this example, the service principal password is stored as a secret in the project config file. Azure allocates the virtual network when creating a Kubernetes cluster based on the values set in the config file; for example, if these are not set, the cluster defaults to two nodes using the Standard_D2_v2 virtual machine size. We start by creating a resource group to hold the cluster's resources:

```typescript
import * as azure from "@pulumi/azure";

// Resource group that will contain the AKS cluster.
// `location` is assumed to come from the stack configuration.
export const resourceGroup = new azure.core.ResourceGroup("aks", { location });
```

We also export the [*kubeconfig*](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file, which we can use with *kubectl* to communicate with our Kubernetes cluster.
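Here is a minimal sketch of how the service principal and the AKS cluster itself might be declared, reusing the `resourceGroup` from above and assuming the `@pulumi/azuread` and `@pulumi/azure` packages (property names such as `defaultNodePool` vary across provider versions, and newer `@pulumi/azuread` releases generate the service principal password rather than accepting a `value`):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";
import * as azuread from "@pulumi/azuread";

const config = new pulumi.Config();
const nodeCount = config.getNumber("nodeCount") || 2;
const nodeSize = config.get("nodeSize") || "Standard_D2_v2";
// Service principal password stored as a secret in the project config.
const password = config.requireSecret("password");

// Identity the cluster uses to access Azure resources.
const adApp = new azuread.Application("aks", { displayName: "aks" });
const adSp = new azuread.ServicePrincipal("aksSp", {
    applicationId: adApp.applicationId,
});
const adSpPassword = new azuread.ServicePrincipalPassword("aksSpPassword", {
    servicePrincipalId: adSp.id,
    value: password, // accepted by older azuread versions; newer ones generate it
    endDate: "2099-01-01T00:00:00Z",
});

// AKS cluster; Azure allocates the virtual network for us.
const cluster = new azure.containerservice.KubernetesCluster("aksCluster", {
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location,
    dnsPrefix: "aks-kube",
    defaultNodePool: {
        name: "aksagentpool",
        nodeCount: nodeCount,
        vmSize: nodeSize,
    },
    servicePrincipal: {
        clientId: adApp.applicationId,
        clientSecret: adSpPassword.value,
    },
});

// Raw kubeconfig for use with kubectl.
export const kubeconfig = cluster.kubeConfigRaw;
```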
On GCP, we create the Kubernetes cluster using the variables we set in the configuration file. Note that for the nodes, we specify the *oauthScopes*, which are the Google API scopes available to all of the node VMs under the "default" service account. Because GKE uses gcloud to authenticate to the service, we have to create a [*kubeconfig*](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file that uses gcloud. The kubeconfig file lets us communicate with our Kubernetes cluster.
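Again, the code isn't shown above; a minimal sketch assuming the `@pulumi/gcp` package might look like the following (the kubeconfig template follows the widely used gcloud auth-provider pattern, and names like `gke-cluster` are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const config = new pulumi.Config();
const nodeCount = config.getNumber("nodeCount") || 2;
const nodeMachineType = config.get("nodeMachineType") || "n1-standard-1";

// GKE cluster; GCP allocates the network unless one is specified.
const cluster = new gcp.container.Cluster("gke-cluster", {
    initialNodeCount: nodeCount,
    nodeConfig: {
        machineType: nodeMachineType,
        // Google API scopes available to the node VMs under the
        // "default" service account.
        oauthScopes: [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    },
});

// Build a kubeconfig that authenticates through gcloud.
export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });
```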
This article covered how to set up a Kubernetes cluster on AWS, Azure, and GCP using Pulumi. Creating a cluster differs among cloud providers, but the process is generally the same: we defined configuration parameters such as node type, node count, and passwords, instantiated the cluster, and exported a kubeconfig file that we can use with kubectl.
This is the first in a series of articles on using infrastructure as code for Kubernetes. In the next article, we'll cover basic Kubernetes objects such as pods, services, and volumes, as well as higher-level abstractions such as Deployments and ReplicaSets. Stay tuned! In the meantime, you can learn more about Pulumi at [pulumi.com](https://www.pulumi.com/).