---
title_tag: "Access Created Kubernetes Cluster | Crosswalk"
meta_desc: "This page provides a guide on how to try out a newly created Kubernetes cluster."
title: "Access clusters"
h1: "Accessing Kubernetes clusters"
meta_image: /images/docs/meta-images/docs-clouds-kubernetes-meta-image.png
---
{{< chooser cloud "aws,azure,gcp" / >}}
After the cluster is created with a Pulumi update, the stack will have outputs with fields such as the cluster's `kubeconfig` file contents and its cluster name for reference.
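For example, you can inspect these outputs with the Pulumi CLI; a quick check (the exact output names depend on your cluster stack):

```bash
# List all of the stack's outputs.
$ pulumi stack output

# Print a single output, such as the kubeconfig. Add --show-secrets if
# the output is marked as a secret.
$ pulumi stack output kubeconfig
```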
{{% choosable cloud aws %}}
The full code for this stack is on GitHub.
{{% /choosable %}}
{{% choosable cloud azure %}}
The full code for this stack is on GitHub.
{{% /choosable %}}
{{% choosable cloud gcp %}}
The full code for this stack is on GitHub.
{{% /choosable %}}
## Overview

We'll explore how to:

- Access the Cluster
- Query the Cluster
- Deploy a Workload

## Access the Cluster
{{% choosable cloud aws %}}
In EKS, the account caller will be placed into the `system:masters` Kubernetes RBAC group by default. The `kubeconfig` generated will be specific to this primary cluster creator use case, and it must be copied and reconfigured for use with other IAM roles the caller assumes, as demonstrated in Configure Access Control.
### As an Admin

#### Authentication

Authenticate as the `admins` role from the Identity stack.

```bash
$ aws sts assume-role --role-arn `pulumi stack output adminsIamRoleArn` --role-session-name k8s-admin
```
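The `kubeconfig` edits below let `aws-iam-authenticator` assume the role on each request. If you instead want your shell itself to use the assumed role's temporary credentials, one approach is sketched below (this assumes the `jq` CLI is installed; it is not part of the original walkthrough):

```bash
# Assume the role and capture the temporary credentials it returns.
$ CREDS=$(aws sts assume-role --role-arn `pulumi stack output adminsIamRoleArn` --role-session-name k8s-admin)

# Export the temporary credentials for subsequent AWS CLI and kubectl calls.
$ export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
$ export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
$ export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
```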
#### Kubeconfig Setup

To access your new Kubernetes cluster using `kubectl`, we need to set up the `kubeconfig` file from the Cluster Configuration stack, and export the environment variable for `kubectl` usage.

Set up the `KUBECONFIG` environment variable.

```bash
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json
```
Get the `admins` IAM role ARN.

```bash
$ pulumi stack output adminsIamRoleArn
arn:aws:iam::000000000000:role/admins-eksClusterAdmin-0627674
```
Make a copy of the `kubeconfig` file that will be edited for the `admins` to use the `adminsIamRoleArn` output.

```bash
$ pulumi stack output kubeconfig > kubeconfig-admin.json
```
Edit `kubeconfig-admin.json` to use a role for authentication in the `args` of the `aws-iam-authenticator`, e.g.

```json
...
"users": [
    {
        "name": "aws",
        "user": {
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1alpha1",
                "args": [
                    "token",
                    "-i",
                    "k8s-aws-cluster-eksCluster-1ef1afe",
                    "-r",
                    "arn:aws:iam::000000000000:role/admins-eksClusterAdmin-0627674"
                ],
                "command": "aws-iam-authenticator"
            }
        }
    }
]
```
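With the kubeconfig in place, you can run a quick sanity check that the `admins` role has full cluster access (a hedged example; the exact permissions depend on the RBAC configured in Configure Access Control):

```bash
# As the admins role, this should report "yes" for a cluster admin.
$ kubectl auth can-i '*' '*'
```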
### As a Developer

#### Authentication

Authenticate as the `devs` role from the Identity stack.

```bash
$ aws sts assume-role --role-arn `pulumi stack output devsIamRoleArn` --role-session-name k8s-devs
```
#### Kubeconfig Setup

To access your new Kubernetes cluster using `kubectl`, we need to set up the `kubeconfig` file from the Cluster Configuration stack, and export the environment variable for `kubectl` usage.

Set up the `KUBECONFIG` environment variable.

```bash
$ export KUBECONFIG=`pwd`/kubeconfig-devs.json
```
Get the `devs` IAM role ARN.

```bash
$ pulumi stack output devsIamRoleArn
arn:aws:iam::000000000000:role/devs-eksClusterDeveloper-e332028
```
Make a copy of the `kubeconfig` file that will be edited for the `devs` to use the `devsIamRoleArn` output.

```bash
$ pulumi stack output kubeconfig > kubeconfig-devs.json
```
Edit `kubeconfig-devs.json` to use a role for authentication in the `args` of the `aws-iam-authenticator`, e.g.

```json
...
"users": [
    {
        "name": "aws",
        "user": {
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1alpha1",
                "args": [
                    "token",
                    "-i",
                    "k8s-aws-cluster-eksCluster-1ef1afe",
                    "-r",
                    "arn:aws:iam::000000000000:role/devs-eksClusterDeveloper-e332028"
                ],
                "command": "aws-iam-authenticator"
            }
        }
    }
]
```
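Similarly, you can sanity check the `devs` role's more limited permissions; for example (assuming the RBAC rules from Configure Access Control scope the `devs` to their Namespace):

```bash
# Cluster-wide access should generally be denied for the devs role...
$ kubectl auth can-i get pods --all-namespaces

# ...while access within the designated developer Namespace should be allowed.
$ kubectl auth can-i get pods -n `pulumi stack output appsNamespaceName`
```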
{{% /choosable %}}
{{% choosable cloud azure %}}
In AKS, the account caller will be placed into the `system:masters` Kubernetes RBAC group by default. Two `kubeconfig` files will be generated, specific to the admin and cluster user use cases.

To configure the cluster for use with IAM roles, check out Configure Access Control.
### Authentication

Authenticate as the ServicePrincipal from the Identity stack.

```bash
$ az login --service-principal --username $ARM_CLIENT_ID --password $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
```
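To confirm the login succeeded, you can check the active account (a simple verification step, not part of the original walkthrough):

```bash
# Show the subscription and identity the Azure CLI is now using.
$ az account show
```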
### Admin Kubeconfig Setup

To access your new Kubernetes cluster as an admin using `kubectl`, we need to set up the `kubeconfig` file.

```bash
$ pulumi stack output kubeconfigAdmin > kubeconfig-admin.json
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json
```
### Developers Kubeconfig Setup

To access your new Kubernetes cluster as a developer using `kubectl`, we need to set up the `kubeconfig` file.

```bash
$ pulumi stack output kubeconfig > kubeconfig-devs.json
$ export KUBECONFIG=`pwd`/kubeconfig-devs.json
```
{{% /choosable %}}
{{% choosable cloud gcp %}}
In Google Cloud, the account caller will be placed into the `system:masters` Kubernetes RBAC group by default. The `kubeconfig` generated will be specific to this primary cluster creator use case.

Google Cloud authentication will use tokens to operate as Members such as Users or ServiceAccounts, with certain permissions, as detailed in Configure Access Control.
### Admin Authentication

Authenticate as the `admins` ServiceAccount from the Identity stack.

```bash
$ pulumi stack output adminsIamServiceAccountSecret > k8s-admin-sa-key.json
$ gcloud auth activate-service-account --key-file k8s-admin-sa-key.json
```
### Developer Authentication

Authenticate as the `devs` ServiceAccount from the Identity stack.

```bash
$ pulumi stack output devsIamServiceAccountSecret > k8s-devs-sa-key.json
$ gcloud auth activate-service-account --key-file k8s-devs-sa-key.json
```
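Either way, you can verify which account is currently active (a quick check using standard gcloud commands):

```bash
# List credentialed accounts; the active one is marked with an asterisk.
$ gcloud auth list
```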
### Kubeconfig Setup

To access your new Kubernetes cluster using `kubectl`, we need to set up the `kubeconfig` file and export the environment variable for `kubectl` usage.

```bash
$ pulumi stack output --show-secrets kubeconfig > kubeconfig.json
$ export KUBECONFIG=`pwd`/kubeconfig.json
```
{{% /choosable %}}
## Query the Cluster

Get cluster information.

```bash
$ kubectl version
$ kubectl cluster-info
```

Get the Nodes.

```bash
$ kubectl get nodes -o wide --show-labels
```

Get all Pods in the cluster, and show output attributes.

```bash
$ kubectl get pods --all-namespaces -o wide --show-labels
```

Get all Pods in the designated developer Namespace, and show output attributes.

```bash
$ kubectl get pods -n `pulumi stack output appsNamespaceName` -o wide --show-labels
```

Get the ConfigMaps of the `kube-system` Namespace.

```bash
$ kubectl get cm -n kube-system
```
## Deploy a Workload
{{< chooser k8s-language "typescript,yaml" / >}}
{{% choosable k8s-language yaml %}}
Imperatively deploy an NGINX Pod and public load-balanced Service:

```bash
$ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80 --expose --service-overrides='{"spec":{"type":"LoadBalancer"}}'
```
After a few moments, once it is deployed, visit the load balancer URL.

{{< choosable cloud aws >}}

```bash
$ if ING_LB=$((kubectl get svc nginx -o template --template='{{(index .status.loadBalancer.ingress 0).hostname}}') 2>&1) ; then echo "http://$ING_LB"; else echo "LB is not ready yet."; fi
```

{{< /choosable >}}
{{< choosable cloud azure >}}

```bash
$ if ING_LB=$((kubectl get svc nginx -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}') 2>&1) ; then echo "http://$ING_LB"; else echo "LB is not ready yet."; fi
```

{{< /choosable >}}
{{< choosable cloud gcp >}}

```bash
$ if ING_LB=$((kubectl get svc nginx -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}') 2>&1) ; then echo "http://$ING_LB"; else echo "LB is not ready yet."; fi
```

{{< /choosable >}}

Delete the Pod and Service.

```bash
$ kubectl delete pod/nginx svc/nginx
```
{{% /choosable %}}
{{% choosable k8s-language typescript %}}
Declaratively deploy an NGINX Pod and public load-balanced Service:

```typescript
import * as k8s from "@pulumi/kubernetes";

const name = "nginx";

// Expose a k8s provider instance of the cluster, using the cluster's
// kubeconfig from the Cluster Configuration stack.
const provider = new k8s.Provider("provider", { kubeconfig: kubeconfig });

// Create an NGINX Pod.
const nginx = new k8s.core.v1.Pod(name,
    {
        metadata: { labels: { app: "nginx" } },
        spec: {
            containers: [
                {
                    name: name,
                    image: "nginx:latest",
                    ports: [{ name: "http", containerPort: 80 }],
                },
            ],
        },
    },
    { provider: provider },
);

// Create a LoadBalancer Service for the NGINX Pod.
const service = new k8s.core.v1.Service(name,
    {
        metadata: { labels: { app: "nginx" } },
        spec: {
            type: "LoadBalancer",
            ports: [{ port: 80, targetPort: "http" }],
            selector: { app: "nginx" },
        },
    },
    { provider: provider },
);
```
{{< choosable cloud aws >}}

```typescript
// Export the Service name and public LoadBalancer endpoint.
export const serviceName = service.metadata.name;
export const serviceHostname = service.status.loadBalancer.ingress[0].hostname;
```

After a few moments, visit the load balancer listed in `serviceHostname`.

```bash
$ curl `pulumi stack output serviceHostname`
```

{{< /choosable >}}
{{< choosable cloud azure >}}

```typescript
// Export the Service name and public LoadBalancer endpoint.
export const serviceName = service.metadata.name;
export const serviceIp = service.status.loadBalancer.ingress[0].ip;
```

After a few moments, visit the load balancer listed in `serviceIp`.

```bash
$ curl `pulumi stack output serviceIp`
```

{{< /choosable >}}
{{< choosable cloud gcp >}}

```typescript
// Export the Service name and public LoadBalancer endpoint.
export const serviceName = service.metadata.name;
export const serviceIp = service.status.loadBalancer.ingress[0].ip;
```

After a few moments, visit the load balancer listed in `serviceIp`.

```bash
$ curl `pulumi stack output serviceIp`
```

{{< /choosable >}}
To tear down NGINX, delete its definition in the Pulumi program and run a Pulumi update.
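For example, if the Pod and Service above live in your program's entrypoint, remove (or comment out) their definitions and run:

```bash
# Preview and apply the change; Pulumi will delete the removed resources.
$ pulumi up
```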
{{% /choosable %}}
## Learn More

See the official Kubernetes Basics tutorial for more details.