---
title: Kubernetes Ingress with AWS ALB Ingress Controller
h1: "Kubernetes Ingress with AWS ALB Ingress Controller and Pulumi Crosswalk for AWS"
date: "2019-07-09"
meta_desc: "In this post, we work through a simple example of running ALB based Kubernetes Ingresses with Pulumi EKS, AWS, and AWSX packages."
meta_image: "featured-img-albingresscontroller.png"
authors: ["nishi-davidson"]
tags: ["Kubernetes", "eks"]
---

[Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
is an API object that lets you manage external or internal HTTP(S)
access to [Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/)
running in a cluster.

[Amazon Elastic Load Balancing Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/features/#Details_for_Elastic_Load_Balancing_Products)
(ALB) is a popular AWS service that load balances incoming traffic at
the application layer across multiple targets, such as Amazon EC2
instances, in a region. ALB supports multiple features, including host- or
path-based routing, TLS (Transport Layer Security) termination,
WebSockets, HTTP/2, AWS WAF (web application firewall) integration,
integrated access logs, and health checks.

The [AWS ALB Ingress controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller)
is a Kubernetes SIG-AWS subproject; it was the second subproject added to
SIG-AWS after the [aws-authenticator subproject](https://github.com/kubernetes-sigs/aws-iam-authenticator).
The ALB Ingress controller triggers the creation of an ALB and the
necessary supporting AWS resources whenever a Kubernetes user declares
an Ingress resource on the cluster.
[TargetGroups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html)
are created for each backend specified in the Ingress resource.
[Listeners](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html)
are created for every port specified as an Ingress resource annotation.
When no port is specified, sensible defaults (80 or 443) are used.
[Rules](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html)
are created for each path specified in your Ingress resource. This
ensures that traffic to a specific path is routed to the correct
TargetGroup.

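To make that mapping concrete, here is a small illustrative sketch in plain TypeScript (the `IngressPath` shape and the service names are made up for illustration; this is not the controller's actual API):

```typescript
// Illustrative only: these shapes mirror Ingress fields, not a real controller API.
interface IngressPath {
    path: string;        // Ingress rule path, e.g. "/api/*"
    serviceName: string; // backend Kubernetes service
    servicePort: number; // backend service port
}

// One Ingress declaring two backends...
const paths: IngressPath[] = [
    { path: "/api/*", serviceName: "api-svc", servicePort: 8080 },
    { path: "/*", serviceName: "web-svc", servicePort: 80 },
];

// ...gives the controller one TargetGroup and one Rule per backend path,
// while Listeners come from the port annotations (defaulting to 80/443).
const targetGroups = paths.map(p => `${p.serviceName}:${p.servicePort}`);
const rules = paths.map(p => p.path);

console.log(targetGroups); // [ 'api-svc:8080', 'web-svc:80' ]
console.log(rules);        // [ '/api/*', '/*' ]
```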
In this post, we will work through a simple example of running ALB-based
Kubernetes Ingresses with the Pulumi
[EKS](https://github.com/pulumi/pulumi-eks),
[AWS](https://github.com/pulumi/pulumi-aws), and
[AWSX](https://github.com/pulumi/pulumi-awsx/tree/master/sdk/nodejs)
packages.

<!--more-->

## Step 1: Initialize Pulumi project and stack

[Install the Pulumi CLI](/docs/get-started/)
and set up your [AWS credentials](/docs/clouds/aws/get-started/).
Initialize a new [Pulumi project](/docs/concepts/projects/)
and [Pulumi stack](/docs/cli/commands/pulumi_stack/) from the
available programming [language
templates](https://github.com/pulumi/templates). We will use the
`aws-typescript` template here and install all library
dependencies.

```
$ brew install pulumi/tap/pulumi # download pulumi CLI
$ mkdir eks-alb-ingress && cd eks-alb-ingress
$ pulumi new aws-typescript
$ npm install --save @pulumi/kubernetes @pulumi/eks
$ ls -la
drwxr-xr-x   10 nishidavidson  staff    320 Jun 18 18:22 .
drwxr-xr-x+ 102 nishidavidson  staff   3264 Jun 18 18:13 ..
-rw-------    1 nishidavidson  staff     21 Jun 18 18:22 .gitignore
-rw-r--r--    1 nishidavidson  staff     32 Jun 18 18:22 Pulumi.dev.yaml
-rw-------    1 nishidavidson  staff     91 Jun 18 18:22 Pulumi.yaml
-rw-------    1 nishidavidson  staff    273 Jun 18 18:22 index.ts
drwxr-xr-x   95 nishidavidson  staff   3040 Jun 18 18:22 node_modules
-rw-r--r--    1 nishidavidson  staff  50650 Jun 18 18:22 package-lock.json
-rw-------    1 nishidavidson  staff    228 Jun 18 18:22 package.json
-rw-------    1 nishidavidson  staff    522 Jun 18 18:22 tsconfig.json
```

## Step 2: Create an EKS cluster

Once the steps above are complete, update the TypeScript code in the
`index.ts` file to create an EKS cluster, and run `pulumi up` from
the command line:

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const vpc = new awsx.ec2.Vpc("vpc-alb-ingress-eks", {});

const cluster = new eks.Cluster("eks-cluster", {
    vpcId: vpc.id,
    subnetIds: vpc.publicSubnetIds,
    instanceType: "t2.medium",
    version: "1.12",
    nodeRootVolumeSize: 200,
    desiredCapacity: 3,
    maxSize: 4,
    minSize: 3,
    deployDashboard: false,
    vpcCniOptions: {
        warmIpTarget: 4
    }
});

export const clusterName = cluster.eksCluster.name;
export const kubeconfig = cluster.kubeconfig;
export const clusterNodeInstanceRoleName = cluster.instanceRoles.apply(
    roles => roles[0].name
);
export const nodesubnetId = cluster.core.subnetIds;
```

Configure the public subnets in the console as described in
[this guide](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/subnet_discovery/).

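As a sketch of what that guide asks for, these are the discovery tags the controller looks for on subnets, shown here as plain TypeScript data (`my-cluster` is a placeholder for your EKS cluster name):

```typescript
// Tags used by the controller's subnet discovery (see the linked guide).
// "my-cluster" is a placeholder; substitute your actual EKS cluster name.
const publicSubnetTags: Record<string, string> = {
    "kubernetes.io/cluster/my-cluster": "shared", // or "owned"
    "kubernetes.io/role/elb": "1",                // marks a public (internet-facing) subnet
};

const internalSubnetTags: Record<string, string> = {
    "kubernetes.io/cluster/my-cluster": "shared",
    "kubernetes.io/role/internal-elb": "1",       // marks a private (internal) subnet
};

console.log(Object.keys(publicSubnetTags)); // the two tag keys expected on public subnets
```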
## Step 3: Deploy AWS ALB Ingress Controller

Let's confirm that the EKS cluster is up using the following commands:

```
$ pulumi stack output kubeconfig > kubeconfig.yaml
$ export KUBECONFIG=kubeconfig.yaml
$ kubectl get nodes
NAME                          STATUS   ROLES    AGE    VERSION
ip-10-10-0-58.ec2.internal    Ready    <none>   7h8m   v1.12.7
ip-10-10-1-167.ec2.internal   Ready    <none>   7h8m   v1.12.7
ip-10-10-1-84.ec2.internal    Ready    <none>   7h8m   v1.12.7
```

Adequate IAM roles and policies must be configured in AWS and available to
the node(s) running the controller. How access is granted is up to you:
some attach the needed rights to the node's IAM role, while others
use projects like [kube2iam](https://github.com/jtblin/kube2iam). We
attach a minimal IAM policy to the EKS worker nodes and then declare
this on the EKS cluster as shown in the code below.

When declaring the ALB Ingress controller, we simply reuse the Helm
chart as part of the code. There is no need to rewrite all the logic or
install Tiller in the EKS cluster. This frees you from thinking about
RBAC for Helm, Tiller, and the Kubernetes cluster itself.

|
With the default "instance mode" Ingress traffic starts from the ALB and
|
|
|
|
|
reaches the
|
|
|
|
|
[NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
|
|
|
|
|
opened for the service. Traffic is then routed to the container Pods
|
|
|
|
|
within cluster. This is all encoded using Pulumi libraries below. If you
|
|
|
|
|
wish to use "ip-mode" with your Ingress such that traffic directly
|
|
|
|
|
reaches your pods, you will need to modify the
|
|
|
|
|
`alb.ingress.kubernetes.io/target-type` annotation when using the helm
|
|
|
|
|
chart.
|
|
|
|
|
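For example, the annotation map for an "ip mode" Ingress might look like the following sketch (illustrative values; `target-type` defaults to `instance` when omitted):

```typescript
// Illustrative ALB Ingress annotations; setting target-type to "ip" routes
// traffic straight to Pod IPs instead of through NodePorts.
const ipModeAnnotations: Record<string, string> = {
    "kubernetes.io/ingress.class": "alb",
    "alb.ingress.kubernetes.io/scheme": "internet-facing",
    "alb.ingress.kubernetes.io/target-type": "ip", // "instance" is the default
};

console.log(ipModeAnnotations["alb.ingress.kubernetes.io/target-type"]); // ip
```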

Append the code below to the `index.ts` file from Step 2 and run
`pulumi up`:

```typescript
// STEP 3: Declare the AWS ALB Ingress Controller.

// Create an IAM Policy for the Ingress controller called "ingressController-iam-policy" and read the policy ARN.
const ingressControllerPolicy = new aws.iam.Policy(
    "ingressController-iam-policy",
    {
        policy: {
            Version: "2012-10-17",
            Statement: [
                {
                    Effect: "Allow",
                    Action: [
                        "acm:DescribeCertificate",
                        "acm:ListCertificates",
                        "acm:GetCertificate"
                    ],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: [
                        "ec2:AuthorizeSecurityGroupIngress",
                        "ec2:CreateSecurityGroup",
                        "ec2:CreateTags",
                        "ec2:DeleteTags",
                        "ec2:DeleteSecurityGroup",
                        "ec2:DescribeInstances",
                        "ec2:DescribeInstanceStatus",
                        "ec2:DescribeSecurityGroups",
                        "ec2:DescribeSubnets",
                        "ec2:DescribeTags",
                        "ec2:DescribeVpcs",
                        "ec2:ModifyInstanceAttribute",
                        "ec2:ModifyNetworkInterfaceAttribute",
                        "ec2:RevokeSecurityGroupIngress"
                    ],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: [
                        "elasticloadbalancing:AddTags",
                        "elasticloadbalancing:CreateListener",
                        "elasticloadbalancing:CreateLoadBalancer",
                        "elasticloadbalancing:CreateRule",
                        "elasticloadbalancing:CreateTargetGroup",
                        "elasticloadbalancing:DeleteListener",
                        "elasticloadbalancing:DeleteLoadBalancer",
                        "elasticloadbalancing:DeleteRule",
                        "elasticloadbalancing:DeleteTargetGroup",
                        "elasticloadbalancing:DeregisterTargets",
                        "elasticloadbalancing:DescribeListeners",
                        "elasticloadbalancing:DescribeLoadBalancers",
                        "elasticloadbalancing:DescribeLoadBalancerAttributes",
                        "elasticloadbalancing:DescribeRules",
                        "elasticloadbalancing:DescribeSSLPolicies",
                        "elasticloadbalancing:DescribeTags",
                        "elasticloadbalancing:DescribeTargetGroups",
                        "elasticloadbalancing:DescribeTargetGroupAttributes",
                        "elasticloadbalancing:DescribeTargetHealth",
                        "elasticloadbalancing:ModifyListener",
                        "elasticloadbalancing:ModifyLoadBalancerAttributes",
                        "elasticloadbalancing:ModifyRule",
                        "elasticloadbalancing:ModifyTargetGroup",
                        "elasticloadbalancing:ModifyTargetGroupAttributes",
                        "elasticloadbalancing:RegisterTargets",
                        "elasticloadbalancing:RemoveTags",
                        "elasticloadbalancing:SetIpAddressType",
                        "elasticloadbalancing:SetSecurityGroups",
                        "elasticloadbalancing:SetSubnets",
                        "elasticloadbalancing:SetWebACL"
                    ],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: ["iam:GetServerCertificate", "iam:ListServerCertificates"],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: [
                        "waf-regional:GetWebACLForResource",
                        "waf-regional:GetWebACL",
                        "waf-regional:AssociateWebACL",
                        "waf-regional:DisassociateWebACL"
                    ],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: ["tag:GetResources", "tag:TagResources"],
                    Resource: "*"
                },
                {
                    Effect: "Allow",
                    Action: ["waf:GetWebACL"],
                    Resource: "*"
                }
            ]
        }
    }
);

// Attach this policy to the NodeInstanceRole of the worker nodes.
export const nodeinstanceRole = new aws.iam.RolePolicyAttachment(
    "eks-NodeInstanceRole-policy-attach",
    {
        policyArn: ingressControllerPolicy.arn,
        role: clusterNodeInstanceRoleName
    }
);

// Declare the ALB Ingress controller in one step with the Helm chart.
const albingresscntlr = new k8s.helm.v2.Chart(
    "alb",
    {
        chart:
            "http://storage.googleapis.com/kubernetes-charts-incubator/aws-alb-ingress-controller-0.1.9.tgz",
        values: {
            clusterName: clusterName,
            autoDiscoverAwsRegion: "true",
            autoDiscoverAwsVpcID: "true"
        }
    },
    { provider: cluster.provider }
);
```

Confirm that the alb-ingress-controller was created:

```
$ kubectl get pods -n default | grep alb
alb-aws-alb-ingress-controller-58f44d4bb8lxs6w

$ kubectl logs alb-aws-alb-ingress-controller-58f44d4bb8lxs6w
-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release:    v1.1.2
Build:      git-cc1c5971
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
```


Make sure the Ingress controller logs show no errors about missing subnet tags or a missing cluster name before proceeding to Step 4.

## Step 4: Deploy Sample Application

The Ingress controller should now be running on the EKS worker nodes.
Let's now create a sample "2048-game" application and expose it as an
Ingress on our EKS cluster. Append the code below to the `index.ts`
file from Step 3 and run `pulumi up`:

```typescript
function createNewNamespace(name: string): k8s.core.v1.Namespace {
    // Create a new namespace.
    return new k8s.core.v1.Namespace(
        name,
        { metadata: { name: name } },
        { provider: cluster.provider }
    );
}

// Define the 2048 namespace, deployment, and service.
const nsgame = createNewNamespace("2048-game");

const deploymentgame = new k8s.extensions.v1beta1.Deployment(
    "deployment-game",
    {
        metadata: { name: "deployment-game", namespace: "2048-game" },
        spec: {
            replicas: 5,
            template: {
                metadata: { labels: { app: "2048" } },
                spec: {
                    containers: [
                        {
                            image: "alexwhen/docker-2048",
                            imagePullPolicy: "Always",
                            name: "2048",
                            ports: [{ containerPort: 80 }]
                        }
                    ]
                }
            }
        }
    },
    { provider: cluster.provider }
);

const servicegame = new k8s.core.v1.Service(
    "service-game",
    {
        metadata: { name: "service-2048", namespace: "2048-game" },
        spec: {
            ports: [{ port: 80, targetPort: 80, protocol: "TCP" }],
            type: "NodePort",
            selector: { app: "2048" }
        }
    },
    { provider: cluster.provider }
);

// Declare the 2048 Ingress.
const ingressgame = new k8s.extensions.v1beta1.Ingress(
    "ingress-game",
    {
        metadata: {
            name: "2048-ingress",
            namespace: "2048-game",
            annotations: {
                "kubernetes.io/ingress.class": "alb",
                "alb.ingress.kubernetes.io/scheme": "internet-facing"
            },
            labels: { app: "2048-ingress" }
        },
        spec: {
            rules: [
                {
                    http: {
                        paths: [
                            {
                                path: "/*",
                                backend: { serviceName: "service-2048", servicePort: 80 }
                            }
                        ]
                    }
                }
            ]
        }
    },
    { provider: cluster.provider }
);
```

After a few seconds, verify the Ingress resource:

```
$ kubectl get ingress/2048-ingress -n 2048-game
NAME           HOSTS   ADDRESS                PORTS   AGE
2048-ingress   *       DNS-Name-Of-Your-ALB   80      3m
```


Open a browser and paste in your "DNS-Name-Of-Your-ALB". You should
be able to access your newly deployed 2048 game -- have fun!


Pulumi is open source and free to use. For more information on our
product platform, check out the following resources:

- [Pulumi Crosswalk for AWS Announcement](/blog/introducing-pulumi-crosswalk-for-aws-the-easiest-way-to-aws/)
- [Mapbox IOT-as-Code with Pulumi Crosswalk for AWS](/blog/mapbox-iot-as-code-with-pulumi-crosswalk-for-aws/)
- [Pulumi Crosswalk for AWS Documentation for ECS, EKS, ELB, and more](/docs/clouds/aws/guides/)