---
description: |
    Packer is able to create Amazon AMIs. To achieve this, Packer comes with
    multiple builders depending on the strategy you want to use to build the AMI.
layout: docs
page_title: 'Amazon AMI - Builders'
sidebar_current: 'docs-builders-amazon'
---
# Amazon AMI Builder
Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:
- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
  launching a source AMI and re-packaging it into a new AMI after
  provisioning. If in doubt, use this builder, which is the easiest to get
  started with.

- [amazon-instance](/docs/builders/amazon-instance.html) - Create
  instance-store AMIs by launching and provisioning a source instance, then
  rebundling it and uploading it to S3.

- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
  from an existing EC2 instance by mounting the root device and using a
  [Chroot](https://en.wikipedia.org/wiki/Chroot) environment to provision
  that device. This is an **advanced builder and should not be used by
  newcomers**. However, it is also the fastest way to build an EBS-backed AMI
  since no new EC2 instance needs to be launched.

- [amazon-ebssurrogate](/docs/builders/amazon-ebssurrogate.html) - Create
  EBS-backed AMIs from scratch. Works similarly to the `chroot` builder but
  does not require running in AWS. This is an **advanced builder and should
  not be used by newcomers**.
-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
generally recommends EBS-backed images nowadays.
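
For reference, a minimal `amazon-ebs` template looks roughly like the sketch
below. The region, AMI ID, and instance type are placeholder values, and
credentials are supplied by one of the methods described in the
[Authentication](#specifying-amazon-credentials) section further down.

``` json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "packer-example-{{timestamp}}"
    }
  ]
}
```

Running `packer build` against a template like this launches the source
instance, runs any provisioners, and registers the result as a new AMI.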
# Amazon EBS Volume Builder
Packer is able to create Amazon EBS Volumes which are preinitialized with a
filesystem and data.
- [amazon-ebsvolume](/docs/builders/amazon-ebsvolume.html) - Create EBS
  volumes by launching a source AMI with block devices mapped. Provision the
  instance, then destroy it, retaining the EBS volumes. A minimal template is
  sketched below.
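
Here is such a sketch, for illustration only; the region, AMI ID, device name,
and volume size are placeholder values. It asks the builder to create and keep
one extra 10 GiB `gp2` data volume:

``` json
{
  "builders": [
    {
      "type": "amazon-ebsvolume",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.medium",
      "ssh_username": "ubuntu",
      "ebs_volumes": [
        {
          "device_name": "/dev/xvdf",
          "delete_on_termination": false,
          "volume_size": 10,
          "volume_type": "gp2",
          "tags": {
            "Name": "example-data"
          }
        }
      ]
    }
  ]
}
```

Because `delete_on_termination` is `false`, the volume survives when the build
instance is destroyed at the end of the run.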
<span id="specifying-amazon-credentials"></span>
## Authentication
The Amazon builders offer a flexible means of providing credentials for
authentication. The following methods are supported, in this order, and
explained below:

- Static credentials
- Environment variables
- Shared credentials file
- EC2 Role
### Static Credentials
Static credentials can be provided in the form of an access key id and secret.
These look like:
``` json
{
  "access_key": "AKIAIOSFODNN7EXAMPLE",
  "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "region": "us-east-1",
  "type": "amazon-ebs"
}
```
### Environment variables

You can provide your credentials via the `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY` environment variables, representing your AWS Access
Key and AWS Secret Key, respectively. Note that setting your AWS credentials
using either of these environment variables will override the use of
`AWS_SHARED_CREDENTIALS_FILE` and `AWS_PROFILE`. The `AWS_DEFAULT_REGION` and
`AWS_SESSION_TOKEN` environment variables are also used, if applicable.

Usage:

    $ export AWS_ACCESS_KEY_ID="anaccesskey"
    $ export AWS_SECRET_ACCESS_KEY="asecretkey"
    $ export AWS_DEFAULT_REGION="us-west-2"
    $ packer build packer.json
### Shared Credentials file

You can use an AWS credentials file to specify your credentials. The default
location is `$HOME/.aws/credentials` on Linux and OS X, or
`%USERPROFILE%\.aws\credentials` on Windows. If Packer fails to detect
credentials inline, or in the environment, it will check this location. You
can optionally specify a different location by setting the
`AWS_SHARED_CREDENTIALS_FILE` environment variable.

The format of the credentials file is as follows:

    [default]
    aws_access_key_id=<your access key id>
    aws_secret_access_key=<your secret access key>
You may also configure the profile to use by setting the `profile`
configuration option, or setting the `AWS_PROFILE` environment variable:
``` json
{
  "profile": "customprofile",
  "region": "us-east-1",
  "type": "amazon-ebs"
}
```
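
Equivalently, the same profile can be selected from the environment rather
than the template; for example:

    $ export AWS_PROFILE=customprofile
    $ packer build packer.json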
### IAM Task or Instance Role

Finally, Packer will use credentials provided by the task's or instance's IAM
role, if it has one.

This is the preferred approach when running in EC2, as it lets you avoid
hard-coding credentials. Instead, credentials are leased on-the-fly by Packer,
which reduces the chance of leakage.

The following policy document provides the minimal set of permissions
necessary for Packer to work:
``` json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CopyImage",
      "ec2:CreateImage",
      "ec2:CreateKeypair",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSnapshot",
      "ec2:CreateTags",
      "ec2:CreateVolume",
      "ec2:DeleteKeyPair",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSnapshot",
      "ec2:DeleteVolume",
      "ec2:DeregisterImage",
      "ec2:DescribeImageAttribute",
      "ec2:DescribeImages",
      "ec2:DescribeInstances",
      "ec2:DescribeInstanceStatus",
      "ec2:DescribeRegions",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSnapshots",
      "ec2:DescribeSubnets",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:GetPasswordData",
      "ec2:ModifyImageAttribute",
      "ec2:ModifyInstanceAttribute",
      "ec2:ModifySnapshotAttribute",
      "ec2:RegisterImage",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:TerminateInstances"
    ],
    "Resource": "*"
  }]
}
```
Note that if you'd like to create a spot instance, you must also add:

    ec2:CreateLaunchTemplate
    ec2:DeleteLaunchTemplate
    ec2:CreateFleet

If you have the `spot_price` parameter set to `auto`, you must also add:

    ec2:DescribeSpotPriceHistory
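
Put together, a sketch of an additional policy statement covering spot builds
might look like the following; the `ec2:DescribeSpotPriceHistory` action is
only required when `spot_price` is set to `auto`:

``` json
{
  "Effect": "Allow",
  "Action": [
    "ec2:CreateLaunchTemplate",
    "ec2:DeleteLaunchTemplate",
    "ec2:CreateFleet",
    "ec2:DescribeSpotPriceHistory"
  ],
  "Resource": "*"
}
```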
## Troubleshooting
### Attaching IAM Policies to Roles
IAM policies can be associated with users or roles. If you use Packer with IAM
roles, you may encounter an error like this one:

    ==> amazon-ebs: Error launching source instance: You are not authorized to perform this operation.
You can read more about why this happens on the [Amazon Security
Blog](https://blogs.aws.amazon.com/security/post/Tx3M0IFB5XBOCQX/Granting-Permission-to-Launch-EC2-Instances-with-IAM-Roles-PassRole-Permission).

The example policy below may help Packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for Packer to
work, but specifics will depend on your use-case.
``` json
{
  "Sid": "PackerIAMPassRole",
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": [
    "*"
  ]
}
```
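
The `iam:PassRole` permission comes into play when a template asks Packer to
attach an instance profile to the build instance, for example via the
`iam_instance_profile` option. The profile name in the fragment below is
purely hypothetical:

``` json
{
  "type": "amazon-ebs",
  "iam_instance_profile": "packer-build-profile"
}
```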
If you are creating a temporary instance profile, you will also need the
following IAM permissions:
``` json
{
  "Sid": "PackerIAMCreateRole",
  "Effect": "Allow",
  "Action": [
    "iam:PassRole",
    "iam:CreateInstanceProfile",
    "iam:DeleteInstanceProfile",
    "iam:GetRole",
    "iam:GetInstanceProfile",
    "iam:DeleteRolePolicy",
    "iam:RemoveRoleFromInstanceProfile",
    "iam:CreateRole",
    "iam:DeleteRole",
    "iam:PutRolePolicy",
    "iam:AddRoleToInstanceProfile"
  ],
  "Resource": "*"
}
```
In cases where you are using a KMS key for encryption, your key policy will
need to grant the following permissions at a minimum:
``` json
{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Action": [
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*"
  ],
  "Resource": "*"
}
```
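
For context, a build typically points at such a key through the EBS-backed
builders' encryption options, roughly like the fragment below; the key ID is a
placeholder:

``` json
{
  "type": "amazon-ebs",
  "encrypt_boot": true,
  "kms_key_id": "12345678-1234-1234-1234-123456789012"
}
```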
### Checking that system time is current

Amazon uses the current time as part of the [request signing
process](http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html). If
your system clock is too skewed from the current time, your requests might
fail. If that's the case, you might see an error like this:

    ==> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
If you suspect your system's date is wrong, you can compare it against
<http://www.time.gov/>. On Linux/OS X, you can run the `date` command to get
the current time. If you're on Linux, you can try setting the time with ntp by
running `sudo ntpd -q`.
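
On a Linux host, the check and the one-off correction mentioned above look
like this:

    $ date          # compare the output against http://www.time.gov/
    $ sudo ntpd -q  # one-off clock sync via ntp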
### `exceeded wait attempts` while waiting for tasks to complete

We use the AWS SDK's built-in waiters to wait for longer-running tasks to
complete. These waiters have default delays between queries and a default
maximum number of queries that don't always work for our users.

If you find that you are being rate-limited or have exceeded your max wait
attempts, you can override the defaults by setting the following Packer
environment variables (note that these will apply to all AWS tasks that we
have to wait for); a usage example follows the list below:

- `AWS_MAX_ATTEMPTS` - How many times to re-send a status update request.
  Excepting tasks that we know can take an extremely long time, this defaults
  to 40 tries.

- `AWS_POLL_DELAY_SECONDS` - How many seconds to wait between status update
  requests. Generally defaults to 2 or 5 seconds, depending on the task.
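
For example, to poll less often and allow more retries during a long-running
image copy, you might export something like the following before the build;
the values here are arbitrary, illustrative choices:

    $ export AWS_MAX_ATTEMPTS=600
    $ export AWS_POLL_DELAY_SECONDS=10
    $ packer build packer.json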
### `ResourceNotReady: failed waiting for successful resource state`

This error message can appear for several reasons, generally during image
copy/encryption. It is often the result of a KMS misconfiguration. Examples of
possible misconfigurations are:

- You provided an invalid `kms_key_id`.
- The KMS key you provided is a valid key, but it is not in the region you
  told Packer to use it in.
- The KMS key you provided is a valid key, but it does not have all of the
  necessary policy permissions for an image copy (see above for the necessary
  KMS policies).
- You are using STS credentials that expired during a long-running call.