---
description: |
  The Packer Amazon Import post-processor takes an OVA artifact from various
  builders and imports it to an AMI available to Amazon Web Services EC2.
layout: docs
page_title: Amazon Import - Post-Processors
sidebar_title: Amazon Import
---
# Amazon Import Post-Processor

Type: `amazon-import`

The Packer Amazon Import post-processor takes an OVA artifact from various
builders and imports it to an AMI available to Amazon Web Services EC2.

~> This post-processor is for advanced users. It depends on specific IAM
roles inside AWS and is best used with images that operate with the EC2
configuration model (e.g., cloud-init for Linux systems). Please ensure you read
the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html)
before using this post-processor.
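
The import mechanism relies on a service role that the VM Import service can
assume, named `vmimport` by default (see `role_name` below). As a rough
reference only, and not a substitute for the AWS prerequisites linked above,
the trust policy attached to that role generally looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
```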
## How Does it Work?

The import process works by making a temporary copy of the OVA in an S3 bucket
and then starting an EC2 import task against that file. Once the task
completes, an AMI containing the converted virtual machine is returned. The
temporary OVA copy in S3 can be discarded after the import is complete.

The import process run by AWS modifies the uploaded image so that it can boot
and operate in the AWS EC2 environment. However, it does not make every change
needed for the machine to run well in EC2. Take care around console output from
the machine, as debugging can be very difficult without it. You may also want
to include tools suitable for instances in EC2, such as `cloud-init` for Linux.
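
For instance, a shell provisioner in the builder stage can install `cloud-init`
before the image is exported. This is only a sketch and assumes a Debian- or
Ubuntu-based guest with network access; adjust the commands for your
distribution:

```json
{
  "type": "shell",
  "inline": [
    "sudo apt-get update",
    "sudo apt-get install -y cloud-init"
  ]
}
```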
Further information about the import process can be found in AWS's [EC2
Import/Export Instance
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instances_of_your_vm.html).

## Configuration

There are some configuration options available for the post-processor. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.

Required:
- `access_key` (string) - The access key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon#specifying-amazon-credentials)
- `region` (string) - The name of the region, such as `us-east-1`, in which to
upload the OVA file to S3 and create the AMI. A list of valid regions can
be obtained with AWS CLI tools or by consulting the AWS website.
- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file
will be copied to for import. This bucket must exist when the
post-processor is run.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon#specifying-amazon-credentials)

Optional:
- `ami_description` (string) - The description to set for the resulting
imported AMI. By default this description is generated by the AMI import
process.
- `ami_encrypt` (boolean) - Encrypt the resulting AMI using KMS. This defaults
to `false`.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the imported AMI. By default no groups have permission to launch the
AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than `all`.
- `ami_kms_key` (string) - The ID of the KMS key used to encrypt the AMI
if `ami_encrypt` is true. If set, the role specified in `role_name` must
be granted access to use this key. If not set, the account default KMS key
will be used.
- `ami_name` (string) - The name of the AMI within the console. If not
specified, this will default to something like `ami-import-sfwerwf`. Please
note, specifying this option will result in a slightly longer execution
time.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the imported AMI. By default no users other than the one importing
the AMI have permission to launch it.
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with AWS EC2. Specify an alternative
endpoint such as `https://ec2.custom.endpoint.com`.
- `format` (string) - One of: `ova`, `raw`, `vhd`, `vhdx`, or `vmdk`. This
specifies the format of the source virtual machine image. The resulting
artifact from the builder is assumed to have a file extension matching the
format. This defaults to `ova`.
- `insecure_skip_tls_verify` (boolean) - This allows skipping TLS
verification of the AWS EC2 endpoint. The default is `false`.
- `keep_input_artifact` (boolean) - If true, do not delete the source virtual
machine image after importing it to the cloud. Defaults to `false`.
- `license_type` (string) - The license type to be used for the Amazon
Machine Image (AMI) after importing. Valid values: `AWS` or `BYOL`
(default). For more details regarding licensing, see
[Prerequisites](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html)
in the VM Import/Export User Guide.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles)
for more details.
- `role_name` (string) - The name of the role to use when not using the
default role, `vmimport`.
- `s3_encryption` (string) - One of: `aws:kms`, or `AES256`. The algorithm
used to encrypt the artifact in S3. This **does not** encrypt the
resulting AMI, and is only used to encrypt the uploaded artifact before
it becomes an AMI. By default no encryption is used.
- `s3_encryption_key` (string) - The KMS key ID to use when `aws:kms` is
specified in `s3_encryption`. This setting is ignored if `AES256` is used,
as Amazon does not currently support custom AES keys when using the VM
import service. If set, the role specified in `role_name` must be granted
access to use this key. If not set, and `s3_encryption` is set to `aws:kms`,
the account default KMS key will be used.
- `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the
OVA file will be copied to for import. If not specified, this will default
to "packer-import-{{timestamp}}.ova". This key (i.e., the uploaded OVA)
will be removed after import, unless `skip_clean` is `true`. This is
treated as a [template engine](/docs/templates/engine). Therefore, you
may use user variables and template functions in this field.
- `skip_clean` (boolean) - Whether we should skip removing the OVA file
uploaded to S3 after the import process has completed. `true` means that we
should leave it in the S3 bucket, `false` means to clean it out. Defaults
to `false`.
- `skip_region_validation` (boolean) - Set to `true` if you want to skip
validation of the region configuration option. Defaults to `false`.
- `tags` (object of key/value strings) - Tags applied to the created AMI and
relevant snapshots.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SESSION_TOKEN`
environmental variable.
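
To illustrate a few of the optional settings above, the following fragment is a
sketch of a post-processor block that uploads under a templated key and
encrypts both the uploaded artifact and the resulting AMI. The KMS key ID and
the `app_name` user variable are placeholders, and credentials are assumed to
come from environment variables or a shared profile:

```json
{
  "type": "amazon-import",
  "region": "us-east-1",
  "s3_bucket_name": "importbucket",
  "s3_key_name": "import-{{user `app_name`}}-{{timestamp}}.ova",
  "s3_encryption": "aws:kms",
  "s3_encryption_key": "your-kms-key-id",
  "ami_encrypt": true,
  "ami_kms_key": "your-kms-key-id",
  "role_name": "vmimport"
}
```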
## Basic Example

Here is a basic example. This assumes that the builder has produced an OVA
artifact for us to work with, and that the IAM roles for import exist in the
AWS account being imported into.
```json
{
  "type": "amazon-import",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "s3_bucket_name": "importbucket",
  "license_type": "BYOL",
  "tags": {
    "Description": "packer amazon-import {{timestamp}}"
  }
}
```

This will take the OVA generated by a builder and upload it to S3. In this
case, an existing bucket called `importbucket` in the `us-east-1` region will
be where the copy is placed. The key name of the copy will be a default name
generated by Packer.

Once uploaded, the import process will start, creating an AMI in the
`us-east-1` region with a "Description" tag applied to both the AMI and the
snapshots associated with it. Note: the import process does not allow you to
name the AMI; the name is automatically generated by AWS.

After tagging is completed, the OVA uploaded to S3 will be removed.

-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
## VMWare Example

This is an example that uses the `vmware-iso` builder and exports the `.ova`
file using `ovftool`.
```json
"post-processors" : [
[
{
"type": "shell-local",
"inline": [ "/usr/bin/ovftool <packer-output-directory>/<vmware-name>.vmx <packer-output-directory>/<vmware-name>.ova" ]
},
{
"files": [
"<packer-output-directory>/<vmware-name>.ova"
],
"type": "artifice"
},
{
"type": "amazon-import",
"access_key": "YOUR KEY HERE",
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"s3_bucket_name": "importbucket",
"license_type": "BYOL",
"tags": {
"Description": "packer amazon-import {{timestamp}}"
}
}
]
]
```
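
In this chain, the `shell-local` post-processor converts the `.vmx` output to
an `.ova` with `ovftool`, the `artifice` post-processor replaces the build
artifact with that `.ova` file, and the `amazon-import` post-processor then
uploads it to S3 and imports it as an AMI.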
## Amazon Permissions

You'll need at least the following permissions in the policy for your IAM user
in order to successfully upload an image via the amazon-import post-processor.
```json
"ec2:CancelImportTask",
"ec2:CopyImage",
"ec2:CreateTags",
"ec2:DescribeImages",
"ec2:DescribeImportImageTasks",
"ec2:ImportImage",
"ec2:ModifyImageAttribute"
"ec2:DeregisterImage"
```
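
As a rough sketch, these actions might be combined into a policy document like
the one below, together with the S3 access the upload step needs (the S3
statement is an assumption here, not part of the list above). The bucket name
`importbucket` matches the examples on this page and is illustrative; scope the
resources to your own bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CancelImportTask",
        "ec2:CopyImage",
        "ec2:CreateTags",
        "ec2:DeregisterImage",
        "ec2:DescribeImages",
        "ec2:DescribeImportImageTasks",
        "ec2:ImportImage",
        "ec2:ModifyImageAttribute"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::importbucket",
        "arn:aws:s3:::importbucket/*"
      ]
    }
  ]
}
```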
## Troubleshooting Timeouts

The amazon-import feature can take a long time to upload and convert your OVAs
into AMIs; if you find that your build is failing because you have exceeded
your max retries or find yourself being rate limited, you can override the max
retries and the delay between retries by setting the environment variables
`AWS_MAX_ATTEMPTS` and `AWS_POLL_DELAY_SECONDS` on the machine running the
Packer build. By default, the waiter that waits for your image to be imported
from S3 will retry for up to an hour: it retries up to 720 times with a 5
second delay in between retries.

This is dramatically higher than many of our other waiters, to account for how
long this process can take.
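
For example, to allow roughly two hours of polling instead of one, you could
run the build with something like the following; the values and the template
filename are illustrative:

```shell
AWS_MAX_ATTEMPTS=1440 AWS_POLL_DELAY_SECONDS=5 packer build template.json
```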