Accept any OVA. Fix names for tasks/import/copy. Update docs.

- s3_key_name is now optional; the default is equivalent to
  "packer-import-{{timestamp}}", as sketched below
- Remove restriction on the builder used; anything producing an OVA is accepted
- Fix the task and OVA descriptions passed to the import API call,
  correctly adding a timestamp to both
- Documentation updated
  - Remove VMware-specific text
  - Mark s3_key_name as optional
  - Remove s3_key_name from the example now that it's optional
  - Explain the import process more clearly in the example
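
A minimal sketch of why the new default is equivalent to `{{timestamp}}`: the template `{{timestamp}}` function and the defaults added in this commit are both derived from the interpolation package's init time, which is captured once when Packer starts, so they agree for the whole run. This assumes the `template/interpolate` import path used in the diff below; the standalone program is illustrative only.

``` {.go}
package main

import (
	"fmt"

	"github.com/mitchellh/packer/template/interpolate"
)

func main() {
	// interpolate.InitTime is fixed at startup; {{timestamp}} in a template
	// renders the same Unix value, so the default S3 key below is effectively
	// "packer-import-{{timestamp}}.ova".
	key := fmt.Sprintf("packer-import-%d.ova", interpolate.InitTime.Unix())
	fmt.Println(key) // e.g. packer-import-1448399195.ova
}
```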
David Zanetti 2015-11-25 10:06:35 +13:00
parent 274630bd9c
commit 873dc89478
2 changed files with 16 additions and 23 deletions


@@ -22,12 +22,6 @@ import (
const BuilderId = "packer.post-processor.amazon-import"
// We accept the output from vmware or vmware-esx
var builtins = map[string]string{
"mitchellh.vmware": "amazon-import",
"mitchellh.vmware-esx": "amazon-import",
}
// Configuration of this post processor
type Config struct {
common.PackerConfig `mapstructure:",squash"`
@@ -64,10 +58,13 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
// Set defaults
if p.config.ImportTaskDesc == "" {
p.config.ImportTaskDesc = "packer-amazon-ova task"
p.config.ImportTaskDesc = fmt.Sprintf("packer-import-%d", interpolate.InitTime.Unix())
}
if p.config.ImportDiskDesc == "" {
p.config.ImportDiskDesc = "packer-amazon-ova disk"
p.config.ImportDiskDesc = fmt.Sprintf("packer-import-ova-%d", interpolate.InitTime.Unix())
}
if p.config.S3Key == "" {
p.config.S3Key = fmt.Sprintf("packer-import-%d.ova", interpolate.InitTime.Unix())
}
errs := new(packer.MultiError)
@@ -78,7 +75,6 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
// define all our required parameters
templates := map[string]*string{
"s3_bucket_name": &p.config.S3Bucket,
"s3_key_name": &p.config.S3Key,
}
// Check our required params are defined
for key, ptr := range templates {
@@ -103,10 +99,6 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
if err != nil {
return nil, false, err
}
// Confirm we're dealing with the result of a builder we like
if _, ok := builtins[artifact.BuilderId()]; !ok {
return nil, false, fmt.Errorf("Artifact type %s is not supported by this post-processor", artifact.BuilderId())
}
log.Println("Looking for OVA in artifact")
// Locate the files output from the builder
@@ -120,7 +112,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
// Hope we found something useful
if source == "" {
return nil, false, fmt.Errorf("OVA file not found")
return nil, false, fmt.Errorf("No OVA file found in artifact from builder")
}
// Set up the AWS session
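
The Go hunks above stop right before the post-processor sets up its AWS session. As context for the docs changes below, here is a minimal, self-contained sketch of the copy-then-import sequence the post-processor performs, assuming aws-sdk-go's `s3manager` uploader and the EC2 `ImportImage` call; the function name, region, and arguments are illustrative, not the post-processor's actual code.

``` {.go}
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadAndImport copies a local OVA into S3 and then asks EC2 to convert the
// copy into an AMI. Bucket, key and descriptions would come from the
// post-processor's configuration; here they are plain arguments.
func uploadAndImport(ovaPath, bucket, key, taskDesc, diskDesc string) (string, error) {
	sess := session.New(&aws.Config{Region: aws.String("us-east-1")})

	// Temporary copy of the OVA into the S3 bucket.
	f, err := os.Open(ovaPath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
	}); err != nil {
		return "", fmt.Errorf("upload failed: %s", err)
	}

	// Start the EC2 import task against the uploaded copy.
	task, err := ec2.New(sess).ImportImage(&ec2.ImportImageInput{
		Description: aws.String(taskDesc),
		DiskContainers: []*ec2.ImageDiskContainer{{
			Description: aws.String(diskDesc),
			Format:      aws.String("ova"),
			UserBucket: &ec2.UserBucket{
				S3Bucket: aws.String(bucket),
				S3Key:    aws.String(key),
			},
		}},
	})
	if err != nil {
		return "", fmt.Errorf("import failed: %s", err)
	}
	return *task.ImportTaskId, nil
}

func main() {
	id, err := uploadAndImport("output.ova", "importbucket", "packer-import.ova",
		"packer-import task", "packer-import-ova disk")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("started import task %s", id)
}
```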


@@ -1,6 +1,6 @@
---
description: |
The Packer Amazon Import post-processor takes an OVA artifact from the VMware builder and
The Packer Amazon Import post-processor takes an OVA artifact from various builders and
imports it to an AMI available to Amazon Web Services EC2.
layout: docs
page_title: 'Amazon Import Post-Processor'
@@ -10,13 +10,13 @@ page_title: 'Amazon Import Post-Processor'
Type: `amazon-import`
The Packer Amazon Import post-processor takes an OVA artifact from the VMware builder and imports it to an AMI available to Amazon Web Services EC2.
The Packer Amazon Import post-processor takes an OVA artifact from various builders and imports it to an AMI available to Amazon Web Services EC2.
\~> This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor.
## How Does it Work?
The import process operates by copying the OVA to an S3 bucket, and calling an import task in EC2 on the OVA file. Once completed, an AMI is returned containing the converted virtual machine.
The import process operates by making a temporary copy of the OVA to an S3 bucket, and calling an import task in EC2 on the OVA file. Once completed, an AMI is returned containing the converted virtual machine. The temporary OVA copy in S3 can be discarded after the import is complete.
The import process itself run by AWS includes modifications to the image uploaded, to allow it to boot and operate in the AWS EC2 environment. However, not all modifications required to make the machine run well in EC2 are performed. Take care around console output from the machine, as debugging can be very difficult without it. You may also want to include tools suitable for instances in EC2 such as cloud-init for Linux.
@@ -37,13 +37,13 @@ Required:
- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file will be copied to for import. This bucket must exist when the post-processor is run.
- `s3_key_name` (string) - The name of the key in `s3_bucket` where the OVA file will be copied to for import. This key will be removed after import, unless `skip_clean` is true.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
Optional:
- `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the OVA file will be copied to for import. If not specified, this will default to "packer-import-{{timestamp}}.ova". This key (ie, the uploaded OVA) will be removed after import, unless `skip_clean` is true.
- `skip_clean` (boolean) - Whether we should skip removing the OVA file uploaded to S3 after the import process has completed. "true" means that we should leave it in the S3 bucket, "false" means to clean it out. Defaults to "false".
- `tags` (object of key/value strings) - Tags applied to the created AMI and
@@ -51,7 +51,7 @@ Optional:
## Basic Example
Here is a basic example. This assumes that the builder has produced an OVA artifact for us to work with.
Here is a basic example. This assumes that the builder has produced an OVA artifact for us to work with, and IAM roles for import exist in the AWS account being imported into.
``` {.javascript}
{
@@ -60,7 +60,6 @@ Here is a basic example. This assumes that the builder has produced an OVA artif
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"s3_bucket_name": "importbucket",
"s3_key_name": "import.ova",
"tags": {
"Description": "packer amazon-import {{timestamp}}"
}
@@ -71,6 +70,8 @@ Here is a basic example. This assumes that the builder has produced an OVA artif
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
This will locate the OVA created by the builder, upload it into the S3 bucket called "importbucket" which must exist when the post-process runs, call the OVA file "import.ova" in that bucket, and then kick off an import process into an AMI. The region used for both the S3 upload and the AMI import will be "us-east-1".
This will take the OVA generated by a builder and upload it to S3. In this case, an existing bucket called "importbucket" in the "us-east-1" region will be where the copy is placed. The key name of the copy will be a default name generated by packer.
Once created, both the AMI and the snapshots associated with it would be tagged with a key called "Description" and a value of "packer amazon-import" with the timestamp appended.
Once uploaded, the import process will start, creating an AMI in the "us-east-1" region with a "Description" tag applied to both the AMI and the snapshots associated with it. Note: the import process does not allow you to name the AMI, the name is automatically generated by AWS.
After tagging is completed, the OVA uploaded to S3 will be removed.
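
To make those last steps concrete, here is a rough sketch of the wait, tag, and clean-up sequence described above, assuming aws-sdk-go's `DescribeImportImageTasks`, `CreateTags`, and `DeleteObject` calls; it is illustrative only, not the post-processor's actual implementation (which also tags the snapshots and honours `skip_clean`).

``` {.go}
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/s3"
)

// waitTagClean polls an import task until EC2 reports it completed, tags the
// resulting AMI, and removes the temporary OVA copy from S3.
func waitTagClean(sess *session.Session, taskID, bucket, key string) error {
	svc := ec2.New(sess)

	// Wait for the import task to finish and pick up the AMI it produced.
	var amiID string
	for {
		out, err := svc.DescribeImportImageTasks(&ec2.DescribeImportImageTasksInput{
			ImportTaskIds: []*string{aws.String(taskID)},
		})
		if err != nil {
			return err
		}
		task := out.ImportImageTasks[0]
		if aws.StringValue(task.Status) == "completed" {
			amiID = aws.StringValue(task.ImageId)
			break
		}
		time.Sleep(10 * time.Second)
	}

	// Apply the configured tags to the new AMI (the post-processor also tags
	// the snapshots behind it).
	if _, err := svc.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{aws.String(amiID)},
		Tags: []*ec2.Tag{{
			Key:   aws.String("Description"),
			Value: aws.String("packer amazon-import"),
		}},
	}); err != nil {
		return err
	}

	// Discard the temporary OVA copy; skipped when skip_clean is true.
	_, err := s3.New(sess).DeleteObject(&s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	return err
}

func main() {
	sess := session.New(&aws.Config{Region: aws.String("us-east-1")})
	if err := waitTagClean(sess, "import-ami-xxxxxxxx", "importbucket",
		"packer-import.ova"); err != nil {
		log.Fatal(err)
	}
}
```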