b1ff8c3bfc
This commit adds a builder that works like the EBS builders, except that it does not create an AMI; instead it is intended to create EBS volumes in an initialized state. For example, the following template can be used to create and export a set of 3 EBS volumes in a ZFS zpool named `data` for importing by instances running production systems:

```
{
  "variables": {
    "aws_access_key_id": "{{ env `AWS_ACCESS_KEY_ID` }}",
    "aws_secret_access_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}",
    "region": "{{ env `AWS_REGION` }}",
    "source_ami": "{{ env `PACKER_SOURCE_AMI` }}",
    "vpc_id": "{{ env `PACKER_VPC_ID` }}",
    "subnet_id": "{{ env `PACKER_SUBNET_ID` }}"
  },
  "builders": [{
    "type": "amazon-ebs-volume",
    "access_key": "{{ user `aws_access_key_id` }}",
    "secret_key": "{{ user `aws_secret_access_key` }}",
    "region": "{{user `region`}}",
    "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
    "ssh_pty": true,
    "instance_type": "t2.medium",
    "vpc_id": "{{user `vpc_id` }}",
    "subnet_id": "{{user `subnet_id` }}",
    "associate_public_ip_address": true,
    "source_ami": "{{user `source_ami` }}",
    "ssh_username": "ubuntu",
    "ssh_timeout": "5m",
    "ebs_volumes": [
      {
        "device_name": "/dev/xvdf",
        "delete_on_termination": false,
        "volume_size": 10,
        "volume_type": "gp2",
        "tags": {
          "Name": "TeamCity-Data1",
          "zpool": "data",
          "Component": "TeamCity"
        }
      },
      {
        "device_name": "/dev/xvdg",
        "delete_on_termination": false,
        "volume_size": 10,
        "volume_type": "gp2",
        "tags": {
          "Name": "TeamCity-Data2",
          "zpool": "data",
          "Component": "TeamCity"
        }
      },
      {
        "device_name": "/dev/xvdh",
        "delete_on_termination": false,
        "volume_size": 10,
        "volume_type": "gp2",
        "tags": {
          "Name": "TeamCity-Data3",
          "zpool": "data",
          "Component": "TeamCity"
        }
      }
    ]
  }],
  "provisioners": [
    {
      "type": "shell",
      "start_retry_timeout": "10m",
      "inline": [
        "DEBIAN_FRONTEND=noninteractive sudo apt-get update",
        "DEBIAN_FRONTEND=noninteractive sudo apt-get install -y zfs",
        "lsblk",
        "sudo parted /dev/xvdf --script mklabel GPT",
        "sudo parted /dev/xvdg --script mklabel GPT",
        "sudo parted /dev/xvdh --script mklabel GPT",
        "sudo zpool create -m none data raidz xvdf xvdg xvdh",
        "sudo zpool status",
        "sudo zpool export data",
        "sudo zpool status"
      ]
    }
  ]
}
```

StepModifyInstance and StepStopInstance are now shared between the EBS and EBS-Volume builders, so they have been moved into the AWS common directory and renamed to indicate that they only apply to EBS-backed builders.
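For context, a consuming instance would later attach the exported volumes and import the pool. A minimal sketch of that step, assuming the three volumes are attached to the instance and ZFS is already installed there:

```
# List pools that can be imported from the attached devices,
# then import the pool created by the Packer build above.
sudo zpool import
sudo zpool import data

# Verify the pool and its datasets.
sudo zpool status data
zfs list
```

Since the pool is created with `-m none` in the template, a mountpoint would still need to be assigned after import (e.g. `zfs set mountpoint=/data data`) before the data is usable.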
README.md
Packer
- Website: http://www.packer.io
- IRC: `#packer-tool` on Freenode
- Mailing list: Google Groups
Packer is a tool for building identical machine images for multiple platforms from a single source configuration.
Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer comes out of the box with support for the following platforms:
- Amazon EC2 (AMI). Both EBS-backed and instance-store AMIs
- Azure
- DigitalOcean
- Docker
- Google Compute Engine
- OpenStack
- Parallels
- QEMU. Both KVM and Xen images.
- VirtualBox
- VMware
Support for other platforms can be added via plugins.
The images that Packer creates can easily be turned into Vagrant boxes.
Quick Start
Note: There is a great introduction and getting started guide on the Packer website for those with a bit more patience. Otherwise, the quick start below will get you up and running quickly, at the cost of not explaining some key points.
First, download a pre-built Packer binary for your operating system or compile Packer yourself.
After Packer is installed, create your first template, which tells Packer what platforms to build images for and how you want to build them. In our case, we'll create a simple AMI that has Redis pre-installed. Save this file as `quick-start.json`. Export your AWS credentials as the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
```
{
  "variables": {
    "access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `access_key`}}",
    "secret_key": "{{user `secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-de0d9eb7",
    "instance_type": "t1.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
```
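As written, this minimal template simply re-bundles the source AMI without installing anything. To actually pre-install Redis, you could add a `provisioners` block at the top level of the template, alongside `builders`; the following is a sketch only, and the `redis-server` package name assumes an Ubuntu source AMI like the one above:

```
"provisioners": [{
  "type": "shell",
  "inline": [
    "sudo apt-get update",
    "sudo apt-get install -y redis-server"
  ]
}]
```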
Next, tell Packer to build the image:
```
$ packer build quick-start.json
...
```
Packer will build an AMI according to the "quick-start" template. The AMI will be available in your AWS account. To delete the AMI, you must manually delete it using the AWS console. Packer builds your images; it does not manage their lifecycle. Where they go, how they're run, and so on is up to you.
Documentation
Comprehensive documentation is viewable on the Packer website.
Developing Packer
See CONTRIBUTING.md for best practices and instructions on setting up your development environment to work on Packer.