Merge branch 'master' into custom-targetpath

This commit is contained in:
Olivier Tremblay 2015-08-20 07:26:22 -04:00
commit 661552dfd5
171 changed files with 7044 additions and 5727 deletions

View File

@ -1,3 +1,53 @@
## (Unreleased)
IMPROVEMENTS:
* builder/docker: Now supports Download so it can be used with the file
provisioner to download a file from a container. [GH-2585]
* post-processor/vagrant: Like the compress post-processor, vagrant now uses a
parallel gzip algorithm to compress vagrant boxes. [GH-2590]
BUG FIXES:
* builder/parallels: Fix interpolation in parallels_tools_guest_path [GH-2543]
## 0.8.5 (Aug 10, 2015)
FEATURES:
* **[Beta]** Artifice post-processor: Override packer artifacts during post-
processing. This allows you to extract artifacts from a packer builder
and use them with other post-processors like compress, docker, and Atlas.
IMPROVEMENTS:
* Many docs have been updated and corrected; big thanks to our contributors!
* builder/openstack: Add debug logging for IP addresses used for SSH [GH-2513]
* builder/openstack: Add option to use existing SSH keypair [GH-2512]
* builder/openstack: Add support for Glance metadata [GH-2434]
* builder/qemu and builder/vmware: Packer's VNC connection no longer asks for
an exclusive connection [GH-2522]
* provisioner/salt-masterless: Can now customize salt remote directories [GH-2519]
BUG FIXES:
* builder/amazon: Improve instance cleanup by storing id sooner [GH-2404]
* builder/amazon: Only fetch windows password when using WinRM communicator [GH-2538]
* builder/openstack: Support IPv6 SSH address [GH-2450]
* builder/openstack: Track new IP address discovered during RackConnect [GH-2514]
* builder/qemu: Add 100ms delay between VNC key events. [GH-2415]
* post-processor/atlas: atlas_url configuration option works now [GH-2478]
* post-processor/compress: Now supports interpolation in output config [GH-2414]
* provisioner/powershell: Elevated runs now receive environment variables [GH-2378]
* provisioner/salt-masterless: Clarify error messages when we can't create or
write to the temp directory [GH-2518]
* provisioner/salt-masterless: Copy state even if /srv/salt exists already [GH-1699]
* provisioner/salt-masterless: Make sure /etc/salt exists before writing to it [GH-2520]
* provisioner/winrm: Connect to the correct port when using NAT with
VirtualBox / VMware [GH-2399]
Note: 0.8.3 was pulled and 0.8.4 was skipped.
## 0.8.2 (July 17, 2015)
IMPROVEMENTS:

View File

@ -1,8 +1,6 @@
TEST?=./...
-VETARGS?=-asmdecl -atomic -bool -buildtags -copylocks -methods \
-	-nilfunc -printf -rangeloops -shift -structtags -unsafeptr
-default: test
+default: test vet dev
bin:
	@sh -c "$(CURDIR)/scripts/build.sh"
@ -16,6 +14,7 @@ generate:
	go generate ./...
test:
+	@echo "Running tests on:"; git symbolic-ref HEAD; git rev-parse HEAD
	go test $(TEST) $(TESTARGS) -timeout=10s
	@$(MAKE) vet
@ -31,19 +30,23 @@ testrace:
	go test -race $(TEST) $(TESTARGS)
updatedeps:
+	@echo "Updating deps on:"; git symbolic-ref HEAD; git rev-parse HEAD
	go get -u github.com/mitchellh/gox
	go get -u golang.org/x/tools/cmd/stringer
	go list ./... \
		| xargs go list -f '{{join .Deps "\n"}}' \
		| grep -v github.com/mitchellh/packer \
+		| grep -v '/internal/' \
		| sort -u \
		| xargs go get -f -u -v
+	@echo "Finished updating deps, now on:"; git symbolic-ref HEAD; git rev-parse HEAD
vet:
-	@go tool vet 2>/dev/null ; if [ $$? -eq 3 ]; then \
+	@echo "Running go vet on:"; git symbolic-ref HEAD; git rev-parse HEAD
+	@go vet 2>/dev/null ; if [ $$? -eq 3 ]; then \
		go get golang.org/x/tools/cmd/vet; \
	fi
-	@go tool vet $(VETARGS) . ; if [ $$? -eq 1 ]; then \
+	@go vet ./... ; if [ $$? -eq 1 ]; then \
		echo ""; \
		echo "Vet found suspicious constructs. Please check the reported constructs"; \
		echo "and fix them if necessary before submitting the code for reviewal."; \

View File

@ -31,6 +31,8 @@ install:
build_script: build_script:
- go test -v ./... - go test -v ./...
- go vet ./...
- git rev-parse HEAD
test: off test: off

View File

@ -5,7 +5,6 @@ import (
"log" "log"
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ec2"
"github.com/mitchellh/multistep" "github.com/mitchellh/multistep"
awscommon "github.com/mitchellh/packer/builder/amazon/common" awscommon "github.com/mitchellh/packer/builder/amazon/common"
@ -52,12 +51,12 @@ func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction {
} }
createVolume := &ec2.CreateVolumeInput{ createVolume := &ec2.CreateVolumeInput{
AvailabilityZone: instance.Placement.AvailabilityZone, AvailabilityZone: instance.Placement.AvailabilityZone,
Size: aws.Long(vs), Size: aws.Int64(vs),
SnapshotID: rootDevice.EBS.SnapshotID, SnapshotID: rootDevice.EBS.SnapshotID,
VolumeType: rootDevice.EBS.VolumeType, VolumeType: rootDevice.EBS.VolumeType,
IOPS: rootDevice.EBS.IOPS, IOPS: rootDevice.EBS.IOPS,
} }
log.Printf("Create args: %s", awsutil.StringValue(createVolume)) log.Printf("Create args: %s", createVolume)
createVolumeResp, err := ec2conn.CreateVolume(createVolume) createVolumeResp, err := ec2conn.CreateVolume(createVolume)
if err != nil { if err != nil {

View File

@ -34,7 +34,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
} }
if s.RootVolumeSize > *newDevice.EBS.VolumeSize { if s.RootVolumeSize > *newDevice.EBS.VolumeSize {
newDevice.EBS.VolumeSize = aws.Long(s.RootVolumeSize) newDevice.EBS.VolumeSize = aws.Int64(s.RootVolumeSize)
} }
} }
@ -64,7 +64,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state // Set the AMI ID in the state
ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID)) ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID))
amis := make(map[string]string) amis := make(map[string]string)
amis[ec2conn.Config.Region] = *registerResp.ImageID amis[*ec2conn.Config.Region] = *registerResp.ImageID
state.Put("amis", amis) state.Put("amis", amis)
// Wait for the image to become ready // Wait for the image to become ready

View File

@ -9,6 +9,7 @@ import (
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/mitchellh/packer/template/interpolate" "github.com/mitchellh/packer/template/interpolate"
) )
@ -31,7 +32,7 @@ func (c *AccessConfig) Config() (*aws.Config, error) {
}}, }},
&credentials.EnvProvider{}, &credentials.EnvProvider{},
&credentials.SharedCredentialsProvider{Filename: "", Profile: ""}, &credentials.SharedCredentialsProvider{Filename: "", Profile: ""},
&credentials.EC2RoleProvider{}, &ec2rolecreds.EC2RoleProvider{},
}) })
region, err := c.Region() region, err := c.Region()
@ -40,9 +41,9 @@ func (c *AccessConfig) Config() (*aws.Config, error) {
} }
return &aws.Config{ return &aws.Config{
Region: region, Region: aws.String(region),
Credentials: creds, Credentials: creds,
MaxRetries: 11, MaxRetries: aws.Int(11),
}, nil }, nil
} }
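Much of the AWS churn in this merge is mechanical: the aws-sdk-go helpers were renamed (aws.Long → aws.Int64, aws.Boolean → aws.Bool) and aws.Config now takes pointer fields, which is why the hunk above wraps the region and retry count in aws.String and aws.Int. A minimal, self-contained sketch of the new pointer-helper style (the values are made up, not from this diff):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	// aws.Config fields are now pointers, so scalars get wrapped in helpers.
	cfg := &aws.Config{
		Region:     aws.String("us-east-1"), // was: Region: region
		MaxRetries: aws.Int(11),             // was: MaxRetries: 11
	}

	size := aws.Int64(8)   // replaces aws.Long(8)
	keep := aws.Bool(true) // replaces aws.Boolean(true)

	fmt.Println(*cfg.Region, *cfg.MaxRetries, *size, *keep)
}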

View File

@ -70,7 +70,7 @@ func (a *Artifact) Destroy() error {
regionConfig := &aws.Config{ regionConfig := &aws.Config{
Credentials: a.Conn.Config.Credentials, Credentials: a.Conn.Config.Credentials,
Region: region, Region: aws.String(region),
} }
regionConn := ec2.New(regionConfig) regionConn := ec2.New(regionConfig)
@ -88,7 +88,7 @@ func (a *Artifact) Destroy() error {
if len(errors) == 1 { if len(errors) == 1 {
return errors[0] return errors[0]
} else { } else {
return &packer.MultiError{errors} return &packer.MultiError{Errors: errors}
} }
} }

View File

@ -32,20 +32,20 @@ func buildBlockDevices(b []BlockDevice) []*ec2.BlockDeviceMapping {
for _, blockDevice := range b { for _, blockDevice := range b {
ebsBlockDevice := &ec2.EBSBlockDevice{ ebsBlockDevice := &ec2.EBSBlockDevice{
VolumeType: aws.String(blockDevice.VolumeType), VolumeType: aws.String(blockDevice.VolumeType),
VolumeSize: aws.Long(blockDevice.VolumeSize), VolumeSize: aws.Int64(blockDevice.VolumeSize),
DeleteOnTermination: aws.Boolean(blockDevice.DeleteOnTermination), DeleteOnTermination: aws.Bool(blockDevice.DeleteOnTermination),
} }
// IOPS is only valid for SSD Volumes // IOPS is only valid for SSD Volumes
if blockDevice.VolumeType != "" && blockDevice.VolumeType != "standard" && blockDevice.VolumeType != "gp2" { if blockDevice.VolumeType != "" && blockDevice.VolumeType != "standard" && blockDevice.VolumeType != "gp2" {
ebsBlockDevice.IOPS = aws.Long(blockDevice.IOPS) ebsBlockDevice.IOPS = aws.Int64(blockDevice.IOPS)
} }
// You cannot specify Encrypted if you specify a Snapshot ID // You cannot specify Encrypted if you specify a Snapshot ID
if blockDevice.SnapshotId != "" { if blockDevice.SnapshotId != "" {
ebsBlockDevice.SnapshotID = aws.String(blockDevice.SnapshotId) ebsBlockDevice.SnapshotID = aws.String(blockDevice.SnapshotId)
} else if blockDevice.Encrypted { } else if blockDevice.Encrypted {
ebsBlockDevice.Encrypted = aws.Boolean(blockDevice.Encrypted) ebsBlockDevice.Encrypted = aws.Bool(blockDevice.Encrypted)
} }
mapping := &ec2.BlockDeviceMapping{ mapping := &ec2.BlockDeviceMapping{

View File

@ -5,7 +5,6 @@ import (
"testing" "testing"
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ec2"
) )
@ -29,8 +28,8 @@ func TestBlockDevice(t *testing.T) {
EBS: &ec2.EBSBlockDevice{ EBS: &ec2.EBSBlockDevice{
SnapshotID: aws.String("snap-1234"), SnapshotID: aws.String("snap-1234"),
VolumeType: aws.String("standard"), VolumeType: aws.String("standard"),
VolumeSize: aws.Long(8), VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Boolean(true), DeleteOnTermination: aws.Bool(true),
}, },
}, },
}, },
@ -45,8 +44,8 @@ func TestBlockDevice(t *testing.T) {
VirtualName: aws.String(""), VirtualName: aws.String(""),
EBS: &ec2.EBSBlockDevice{ EBS: &ec2.EBSBlockDevice{
VolumeType: aws.String(""), VolumeType: aws.String(""),
VolumeSize: aws.Long(8), VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Boolean(false), DeleteOnTermination: aws.Bool(false),
}, },
}, },
}, },
@ -64,9 +63,9 @@ func TestBlockDevice(t *testing.T) {
VirtualName: aws.String(""), VirtualName: aws.String(""),
EBS: &ec2.EBSBlockDevice{ EBS: &ec2.EBSBlockDevice{
VolumeType: aws.String("io1"), VolumeType: aws.String("io1"),
VolumeSize: aws.Long(8), VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Boolean(true), DeleteOnTermination: aws.Bool(true),
IOPS: aws.Long(1000), IOPS: aws.Int64(1000),
}, },
}, },
}, },
@ -93,13 +92,13 @@ func TestBlockDevice(t *testing.T) {
got := blockDevices.BuildAMIDevices() got := blockDevices.BuildAMIDevices()
if !reflect.DeepEqual(expected, got) { if !reflect.DeepEqual(expected, got) {
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s", t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s",
awsutil.StringValue(expected), awsutil.StringValue(got)) expected, got)
} }
if !reflect.DeepEqual(expected, blockDevices.BuildLaunchDevices()) { if !reflect.DeepEqual(expected, blockDevices.BuildLaunchDevices()) {
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s", t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s",
awsutil.StringValue(expected), expected,
awsutil.StringValue(blockDevices.BuildLaunchDevices())) blockDevices.BuildLaunchDevices())
} }
} }
} }

View File

@ -181,8 +181,6 @@ func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
time.Sleep(time.Duration(sleepSeconds) * time.Second) time.Sleep(time.Duration(sleepSeconds) * time.Second)
} }
return
} }
func isTransientNetworkError(err error) bool { func isTransientNetworkError(err error) bool {

View File

@ -5,6 +5,7 @@ import (
"sync" "sync"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ec2"
"github.com/mitchellh/multistep" "github.com/mitchellh/multistep"
@ -21,7 +22,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
ec2conn := state.Get("ec2").(*ec2.EC2) ec2conn := state.Get("ec2").(*ec2.EC2)
ui := state.Get("ui").(packer.Ui) ui := state.Get("ui").(packer.Ui)
amis := state.Get("amis").(map[string]string) amis := state.Get("amis").(map[string]string)
ami := amis[ec2conn.Config.Region] ami := amis[*ec2conn.Config.Region]
if len(s.Regions) == 0 { if len(s.Regions) == 0 {
return multistep.ActionContinue return multistep.ActionContinue
@ -33,7 +34,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
var wg sync.WaitGroup var wg sync.WaitGroup
errs := new(packer.MultiError) errs := new(packer.MultiError)
for _, region := range s.Regions { for _, region := range s.Regions {
if region == ec2conn.Config.Region { if region == *ec2conn.Config.Region {
ui.Message(fmt.Sprintf( ui.Message(fmt.Sprintf(
"Avoiding copying AMI to duplicate region %s", region)) "Avoiding copying AMI to duplicate region %s", region))
continue continue
@ -44,7 +45,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
go func(region string) { go func(region string) {
defer wg.Done() defer wg.Done()
id, err := amiRegionCopy(state, s.AccessConfig, s.Name, ami, region, ec2conn.Config.Region) id, err := amiRegionCopy(state, s.AccessConfig, s.Name, ami, region, *ec2conn.Config.Region)
lock.Lock() lock.Lock()
defer lock.Unlock() defer lock.Unlock()
@ -84,7 +85,7 @@ func amiRegionCopy(state multistep.StateBag, config *AccessConfig, name string,
if err != nil { if err != nil {
return "", err return "", err
} }
awsConfig.Region = target awsConfig.Region = aws.String(target)
regionconn := ec2.New(awsConfig) regionconn := ec2.New(awsConfig)
resp, err := regionconn.CopyImage(&ec2.CopyImageInput{ resp, err := regionconn.CopyImage(&ec2.CopyImageInput{

View File

@ -36,7 +36,7 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction {
regionconn := ec2.New(&aws.Config{ regionconn := ec2.New(&aws.Config{
Credentials: ec2conn.Config.Credentials, Credentials: ec2conn.Config.Credentials,
Region: region, Region: aws.String(region),
}) })
// Retrieve image list for given AMI // Retrieve image list for given AMI

View File

@ -26,11 +26,10 @@ type StepGetPassword struct {
func (s *StepGetPassword) Run(state multistep.StateBag) multistep.StepAction { func (s *StepGetPassword) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui) ui := state.Get("ui").(packer.Ui)
-	image := state.Get("source_image").(*ec2.Image)
-	// Skip if we're not Windows...
-	if image.Platform == nil || *image.Platform != "windows" {
-		log.Printf("[INFO] Not Windows, skipping get password...")
+	// Skip if we're not using winrm
+	if s.Comm.Type != "winrm" {
+		log.Printf("[INFO] Not using winrm communicator, skipping get password...")
return multistep.ActionContinue return multistep.ActionContinue
} }

View File

@ -90,7 +90,7 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc
ui.Say(fmt.Sprintf("Modifying attributes on AMI (%s)...", ami)) ui.Say(fmt.Sprintf("Modifying attributes on AMI (%s)...", ami))
regionconn := ec2.New(&aws.Config{ regionconn := ec2.New(&aws.Config{
Credentials: ec2conn.Config.Credentials, Credentials: ec2conn.Config.Credentials,
Region: region, Region: aws.String(region),
}) })
for name, input := range options { for name, input := range options {
ui.Message(fmt.Sprintf("Modifying: %s", name)) ui.Message(fmt.Sprintf("Modifying: %s", name))

View File

@ -31,7 +31,7 @@ type StepRunSourceInstance struct {
UserData string UserData string
UserDataFile string UserDataFile string
instance *ec2.Instance instanceId string
spotRequest *ec2.SpotInstanceRequest spotRequest *ec2.SpotInstanceRequest
} }
@ -141,8 +141,8 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
ImageID: &s.SourceAMI, ImageID: &s.SourceAMI,
InstanceType: &s.InstanceType, InstanceType: &s.InstanceType,
UserData: &userData, UserData: &userData,
MaxCount: aws.Long(1), MaxCount: aws.Int64(1),
MinCount: aws.Long(1), MinCount: aws.Int64(1),
IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile}, IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile},
BlockDeviceMappings: s.BlockDevices.BuildLaunchDevices(), BlockDeviceMappings: s.BlockDevices.BuildLaunchDevices(),
Placement: &ec2.Placement{AvailabilityZone: &s.AvailabilityZone}, Placement: &ec2.Placement{AvailabilityZone: &s.AvailabilityZone},
@ -151,11 +151,11 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
if s.SubnetId != "" && s.AssociatePublicIpAddress { if s.SubnetId != "" && s.AssociatePublicIpAddress {
runOpts.NetworkInterfaces = []*ec2.InstanceNetworkInterfaceSpecification{ runOpts.NetworkInterfaces = []*ec2.InstanceNetworkInterfaceSpecification{
&ec2.InstanceNetworkInterfaceSpecification{ &ec2.InstanceNetworkInterfaceSpecification{
DeviceIndex: aws.Long(0), DeviceIndex: aws.Int64(0),
AssociatePublicIPAddress: &s.AssociatePublicIpAddress, AssociatePublicIPAddress: &s.AssociatePublicIpAddress,
SubnetID: &s.SubnetId, SubnetID: &s.SubnetId,
Groups: securityGroupIds, Groups: securityGroupIds,
DeleteOnTermination: aws.Boolean(true), DeleteOnTermination: aws.Bool(true),
}, },
} }
} else { } else {
@ -185,11 +185,11 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile}, IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile},
NetworkInterfaces: []*ec2.InstanceNetworkInterfaceSpecification{ NetworkInterfaces: []*ec2.InstanceNetworkInterfaceSpecification{
&ec2.InstanceNetworkInterfaceSpecification{ &ec2.InstanceNetworkInterfaceSpecification{
DeviceIndex: aws.Long(0), DeviceIndex: aws.Int64(0),
AssociatePublicIPAddress: &s.AssociatePublicIpAddress, AssociatePublicIPAddress: &s.AssociatePublicIpAddress,
SubnetID: &s.SubnetId, SubnetID: &s.SubnetId,
Groups: securityGroupIds, Groups: securityGroupIds,
DeleteOnTermination: aws.Boolean(true), DeleteOnTermination: aws.Bool(true),
}, },
}, },
Placement: &ec2.SpotPlacement{ Placement: &ec2.SpotPlacement{
@ -235,6 +235,9 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
instanceId = *spotResp.SpotInstanceRequests[0].InstanceID instanceId = *spotResp.SpotInstanceRequests[0].InstanceID
} }
// Set the instance ID so that the cleanup works properly
s.instanceId = instanceId
ui.Message(fmt.Sprintf("Instance ID: %s", instanceId)) ui.Message(fmt.Sprintf("Instance ID: %s", instanceId))
ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId)) ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId))
stateChange := StateChangeConf{ stateChange := StateChangeConf{
@ -251,7 +254,7 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
return multistep.ActionHalt return multistep.ActionHalt
} }
s.instance = latestInstance.(*ec2.Instance) instance := latestInstance.(*ec2.Instance)
ec2Tags := make([]*ec2.Tag, 1, len(s.Tags)+1) ec2Tags := make([]*ec2.Tag, 1, len(s.Tags)+1)
ec2Tags[0] = &ec2.Tag{Key: aws.String("Name"), Value: aws.String("Packer Builder")} ec2Tags[0] = &ec2.Tag{Key: aws.String("Name"), Value: aws.String("Packer Builder")}
@ -261,7 +264,7 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
_, err = ec2conn.CreateTags(&ec2.CreateTagsInput{ _, err = ec2conn.CreateTags(&ec2.CreateTagsInput{
Tags: ec2Tags, Tags: ec2Tags,
Resources: []*string{s.instance.InstanceID}, Resources: []*string{instance.InstanceID},
}) })
if err != nil { if err != nil {
ui.Message( ui.Message(
@ -269,20 +272,20 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
} }
if s.Debug { if s.Debug {
if s.instance.PublicDNSName != nil && *s.instance.PublicDNSName != "" { if instance.PublicDNSName != nil && *instance.PublicDNSName != "" {
ui.Message(fmt.Sprintf("Public DNS: %s", *s.instance.PublicDNSName)) ui.Message(fmt.Sprintf("Public DNS: %s", *instance.PublicDNSName))
} }
if s.instance.PublicIPAddress != nil && *s.instance.PublicIPAddress != "" { if instance.PublicIPAddress != nil && *instance.PublicIPAddress != "" {
ui.Message(fmt.Sprintf("Public IP: %s", *s.instance.PublicIPAddress)) ui.Message(fmt.Sprintf("Public IP: %s", *instance.PublicIPAddress))
} }
if s.instance.PrivateIPAddress != nil && *s.instance.PrivateIPAddress != "" { if instance.PrivateIPAddress != nil && *instance.PrivateIPAddress != "" {
ui.Message(fmt.Sprintf("Private IP: %s", *s.instance.PrivateIPAddress)) ui.Message(fmt.Sprintf("Private IP: %s", *instance.PrivateIPAddress))
} }
} }
state.Put("instance", s.instance) state.Put("instance", instance)
return multistep.ActionContinue return multistep.ActionContinue
} }
@ -313,16 +316,15 @@ func (s *StepRunSourceInstance) Cleanup(state multistep.StateBag) {
} }
// Terminate the source instance if it exists // Terminate the source instance if it exists
if s.instance != nil { if s.instanceId != "" {
ui.Say("Terminating the source AWS instance...") ui.Say("Terminating the source AWS instance...")
if _, err := ec2conn.TerminateInstances(&ec2.TerminateInstancesInput{InstanceIDs: []*string{s.instance.InstanceID}}); err != nil { if _, err := ec2conn.TerminateInstances(&ec2.TerminateInstancesInput{InstanceIDs: []*string{&s.instanceId}}); err != nil {
ui.Error(fmt.Sprintf("Error terminating instance, may still be around: %s", err)) ui.Error(fmt.Sprintf("Error terminating instance, may still be around: %s", err))
return return
} }
stateChange := StateChangeConf{ stateChange := StateChangeConf{
Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"}, Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
Refresh: InstanceStateRefreshFunc(ec2conn, *s.instance.InstanceID), Refresh: InstanceStateRefreshFunc(ec2conn, s.instanceId),
Target: "terminated", Target: "terminated",
} }

View File

@ -59,8 +59,8 @@ func (s *StepSecurityGroup) Run(state multistep.StateBag) multistep.StepAction {
req := &ec2.AuthorizeSecurityGroupIngressInput{ req := &ec2.AuthorizeSecurityGroupIngressInput{
GroupID: groupResp.GroupID, GroupID: groupResp.GroupID,
IPProtocol: aws.String("tcp"), IPProtocol: aws.String("tcp"),
FromPort: aws.Long(int64(port)), FromPort: aws.Int64(int64(port)),
ToPort: aws.Long(int64(port)), ToPort: aws.Int64(int64(port)),
CIDRIP: aws.String("0.0.0.0/0"), CIDRIP: aws.String("0.0.0.0/0"),
} }

View File

@ -38,7 +38,7 @@ func (s *stepCreateAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state // Set the AMI ID in the state
ui.Message(fmt.Sprintf("AMI: %s", *createResp.ImageID)) ui.Message(fmt.Sprintf("AMI: %s", *createResp.ImageID))
amis := make(map[string]string) amis := make(map[string]string)
amis[ec2conn.Config.Region] = *createResp.ImageID amis[*ec2conn.Config.Region] = *createResp.ImageID
state.Put("amis", amis) state.Put("amis", amis)
// Wait for the image to become ready // Wait for the image to become ready

View File

@ -44,7 +44,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state // Set the AMI ID in the state
ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID)) ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID))
amis := make(map[string]string) amis := make(map[string]string)
amis[ec2conn.Config.Region] = *registerResp.ImageID amis[*ec2conn.Config.Region] = *registerResp.ImageID
state.Put("amis", amis) state.Put("amis", amis)
// Wait for the image to become ready // Wait for the image to become ready

View File

@ -10,11 +10,11 @@ import (
"os" "os"
"runtime" "runtime"
"code.google.com/p/gosshold/ssh"
"github.com/digitalocean/godo" "github.com/digitalocean/godo"
"github.com/mitchellh/multistep" "github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common/uuid" "github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"golang.org/x/crypto/ssh"
) )
type stepCreateSSHKey struct { type stepCreateSSHKey struct {

View File

@ -1,6 +1,7 @@
package docker package docker
import ( import (
"archive/tar"
"bytes" "bytes"
"fmt" "fmt"
"io" "io"
@ -24,8 +25,8 @@ type Communicator struct {
HostDir string HostDir string
ContainerDir string ContainerDir string
Version *version.Version Version *version.Version
Config *Config Config *Config
lock sync.Mutex lock sync.Mutex
} }
func (c *Communicator) Start(remote *packer.RemoteCmd) error { func (c *Communicator) Start(remote *packer.RemoteCmd) error {
@ -194,8 +195,42 @@ func (c *Communicator) UploadDir(dst string, src string, exclude []string) error
return nil return nil
} }
// Download pulls a file out of a container using `docker cp`. We have a source
// path and want to write to an io.Writer, not a file. We use - to make docker
// cp to write to stdout, and then copy the stream to our destination io.Writer.
func (c *Communicator) Download(src string, dst io.Writer) error { func (c *Communicator) Download(src string, dst io.Writer) error {
panic("not implemented") log.Printf("Downloading file from container: %s:%s", c.ContainerId, src)
localCmd := exec.Command("docker", "cp", fmt.Sprintf("%s:%s", c.ContainerId, src), "-")
pipe, err := localCmd.StdoutPipe()
if err != nil {
return fmt.Errorf("Failed to open pipe: %s", err)
}
if err = localCmd.Start(); err != nil {
return fmt.Errorf("Failed to start download: %s", err)
}
// When you use - to send docker cp to stdout it is streamed as a tar; this
// enables it to work with directories. We don't actually support
// directories in Download() but we still need to handle the tar format.
archive := tar.NewReader(pipe)
_, err = archive.Next()
if err != nil {
return fmt.Errorf("Failed to read header from tar stream: %s", err)
}
numBytes, err := io.Copy(dst, archive)
if err != nil {
return fmt.Errorf("Failed to pipe download: %s", err)
}
log.Printf("Copied %d bytes for %s", numBytes, src)
if err = localCmd.Wait(); err != nil {
return fmt.Errorf("Failed to download '%s' from container: %s", src, err)
}
return nil
} }
// canExec tells us whether `docker exec` is supported // canExec tells us whether `docker exec` is supported
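The new Download above shells out to `docker cp <container>:<src> -`, which emits a tar stream on stdout; the method reads the single tar entry and copies it into the destination writer. A hypothetical caller, just to show the shape of the API — the surrounding function and the container path are assumptions, not part of this change:

package example

import (
	"bytes"
	"fmt"
	"log"

	"github.com/mitchellh/packer/packer"
)

// printHostname is an illustrative sketch: any io.Writer works as the
// destination for Download, here an in-memory buffer.
func printHostname(comm packer.Communicator) {
	var buf bytes.Buffer
	if err := comm.Download("/etc/hostname", &buf); err != nil {
		log.Fatalf("download failed: %s", err)
	}
	fmt.Printf("downloaded %d bytes: %q\n", buf.Len(), buf.String())
}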

View File

@ -1,10 +1,129 @@
package docker package docker
import ( import (
"github.com/mitchellh/packer/packer" "crypto/sha256"
"io/ioutil"
"os"
"os/exec"
"runtime"
"strings"
"testing" "testing"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/provisioner/file"
"github.com/mitchellh/packer/template"
) )
func TestCommunicator_impl(t *testing.T) { func TestCommunicator_impl(t *testing.T) {
var _ packer.Communicator = new(Communicator) var _ packer.Communicator = new(Communicator)
} }
func TestUploadDownload(t *testing.T) {
ui := packer.TestUi(t)
cache := &packer.FileCache{CacheDir: os.TempDir()}
tpl, err := template.Parse(strings.NewReader(dockerBuilderConfig))
if err != nil {
t.Fatalf("Unable to parse config: %s", err)
}
// Make sure we only run this on linux hosts
if os.Getenv("PACKER_ACC") == "" {
t.Skip("This test is only run with PACKER_ACC=1")
}
if runtime.GOOS != "linux" {
t.Skip("This test is only supported on linux")
}
cmd := exec.Command("docker", "-v")
cmd.Run()
if !cmd.ProcessState.Success() {
t.Error("docker command not found; please make sure docker is installed")
}
// Setup the builder
builder := &Builder{}
warnings, err := builder.Prepare(tpl.Builders["docker"].Config)
if err != nil {
t.Fatalf("Error preparing configuration %s", err)
}
if len(warnings) > 0 {
t.Fatal("Encountered configuration warnings; aborting")
}
// Setup the provisioners
upload := &file.Provisioner{}
err = upload.Prepare(tpl.Provisioners[0].Config)
if err != nil {
t.Fatalf("Error preparing upload: %s", err)
}
download := &file.Provisioner{}
err = download.Prepare(tpl.Provisioners[1].Config)
if err != nil {
t.Fatalf("Error preparing download: %s", err)
}
// Preemptive cleanup. Honestly I don't know why you would want to get rid
// of my strawberry cake. It's so tasty! Do you not like cake? Are you a
// cake-hater? Or are you keeping all the cake all for yourself? So selfish!
defer os.Remove("my-strawberry-cake")
// Add hooks so the provisioners run during the build
hooks := map[string][]packer.Hook{}
hooks[packer.HookProvision] = []packer.Hook{
&packer.ProvisionHook{
Provisioners: []packer.Provisioner{
upload,
download,
},
},
}
hook := &packer.DispatchHook{Mapping: hooks}
// Run things
artifact, err := builder.Run(ui, hook, cache)
if err != nil {
t.Fatalf("Error running build %s", err)
}
// Preemptive cleanup
defer artifact.Destroy()
// Verify that the thing we downloaded is the same thing we sent up.
// Complain loudly if it isn't.
inputFile, err := ioutil.ReadFile("test-fixtures/onecakes/strawberry")
if err != nil {
t.Fatalf("Unable to read input file: %s", err)
}
outputFile, err := ioutil.ReadFile("my-strawberry-cake")
if err != nil {
t.Fatalf("Unable to read output file: %s", err)
}
if sha256.Sum256(inputFile) != sha256.Sum256(outputFile) {
t.Fatalf("Input and output files do not match\n"+
"Input:\n%s\nOutput:\n%s\n", inputFile, outputFile)
}
}
const dockerBuilderConfig = `
{
"builders": [
{
"type": "docker",
"image": "alpine",
"export_path": "alpine.tar",
"run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"]
}
],
"provisioners": [
{
"type": "file",
"source": "test-fixtures/onecakes/strawberry",
"destination": "/strawberry-cake"
},
{
"type": "file",
"source": "/strawberry-cake",
"destination": "my-strawberry-cake",
"direction": "download"
}
]
}
`

View File

@ -26,7 +26,7 @@ func (s *StepConnectDocker) Run(state multistep.StateBag) multistep.StepAction {
HostDir: tempDir, HostDir: tempDir,
ContainerDir: "/packer-files", ContainerDir: "/packer-files",
Version: version, Version: version,
Config: config, Config: config,
} }
state.Put("communicator", comm) state.Put("communicator", comm)

View File

@ -0,0 +1 @@
chocolate!

View File

@ -0,0 +1 @@
vanilla!

View File

@ -0,0 +1 @@
strawberry!

View File

@ -1,15 +1,16 @@
package googlecompute package googlecompute
import ( import (
"code.google.com/p/gosshold/ssh"
"crypto/rand" "crypto/rand"
"crypto/rsa" "crypto/rsa"
"crypto/x509" "crypto/x509"
"encoding/pem" "encoding/pem"
"fmt" "fmt"
"os"
"github.com/mitchellh/multistep" "github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"os" "golang.org/x/crypto/ssh"
) )
// StepCreateSSHKey represents a Packer build step that generates SSH key pairs. // StepCreateSSHKey represents a Packer build step that generates SSH key pairs.

View File

@ -75,8 +75,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
Flavor: b.config.Flavor, Flavor: b.config.Flavor,
}, },
&StepKeyPair{ &StepKeyPair{
Debug: b.config.PackerDebug, Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("os_%s.pem", b.config.PackerBuildName), DebugKeyPath: fmt.Sprintf("os_%s.pem", b.config.PackerBuildName),
KeyPairName: b.config.SSHKeyPairName,
PrivateKeyFile: b.config.RunConfig.Comm.SSHPrivateKey,
}, },
&StepRunSourceServer{ &StepRunSourceServer{
Name: b.config.ImageName, Name: b.config.ImageName,

View File

@ -8,7 +8,8 @@ import (
// ImageConfig is for common configuration related to creating Images. // ImageConfig is for common configuration related to creating Images.
type ImageConfig struct { type ImageConfig struct {
ImageName string `mapstructure:"image_name"` ImageName string `mapstructure:"image_name"`
ImageMetadata map[string]string `mapstructure:"metadata"`
} }
func (c *ImageConfig) Prepare(ctx *interpolate.Context) []error { func (c *ImageConfig) Prepare(ctx *interpolate.Context) []error {

View File

@ -10,8 +10,9 @@ import (
// RunConfig contains configuration for running an instance from a source // RunConfig contains configuration for running an instance from a source
// image and details on how to access that launched image. // image and details on how to access that launched image.
type RunConfig struct { type RunConfig struct {
Comm communicator.Config `mapstructure:",squash"` Comm communicator.Config `mapstructure:",squash"`
SSHInterface string `mapstructure:"ssh_interface"` SSHKeyPairName string `mapstructure:"ssh_keypair_name"`
SSHInterface string `mapstructure:"ssh_interface"`
SourceImage string `mapstructure:"source_image"` SourceImage string `mapstructure:"source_image"`
Flavor string `mapstructure:"flavor"` Flavor string `mapstructure:"flavor"`

View File

@ -92,6 +92,4 @@ func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
log.Printf("Waiting for state to become: %s currently %s (%d%%)", conf.Target, currentState, currentProgress) log.Printf("Waiting for state to become: %s currently %s (%d%%)", conf.Target, currentState, currentProgress)
time.Sleep(2 * time.Second) time.Sleep(2 * time.Second)
} }
return
} }

View File

@ -23,6 +23,7 @@ func CommHost(
// If we have a specific interface, try that // If we have a specific interface, try that
if sshinterface != "" { if sshinterface != "" {
if addr := sshAddrFromPool(s, sshinterface); addr != "" { if addr := sshAddrFromPool(s, sshinterface); addr != "" {
log.Printf("[DEBUG] Using IP address %s from specified interface %s for SSH", addr, sshinterface)
return addr, nil return addr, nil
} }
} }
@ -30,15 +31,18 @@ func CommHost(
// If we have a floating IP, use that // If we have a floating IP, use that
ip := state.Get("access_ip").(*floatingip.FloatingIP) ip := state.Get("access_ip").(*floatingip.FloatingIP)
if ip != nil && ip.IP != "" { if ip != nil && ip.IP != "" {
log.Printf("[DEBUG] Using floating IP %s for SSH", ip.IP)
return ip.IP, nil return ip.IP, nil
} }
if s.AccessIPv4 != "" { if s.AccessIPv4 != "" {
log.Printf("[DEBUG] Using AccessIPv4 %s for SSH", s.AccessIPv4)
return s.AccessIPv4, nil return s.AccessIPv4, nil
} }
// Try to get it from the requested interface // Try to get it from the requested interface
if addr := sshAddrFromPool(s, sshinterface); addr != "" { if addr := sshAddrFromPool(s, sshinterface); addr != "" {
log.Printf("[DEBUG] Using IP address %s for SSH", addr)
return addr, nil return addr, nil
} }
@ -101,11 +105,15 @@ func sshAddrFromPool(s *servers.Server, desired string) string {
if address["OS-EXT-IPS:type"] == "floating" { if address["OS-EXT-IPS:type"] == "floating" {
addr = address["addr"].(string) addr = address["addr"].(string)
} else { } else {
if address["version"].(float64) == 4 { if address["version"].(float64) == 6 {
addr = fmt.Sprintf("[%s]", address["addr"].(string))
} else {
addr = address["addr"].(string) addr = address["addr"].(string)
} }
} }
if addr != "" { if addr != "" {
log.Printf("[DEBUG] Detected address: %s", addr)
return addr return addr
} }
} }
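The version check above was flipped so that IPv6 addresses come back wrapped in brackets, keeping the later host:port string used for the SSH dial parseable (the GH-2450 fix from the changelog). A tiny standalone illustration of why the brackets matter (the addresses are made up):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Without brackets, appending ":22" to an IPv6 address is ambiguous:
	// "2001:db8::1:22" could be read as part of the address itself.
	fmt.Println(net.JoinHostPort("2001:db8::1", "22")) // [2001:db8::1]:22

	// The step does the equivalent by hand; the port is appended later.
	fmt.Printf("[%s]\n", "2001:db8::1") // [2001:db8::1]
}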

View File

@ -30,7 +30,8 @@ func (s *stepCreateImage) Run(state multistep.StateBag) multistep.StepAction {
// Create the image // Create the image
ui.Say(fmt.Sprintf("Creating the image: %s", config.ImageName)) ui.Say(fmt.Sprintf("Creating the image: %s", config.ImageName))
imageId, err := servers.CreateImage(client, server.ID, servers.CreateImageOpts{ imageId, err := servers.CreateImage(client, server.ID, servers.CreateImageOpts{
Name: config.ImageName, Name: config.ImageName,
Metadata: config.ImageMetadata,
}).ExtractImageID() }).ExtractImageID()
if err != nil { if err != nil {
err := fmt.Errorf("Error creating image: %s", err) err := fmt.Errorf("Error creating image: %s", err)

View File

@ -2,6 +2,7 @@ package openstack
import ( import (
"fmt" "fmt"
"io/ioutil"
"os" "os"
"runtime" "runtime"
@ -12,12 +13,29 @@ import (
) )
type StepKeyPair struct { type StepKeyPair struct {
Debug bool Debug bool
DebugKeyPath string DebugKeyPath string
keyName string KeyPairName string
PrivateKeyFile string
keyName string
} }
func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction { func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
if s.PrivateKeyFile != "" {
privateKeyBytes, err := ioutil.ReadFile(s.PrivateKeyFile)
if err != nil {
state.Put("error", fmt.Errorf(
"Error loading configured private key file: %s", err))
return multistep.ActionHalt
}
state.Put("keyPair", s.KeyPairName)
state.Put("privateKey", string(privateKeyBytes))
return multistep.ActionContinue
}
config := state.Get("config").(Config) config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui) ui := state.Get("ui").(packer.Ui)
@ -81,6 +99,11 @@ func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
} }
func (s *StepKeyPair) Cleanup(state multistep.StateBag) { func (s *StepKeyPair) Cleanup(state multistep.StateBag) {
// If we used an SSH private key file, do not go about deleting
// keypairs
if s.PrivateKeyFile != "" {
return
}
// If no key name is set, then we never created it, so just return // If no key name is set, then we never created it, so just return
if s.keyName == "" { if s.keyName == "" {
return return
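With KeyPairName and PrivateKeyFile set, the step above skips generating (and later deleting) a keypair and simply feeds the existing key into the state bag. A hypothetical template fragment for the new behavior — `ssh_keypair_name`, `source_image`, `flavor`, and `image_name` come from this diff, while `ssh_private_key_file` is assumed to be the common communicator option that populates RunConfig.Comm.SSHPrivateKey; the concrete values are placeholders:

const openstackExistingKeyExample = `
{
  "builders": [
    {
      "type": "openstack",
      "source_image": "MY-IMAGE-ID",
      "flavor": "MY-FLAVOR",
      "image_name": "packer-example",
      "ssh_keypair_name": "my-existing-keypair",
      "ssh_private_key_file": "/home/me/.ssh/id_rsa"
    }
  ]
}
`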

View File

@ -39,6 +39,7 @@ func (s *StepWaitForRackConnect) Run(state multistep.StateBag) multistep.StepAct
} }
if server.Metadata["rackconnect_automation_status"] == "DEPLOYED" { if server.Metadata["rackconnect_automation_status"] == "DEPLOYED" {
state.Put("server", server)
break break
} }

View File

@ -0,0 +1,86 @@
package common
import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"testing"
)
func TestStepUploadParallelsTools_impl(t *testing.T) {
var _ multistep.Step = new(StepUploadParallelsTools)
}
func TestStepUploadParallelsTools(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "upload"
step.ParallelsToolsGuestPath = "/tmp/prl-lin.iso"
step.ParallelsToolsFlavor = "lin"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadPath != "/tmp/prl-lin.iso" {
t.Fatalf("bad: %#v", comm.UploadPath)
}
}
func TestStepUploadParallelsTools_interpolate(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "upload"
step.ParallelsToolsGuestPath = "/tmp/prl-{{ .Flavor }}.iso"
step.ParallelsToolsFlavor = "win"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadPath != "/tmp/prl-win.iso" {
t.Fatalf("bad: %#v", comm.UploadPath)
}
}
func TestStepUploadParallelsTools_attach(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "attach"
step.ParallelsToolsGuestPath = "/tmp/prl-lin.iso"
step.ParallelsToolsFlavor = "lin"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadCalled {
t.Fatal("bad")
}
}

View File

@ -65,7 +65,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
Exclude: []string{ Exclude: []string{
"boot_command", "boot_command",
"prlctl", "prlctl",
"parallel_tools_guest_path", "parallels_tools_guest_path",
}, },
}, },
}, raws...) }, raws...)

View File

@ -41,7 +41,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
Exclude: []string{ Exclude: []string{
"boot_command", "boot_command",
"prlctl", "prlctl",
"parallel_tools_guest_path", "parallels_tools_guest_path",
}, },
}, },
}, raws...) }, raws...)

View File

@ -52,7 +52,7 @@ func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
} }
defer nc.Close() defer nc.Close()
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: true}) c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: false})
if err != nil { if err != nil {
err := fmt.Errorf("Error handshaking with VNC: %s", err) err := fmt.Errorf("Error handshaking with VNC: %s", err)
state.Put("error", err) state.Put("error", err)
@ -177,7 +177,9 @@ func vncSendString(c *vnc.ClientConn, original string) {
} }
c.KeyEvent(keyCode, true) c.KeyEvent(keyCode, true)
time.Sleep(time.Second/10)
c.KeyEvent(keyCode, false) c.KeyEvent(keyCode, false)
time.Sleep(time.Second/10)
if keyShift { if keyShift {
c.KeyEvent(KeyLeftShift, false) c.KeyEvent(KeyLeftShift, false)

builder/vmware/common/step_clean_vmx.go Normal file → Executable file
View File

@ -2,11 +2,12 @@ package common
import ( import (
"fmt" "fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log" "log"
"regexp" "regexp"
"strings" "strings"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
) )
// This step cleans up the VMX by removing or changing this prior to // This step cleans up the VMX by removing or changing this prior to

builder/vmware/common/step_clean_vmx_test.go Normal file → Executable file
View File

builder/vmware/common/step_configure_vmx.go Normal file → Executable file
View File

View File

@ -57,7 +57,7 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
} }
defer nc.Close() defer nc.Close()
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: true}) c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: false})
if err != nil { if err != nil {
err := fmt.Errorf("Error handshaking with VNC: %s", err) err := fmt.Errorf("Error handshaking with VNC: %s", err)
state.Put("error", err) state.Put("error", err)

builder/vmware/common/vmx.go Normal file → Executable file
View File

builder/vmware/vmx/step_clone_vmx.go Normal file → Executable file
View File

View File

@ -64,7 +64,7 @@ type TestT interface {
// Test performs an acceptance test on a backend with the given test case. // Test performs an acceptance test on a backend with the given test case.
// //
// Tests are not run unless an environmental variable "TF_ACC" is // Tests are not run unless an environmental variable "PACKER_ACC" is
// set to some non-empty value. This is to avoid test cases surprising // set to some non-empty value. This is to avoid test cases surprising
// a user by creating real resources. // a user by creating real resources.
// //

View File

@ -53,6 +53,7 @@ func (s *StepConnect) Run(state multistep.StateBag) multistep.StepAction {
Config: s.Config, Config: s.Config,
Host: s.Host, Host: s.Host,
WinRMConfig: s.WinRMConfig, WinRMConfig: s.WinRMConfig,
WinRMPort: s.SSHPort,
}, },
} }
for k, v := range s.CustomConnect { for k, v := range s.CustomConnect {

View File

@ -25,6 +25,7 @@ type StepConnectWinRM struct {
Config *Config Config *Config
Host func(multistep.StateBag) (string, error) Host func(multistep.StateBag) (string, error)
WinRMConfig func(multistep.StateBag) (*WinRMConfig, error) WinRMConfig func(multistep.StateBag) (*WinRMConfig, error)
WinRMPort func(multistep.StateBag) (int, error)
} }
func (s *StepConnectWinRM) Run(state multistep.StateBag) multistep.StepAction { func (s *StepConnectWinRM) Run(state multistep.StateBag) multistep.StepAction {
@ -96,6 +97,13 @@ func (s *StepConnectWinRM) waitForWinRM(state multistep.StateBag, cancel <-chan
continue continue
} }
port := s.Config.WinRMPort port := s.Config.WinRMPort
if s.WinRMPort != nil {
port, err = s.WinRMPort(state)
if err != nil {
log.Printf("[DEBUG] Error getting WinRM port: %s", err)
continue
}
}
user := s.Config.WinRMUser user := s.Config.WinRMUser
password := s.Config.WinRMPassword password := s.Config.WinRMPassword

View File

@ -1,13 +1,13 @@
package rpc package rpc
import ( import (
"fmt"
"github.com/hashicorp/go-msgpack/codec"
"github.com/mitchellh/packer/packer"
"io" "io"
"log" "log"
"net/rpc" "net/rpc"
"sync/atomic" "sync/atomic"
"github.com/hashicorp/go-msgpack/codec"
"github.com/mitchellh/packer/packer"
) )
var endpointId uint64 var endpointId uint64
@ -149,7 +149,7 @@ func (s *Server) Serve() {
func registerComponent(server *rpc.Server, name string, rcvr interface{}, id bool) string { func registerComponent(server *rpc.Server, name string, rcvr interface{}, id bool) string {
endpoint := name endpoint := name
if id { if id {
fmt.Sprintf("%s.%d", endpoint, atomic.AddUint64(&endpointId, 1)) log.Printf("%s.%d", endpoint, atomic.AddUint64(&endpointId, 1))
} }
server.RegisterName(endpoint, rcvr) server.RegisterName(endpoint, rcvr)

View File

@ -0,0 +1,15 @@
package main
import (
"github.com/mitchellh/packer/packer/plugin"
"github.com/mitchellh/packer/post-processor/artifice"
)
func main() {
server, err := plugin.Server()
if err != nil {
panic(err)
}
server.RegisterPostProcessor(new(artifice.PostProcessor))
server.Serve()
}

View File

@ -0,0 +1,56 @@
package artifice
import (
"fmt"
"os"
"strings"
)
const BuilderId = "packer.post-processor.artifice"
type Artifact struct {
files []string
}
func NewArtifact(files []string) (*Artifact, error) {
for _, f := range files {
if _, err := os.Stat(f); err != nil {
return nil, err
}
}
artifact := &Artifact{
files: files,
}
return artifact, nil
}
func (a *Artifact) BuilderId() string {
return BuilderId
}
func (a *Artifact) Files() []string {
return a.files
}
func (a *Artifact) Id() string {
return ""
}
func (a *Artifact) String() string {
files := strings.Join(a.files, ", ")
return fmt.Sprintf("Created artifact from files: %s", files)
}
func (a *Artifact) State(name string) interface{} {
return nil
}
func (a *Artifact) Destroy() error {
for _, f := range a.files {
err := os.RemoveAll(f)
if err != nil {
return err
}
}
return nil
}

View File

@ -0,0 +1,60 @@
package artifice
import (
"fmt"
"strings"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/helper/config"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/template/interpolate"
)
// The artifact-override post-processor allows you to specify arbitrary files as
// artifacts. These will override any other artifacts created by the builder.
// This allows you to use a builder and provisioner to create some file, such as
// a compiled binary or tarball, extract it from the builder (VM or container)
// and then save that binary or tarball and throw away the builder.
type Config struct {
common.PackerConfig `mapstructure:",squash"`
Files []string `mapstructure:"files"`
Keep bool `mapstructure:"keep_input_artifact"`
ctx interpolate.Context
}
type PostProcessor struct {
config Config
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
err := config.Decode(&p.config, &config.DecodeOpts{
Interpolate: true,
InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{},
},
}, raws...)
if err != nil {
return err
}
if len(p.config.Files) == 0 {
return fmt.Errorf("No files specified in artifice configuration")
}
return nil
}
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
if len(artifact.Files()) > 0 {
ui.Say(fmt.Sprintf("Discarding artifact files: %s", strings.Join(artifact.Files(), ", ")))
}
artifact, err := NewArtifact(p.config.Files)
ui.Say(fmt.Sprintf("Using these artifact files: %s", strings.Join(artifact.Files(), ", ")))
return artifact, true, err
}
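The comment at the top of this file describes the workflow: a builder plus provisioners produce some file, and artifice swaps that file in as the build's artifact. A hypothetical template sketch in the same style as the test configs elsewhere in this merge — only the "artifice" block's "type" and "files" keys come from the code above, the builder and provisioner steps are placeholders:

const artificeExampleConfig = `
{
  "builders": [{"type": "docker", "image": "alpine", "export_path": "alpine.tar"}],
  "provisioners": [
    {"type": "shell", "inline": ["tar -C / -czf /tmp/app.tar.gz /etc"]},
    {"type": "file", "source": "/tmp/app.tar.gz", "destination": "app.tar.gz", "direction": "download"}
  ],
  "post-processors": [
    {"type": "artifice", "files": ["app.tar.gz"]}
  ]
}
`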

View File

@ -0,0 +1 @@
package artifice

View File

@ -55,9 +55,12 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
Interpolate: true, Interpolate: true,
InterpolateContext: &p.config.ctx, InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{ InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{}, Exclude: []string{"output"},
}, },
}, raws...) }, raws...)
if err != nil {
return err
}
errs := new(packer.MultiError) errs := new(packer.MultiError)
@ -67,16 +70,7 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
} }
if p.config.OutputPath == "" { if p.config.OutputPath == "" {
p.config.OutputPath = "packer_{{.BuildName}}_{{.Provider}}" p.config.OutputPath = "packer_{{.BuildName}}_{{.BuilderType}}"
}
if err = interpolate.Validate(p.config.OutputPath, &p.config.ctx); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error parsing target template: %s", err))
}
templates := map[string]*string{
"output": &p.config.OutputPath,
} }
if p.config.CompressionLevel > pgzip.BestCompression { if p.config.CompressionLevel > pgzip.BestCompression {
@ -89,17 +83,9 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
p.config.CompressionLevel = pgzip.DefaultCompression p.config.CompressionLevel = pgzip.DefaultCompression
} }
-	for key, ptr := range templates {
-		if *ptr == "" {
-			errs = packer.MultiErrorAppend(
-				errs, fmt.Errorf("%s must be set", key))
-		}
-		*ptr, err = interpolate.Render(p.config.OutputPath, &p.config.ctx)
-		if err != nil {
-			errs = packer.MultiErrorAppend(
-				errs, fmt.Errorf("Error processing %s: %s", key, err))
-		}
+	if err = interpolate.Validate(p.config.OutputPath, &p.config.ctx); err != nil {
+		errs = packer.MultiErrorAppend(
+			errs, fmt.Errorf("Error parsing target template: %s", err))
	}
p.config.detectFromFilename() p.config.detectFromFilename()
@ -113,7 +99,19 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) { func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
target := p.config.OutputPath // These are extra variables that will be made available for interpolation.
p.config.ctx.Data = map[string]string{
"BuildName": p.config.PackerBuildName,
"BuilderType": p.config.PackerBuilderType,
}
target, err := interpolate.Render(p.config.OutputPath, &p.config.ctx)
if err != nil {
return nil, false, fmt.Errorf("Error interpolating output value: %s", err)
} else {
fmt.Println(target)
}
keep := p.config.KeepInputArtifact keep := p.config.KeepInputArtifact
newArtifact := &Artifact{Path: target} newArtifact := &Artifact{Path: target}

View File

@ -150,6 +150,37 @@ func TestCompressOptions(t *testing.T) {
} }
} }
func TestCompressInterpolation(t *testing.T) {
const config = `
{
"post-processors": [
{
"type": "compress",
"output": "{{ build_name}}-{{ .BuildName }}-{{.BuilderType}}.gz"
}
]
}
`
artifact := testArchive(t, config)
defer artifact.Destroy()
// You can interpolate using the .BuildName variable or build_name global
// function. We'll check both.
filename := "chocolate-vanilla-file.gz"
archive, err := os.Open(filename)
if err != nil {
t.Fatalf("Unable to read %s: %s", filename, err)
}
gzipReader, _ := gzip.NewReader(archive)
data, _ := ioutil.ReadAll(gzipReader)
if string(data) != expectedFileContents {
t.Errorf("Expected:\n%s\nFound:\n%s\n", expectedFileContents, data)
}
}
// Test Helpers // Test Helpers
func setup(t *testing.T) (packer.Ui, packer.Artifact, error) { func setup(t *testing.T) (packer.Ui, packer.Artifact, error) {
@ -201,6 +232,13 @@ func testArchive(t *testing.T, config string) packer.Artifact {
compressor := PostProcessor{} compressor := PostProcessor{}
compressor.Configure(tpl.PostProcessors[0][0].Config) compressor.Configure(tpl.PostProcessors[0][0].Config)
// I get the feeling these should be automatically available somewhere, but
// some of the post-processors construct this manually.
compressor.config.ctx.BuildName = "chocolate"
compressor.config.PackerBuildName = "vanilla"
compressor.config.PackerBuilderType = "file"
artifactOut, _, err := compressor.PostProcess(ui, artifact) artifactOut, _, err := compressor.PostProcess(ui, artifact)
if err != nil { if err != nil {
t.Fatalf("Failed to compress artifact: %s", err) t.Fatalf("Failed to compress artifact: %s", err)

View File

@ -3,14 +3,23 @@ package vagrant
import ( import (
"archive/tar" "archive/tar"
"compress/flate" "compress/flate"
"compress/gzip"
"encoding/json" "encoding/json"
"fmt" "fmt"
"github.com/mitchellh/packer/packer"
"io" "io"
"log" "log"
"os" "os"
"path/filepath" "path/filepath"
"runtime"
"github.com/klauspost/pgzip"
"github.com/mitchellh/packer/packer"
)
var (
// ErrInvalidCompressionLevel is returned when the compression level passed
// to gzip is not in the expected range. See compress/flate for details.
ErrInvalidCompressionLevel = fmt.Errorf(
"Invalid compression level. Expected an integer from -1 to 9.")
) )
// Copies a file by copying the contents of the file to another place. // Copies a file by copying the contents of the file to another place.
@ -60,10 +69,10 @@ func DirToBox(dst, dir string, ui packer.Ui, level int) error {
} }
defer dstF.Close() defer dstF.Close()
var dstWriter io.Writer = dstF var dstWriter io.WriteCloser = dstF
if level != flate.NoCompression { if level != flate.NoCompression {
log.Printf("Compressing with gzip compression level: %d", level) log.Printf("Compressing with gzip compression level: %d", level)
gzipWriter, err := gzip.NewWriterLevel(dstWriter, level) gzipWriter, err := makePgzipWriter(dstWriter, level)
if err != nil { if err != nil {
return err return err
} }
@ -143,3 +152,12 @@ func WriteMetadata(dir string, contents interface{}) error {
return nil return nil
} }
func makePgzipWriter(output io.WriteCloser, compressionLevel int) (io.WriteCloser, error) {
gzipWriter, err := pgzip.NewWriterLevel(output, compressionLevel)
if err != nil {
return nil, ErrInvalidCompressionLevel
}
gzipWriter.SetConcurrency(500000, runtime.GOMAXPROCS(-1))
return gzipWriter, nil
}
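makePgzipWriter above is what gives the vagrant post-processor its parallel gzip (the GH-2590 changelog entry): klauspost/pgzip with 500 KB blocks and one worker per CPU. A small standalone sketch of the same pattern, with a made-up output file and payload:

package main

import (
	"log"
	"os"
	"runtime"

	"github.com/klauspost/pgzip"
)

func main() {
	f, err := os.Create("example.gz") // made-up output path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := pgzip.NewWriterLevel(f, pgzip.BestSpeed)
	if err != nil {
		log.Fatal(err) // returned when the level is out of range
	}
	// Same tuning as DirToBox: 500 KB blocks, one goroutine per CPU.
	gz.SetConcurrency(500000, runtime.GOMAXPROCS(-1))

	if _, err := gz.Write([]byte("hello, parallel gzip\n")); err != nil {
		log.Fatal(err)
	}
	if err := gz.Close(); err != nil {
		log.Fatal(err)
	}
}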

View File

@ -287,10 +287,10 @@ func (p *Provisioner) createKnifeConfig(ui packer.Ui, comm packer.Communicator,
ctx := p.config.ctx ctx := p.config.ctx
ctx.Data = &ConfigTemplate{ ctx.Data = &ConfigTemplate{
NodeName: nodeName, NodeName: nodeName,
ServerUrl: serverUrl, ServerUrl: serverUrl,
ClientKey: clientKey, ClientKey: clientKey,
SslVerifyMode: sslVerifyMode, SslVerifyMode: sslVerifyMode,
} }
configString, err := interpolate.Render(tpl, &ctx) configString, err := interpolate.Render(tpl, &ctx)
if err != nil { if err != nil {

View File

@ -399,7 +399,7 @@ func (p *Provisioner) createCommandText() (command string, err error) {
Vars: flattenedEnvVars, Vars: flattenedEnvVars,
Path: p.config.RemotePath, Path: p.config.RemotePath,
} }
command, err = interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) command, err = interpolate.Render(p.config.ElevatedExecuteCommand, &p.config.ctx)
if err != nil { if err != nil {
return "", fmt.Errorf("Error processing command: %s", err) return "", fmt.Errorf("Error processing command: %s", err)
} }

View File

@ -15,6 +15,8 @@ import (
) )
const DefaultTempConfigDir = "/tmp/salt" const DefaultTempConfigDir = "/tmp/salt"
const DefaultStateTreeDir = "/srv/salt"
const DefaultPillarRootDir = "/srv/pillar"
type Config struct { type Config struct {
common.PackerConfig `mapstructure:",squash"` common.PackerConfig `mapstructure:",squash"`
@ -34,6 +36,12 @@ type Config struct {
// Local path to the salt pillar roots // Local path to the salt pillar roots
LocalPillarRoots string `mapstructure:"local_pillar_roots"` LocalPillarRoots string `mapstructure:"local_pillar_roots"`
// Remote path to the salt state tree
RemoteStateTree string `mapstructure:"remote_state_tree"`
// Remote path to the salt pillar roots
RemotePillarRoots string `mapstructure:"remote_pillar_roots"`
// Where files will be copied before moving to the /srv/salt directory // Where files will be copied before moving to the /srv/salt directory
TempConfigDir string `mapstructure:"temp_config_dir"` TempConfigDir string `mapstructure:"temp_config_dir"`
@ -60,6 +68,14 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
p.config.TempConfigDir = DefaultTempConfigDir p.config.TempConfigDir = DefaultTempConfigDir
} }
if p.config.RemoteStateTree == "" {
p.config.RemoteStateTree = DefaultStateTreeDir
}
if p.config.RemotePillarRoots == "" {
p.config.RemotePillarRoots = DefaultPillarRootDir
}
var errs *packer.MultiError var errs *packer.MultiError
// require a salt state tree // require a salt state tree
@ -116,9 +132,9 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
} }
} }
ui.Message(fmt.Sprintf("Creating remote directory: %s", p.config.TempConfigDir)) ui.Message(fmt.Sprintf("Creating remote temporary directory: %s", p.config.TempConfigDir))
if err := p.createDir(ui, comm, p.config.TempConfigDir); err != nil { if err := p.createDir(ui, comm, p.config.TempConfigDir); err != nil {
return fmt.Errorf("Error creating remote salt state directory: %s", err) return fmt.Errorf("Error creating remote temporary directory: %s", err)
} }
if p.config.MinionConfig != "" { if p.config.MinionConfig != "" {
@ -130,6 +146,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
} }
// move minion config into /etc/salt // move minion config into /etc/salt
ui.Message(fmt.Sprintf("Make sure directory %s exists", "/etc/salt"))
if err := p.createDir(ui, comm, "/etc/salt"); err != nil {
return fmt.Errorf("Error creating remote salt configuration directory: %s", err)
}
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "minion")) src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "minion"))
dst = "/etc/salt/minion" dst = "/etc/salt/minion"
if err = p.moveFile(ui, comm, dst, src); err != nil { if err = p.moveFile(ui, comm, dst, src); err != nil {
@ -144,11 +164,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
return fmt.Errorf("Error uploading local state tree to remote: %s", err) return fmt.Errorf("Error uploading local state tree to remote: %s", err)
} }
// move state tree into /srv/salt // move state tree from temporary directory
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "states")) src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "states"))
dst = "/srv/salt" dst = p.config.RemoteStateTree
if err = p.removeDir(ui, comm, dst); err != nil {
return fmt.Errorf("Unable to clear salt tree: %s", err)
}
if err = p.moveFile(ui, comm, dst, src); err != nil { if err = p.moveFile(ui, comm, dst, src); err != nil {
return fmt.Errorf("Unable to move %s/states to /srv/salt: %s", p.config.TempConfigDir, err) return fmt.Errorf("Unable to move %s/states to %s: %s", p.config.TempConfigDir, dst, err)
} }
if p.config.LocalPillarRoots != "" { if p.config.LocalPillarRoots != "" {
@ -159,16 +182,19 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
return fmt.Errorf("Error uploading local pillar roots to remote: %s", err) return fmt.Errorf("Error uploading local pillar roots to remote: %s", err)
} }
// move pillar tree into /srv/pillar // move pillar root from temporary directory
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "pillar")) src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "pillar"))
dst = "/srv/pillar" dst = p.config.RemotePillarRoots
if err = p.removeDir(ui, comm, dst); err != nil {
return fmt.Errorf("Unable to clear pillat root: %s", err)
}
if err = p.moveFile(ui, comm, dst, src); err != nil { if err = p.moveFile(ui, comm, dst, src); err != nil {
return fmt.Errorf("Unable to move %s/pillar to /srv/pillar: %s", p.config.TempConfigDir, err) return fmt.Errorf("Unable to move %s/pillar to %s: %s", p.config.TempConfigDir, dst, err)
} }
} }
ui.Message("Running highstate") ui.Message("Running highstate")
cmd := &packer.RemoteCmd{Command: p.sudo("salt-call --local state.highstate -l info --retcode-passthrough")} cmd := &packer.RemoteCmd{Command: fmt.Sprintf(p.sudo("salt-call --local state.highstate --file-root=%s --pillar-root=%s -l info --retcode-passthrough"),p.config.RemoteStateTree, p.config.RemotePillarRoots)}
if err = cmd.StartWithUi(comm, ui); err != nil || cmd.ExitStatus != 0 { if err = cmd.StartWithUi(comm, ui); err != nil || cmd.ExitStatus != 0 {
if err == nil { if err == nil {
err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus) err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus)
@ -216,7 +242,7 @@ func (p *Provisioner) moveFile(ui packer.Ui, comm packer.Communicator, dst, src
err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus) err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus)
} }
return fmt.Errorf("Unable to move %s/minion to /etc/salt/minion: %s", p.config.TempConfigDir, err) return fmt.Errorf("Unable to move %s to %s: %s", src, dst, err)
} }
return nil return nil
} }
@ -235,6 +261,20 @@ func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir stri
return nil return nil
} }
func (p *Provisioner) removeDir(ui packer.Ui, comm packer.Communicator, dir string) error {
ui.Message(fmt.Sprintf("Removing directory: %s", dir))
cmd := &packer.RemoteCmd{
Command: fmt.Sprintf("rm -rf '%s'", dir),
}
if err := cmd.StartWithUi(comm, ui); err != nil {
return err
}
if cmd.ExitStatus != 0 {
return fmt.Errorf("Non-zero exit status.")
}
return nil
}
func (p *Provisioner) uploadDir(ui packer.Ui, comm packer.Communicator, dst, src string, ignore []string) error { func (p *Provisioner) uploadDir(ui packer.Ui, comm packer.Communicator, dst, src string, ignore []string) error {
if err := p.createDir(ui, comm, dst); err != nil { if err := p.createDir(ui, comm, dst); err != nil {
return err return err
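The new `removeDir` helper follows the same remote-command pattern as `createDir` and `moveFile`. A minimal sketch of that shared pattern, assuming Packer's `packer.RemoteCmd` API as used above (the `runRemote` helper name is illustrative, not part of the provisioner):

```go
package example

import (
	"fmt"

	"github.com/mitchellh/packer/packer"
)

// runRemote runs a shell command through the communicator and treats a
// non-zero exit status as failure, mirroring createDir/removeDir/moveFile.
func runRemote(ui packer.Ui, comm packer.Communicator, command string) error {
	cmd := &packer.RemoteCmd{Command: command}
	if err := cmd.StartWithUi(comm, ui); err != nil {
		return err
	}
	if cmd.ExitStatus != 0 {
		return fmt.Errorf("non-zero exit status %d for %q", cmd.ExitStatus, command)
	}
	return nil
}
```

`removeDir` above is exactly this pattern with `rm -rf '<dir>'` as the command.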

View File

@ -134,7 +134,6 @@ WaitLoop:
case <-p.cancel: case <-p.cancel:
close(waitDone) close(waitDone)
return fmt.Errorf("Interrupt detected, quitting waiting for machine to restart") return fmt.Errorf("Interrupt detected, quitting waiting for machine to restart")
break WaitLoop
} }
} }
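The removed `break WaitLoop` could never execute: the `return` directly above it always leaves the function first, so the statement is dead code (tools such as `go vet` report it as unreachable). A small illustration, not the restart provisioner's actual code:

```go
package main

import "fmt"

func wait(cancel <-chan struct{}) error {
WaitLoop:
	for {
		select {
		case <-cancel:
			return fmt.Errorf("interrupt detected")
			// break WaitLoop  // never reached: the return above always runs first
		default:
			break WaitLoop // a reachable labelled break, shown for contrast
		}
	}
	return nil
}

func main() {
	done := make(chan struct{})
	close(done)
	fmt.Println(wait(done)) // prints the interrupt error
}
```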

View File

@ -4,9 +4,9 @@ package main
var GitCommit string var GitCommit string
// The main version number that is being run at the moment. // The main version number that is being run at the moment.
const Version = "0.8.2" const Version = "0.8.6"
// A pre-release marker for the version. If this is "" (empty string) // A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release // then it means that it is a final release. Otherwise, this is a pre-release
// such as "dev" (in development), "beta", "rc1", etc. // such as "dev" (in development), "beta", "rc1", etc.
const VersionPrerelease = "" const VersionPrerelease = "dev"

View File

@ -3,3 +3,5 @@ source "https://rubygems.org"
ruby "2.2.2" ruby "2.2.2"
gem "middleman-hashicorp", github: "hashicorp/middleman-hashicorp" gem "middleman-hashicorp", github: "hashicorp/middleman-hashicorp"
gem "middleman-breadcrumbs"
gem "htmlbeautifier"

View File

@ -69,6 +69,7 @@ GEM
hitimes (1.2.2) hitimes (1.2.2)
hooks (0.4.0) hooks (0.4.0)
uber (~> 0.0.4) uber (~> 0.0.4)
htmlbeautifier (1.1.0)
htmlcompressor (0.2.0) htmlcompressor (0.2.0)
http_parser.rb (0.6.0) http_parser.rb (0.6.0)
i18n (0.7.0) i18n (0.7.0)
@ -92,6 +93,8 @@ GEM
middleman-sprockets (>= 3.1.2) middleman-sprockets (>= 3.1.2)
sass (>= 3.4.0, < 4.0) sass (>= 3.4.0, < 4.0)
uglifier (~> 2.5) uglifier (~> 2.5)
middleman-breadcrumbs (0.1.0)
middleman (>= 3.3.5)
middleman-core (3.3.12) middleman-core (3.3.12)
activesupport (~> 4.1.0) activesupport (~> 4.1.0)
bundler (~> 1.1) bundler (~> 1.1)
@ -179,4 +182,6 @@ PLATFORMS
ruby ruby
DEPENDENCIES DEPENDENCIES
htmlbeautifier
middleman-breadcrumbs
middleman-hashicorp! middleman-hashicorp!

View File

@ -8,3 +8,10 @@ dev: init
build: init build: init
PACKER_DISABLE_DOWNLOAD_FETCH=true PACKER_VERSION=1.0 bundle exec middleman build PACKER_DISABLE_DOWNLOAD_FETCH=true PACKER_VERSION=1.0 bundle exec middleman build
format:
bundle exec htmlbeautifier -t 2 source/*.erb
bundle exec htmlbeautifier -t 2 source/layouts/*.erb
@pandoc -v > /dev/null || echo "pandoc must be installed in order to format markdown content"
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=4 --atx-headers -s --columns=80 {} > {}.new"\; || true
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "mv {}.new {}"\; || true

View File

@ -21,3 +21,13 @@ make dev
Then open up `localhost:4567`. Note that some URLs you may need to append Then open up `localhost:4567`. Note that some URLs you may need to append
".html" to make them work (in the navigation and such). ".html" to make them work (in the navigation and such).
## Keeping Tidy
To keep the source code nicely formatted, there is a `make format` target. This
runs `htmlbeautifier` and `pandoc` to reformat the source code consistently.
make format
Note that you will need to install pandoc yourself. `make format` will skip it
if you don't have it installed.

View File

@ -4,6 +4,8 @@
set :base_url, "https://www.packer.io/" set :base_url, "https://www.packer.io/"
activate :breadcrumbs
activate :hashicorp do |h| activate :hashicorp do |h|
h.version = ENV["PACKER_VERSION"] h.version = ENV["PACKER_VERSION"]
h.bintray_enabled = ENV["BINTRAY_ENABLED"] h.bintray_enabled = ENV["BINTRAY_ENABLED"]

Binary file not shown. (Image updated; new file size: 40 KiB)

View File

@ -12,45 +12,45 @@ footer {
margin-left: -20px; margin-left: -20px;
} }
ul { ul {
margin-top: 40px; margin-top: 40px;
@include respond-to(mobile) { @include respond-to(mobile) {
margin-left: $baseline; margin-left: $baseline;
margin-top: $baseline; margin-top: $baseline;
} }
li { li {
display: inline; display: inline;
margin-right: 50px; margin-right: 50px;
@include respond-to(mobile) { @include respond-to(mobile) {
margin-right: 20px; margin-right: 20px;
display: list-item; display: list-item;
} }
} }
.hashi-logo { .hashi-logo {
background: image-url('logo_footer.png') no-repeat center top; background: image-url('logo_footer.png') no-repeat center top;
height: 40px; height: 40px;
width: 40px; width: 40px;
background-size: 37px 40px; background-size: 37px 40px;
text-indent: -999999px; text-indent: -999999px;
display: inline-block; display: inline-block;
margin-top: -10px; margin-top: -10px;
margin-right: 0; margin-right: 0;
@include respond-to(mobile) { @include respond-to(mobile) {
margin-top: -50px; margin-top: -50px;
margin-right: $baseline; margin-right: $baseline;
} }
} }
} }
.active { .active {
color: $green; color: $green;
} }
button { button {
margin-top: 20px; margin-top: 20px;
} }
} }
.page-wrap { .page-wrap {

View File

@ -70,17 +70,17 @@ $mono: 'Inconsolata', 'courier new', courier, mono-space;
background-color: #000; background-color: #000;
color: $white; color: $white;
a { a {
color: inherit; color: inherit;
&:hover { &:hover {
color: $green; color: $green;
} }
&:active { &:active {
color: darken($green, 30%); color: darken($green, 30%);
} }
} }
} }
.white-background { .white-background {
@ -102,9 +102,9 @@ $mono: 'Inconsolata', 'courier new', courier, mono-space;
color: $orange; color: $orange;
font-size: 20px; font-size: 20px;
a:hover, a:active, a:visited { a:hover, a:active, a:visited {
color: inherit; color: inherit;
} }
} }
// media queries // media queries
@ -170,13 +170,13 @@ $break-lg: 980px;
@mixin transform-scale($value) { @mixin transform-scale($value) {
-webkit-transform: scale($value); -webkit-transform: scale($value);
-moz-transform: scale($value); -moz-transform: scale($value);
transform: scale($value); transform: scale($value);
} }
@mixin transition($type, $speed, $easing) { @mixin transition($type, $speed, $easing) {
-webkit-transition: $type $speed $easing; -webkit-transition: $type $speed $easing;
-moz-transition: $type $speed $easing; -moz-transition: $type $speed $easing;
-o-transition: $type $speed $easing; -o-transition: $type $speed $easing;
transition: $type $speed $easing; transition: $type $speed $easing;
} }

View File

@ -14,10 +14,10 @@ form, input, textarea, button {
line-height: 1.0; line-height: 1.0;
color: inherit; color: inherit;
&:focus { &:focus {
line-height: 1.0; line-height: 1.0;
box-shadow: none !important; box-shadow: none !important;
outline: none; outline: none;
vertical-align: middle; vertical-align: middle;
} }
} }

View File

@ -1,22 +1,25 @@
--- ---
layout: "community" description: |
page_title: "Community" Packer is a new project with a growing community. Despite this, there are
description: |- dedicated users willing to help through various mediums.
Packer is a new project with a growing community. Despite this, there are dedicated users willing to help through various mediums. layout: community
--- page_title: Community
...
# Community # Community
Packer is a new project with a growing community. Despite this, there are Packer is a new project with a growing community. Despite this, there are
dedicated users willing to help through various mediums. dedicated users willing to help through various mediums.
**IRC:**&nbsp;`#packer-tool` on Freenode. **IRC:** `#packer-tool` on Freenode.
**Mailing List:**&nbsp;[Packer Google Group](http://groups.google.com/group/packer-tool) **Mailing List:** [Packer Google
Group](http://groups.google.com/group/packer-tool)
**Bug Tracker:**&nbsp;[Issue tracker on GitHub](https://github.com/mitchellh/packer/issues). **Bug Tracker:** [Issue tracker on
Please only use this for reporting bugs. Do not ask for general help here. Use IRC GitHub](https://github.com/mitchellh/packer/issues). Please only use this for
or the mailing list for that. reporting bugs. Do not ask for general help here. Use IRC or the mailing list
for that.
## People ## People
@ -25,62 +28,82 @@ to Packer in some core way. Over time, faces may appear and disappear from this
list as contributors come and go. list as contributors come and go.
<div class="people"> <div class="people">
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<div class="bio">
<h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
<p>
Mitchell Hashimoto is the creator of Packer. He developed the
core of Packer as well as the Amazon, VirtualBox, and VMware
builders. In addition to Packer, Mitchell is the creator of
<a href="http://www.vagrantup.com">Vagrant</a>. He is self
described as "automation obsessed."
</p>
</div>
</div>
<div class="person"> <div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
<p>
<a href="http://jack.ly/">Jack Pearkes</a> created and maintains the DigitalOcean builder
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person"> <img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125"> <div class="bio">
<div class="bio"> <h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3> <p>
<p> Mitchell Hashimoto is the creator of Packer. He developed the
In addition to Packer, Mark Peek helps maintain core of Packer as well as the Amazon, VirtualBox, and VMware
various open source projects such as builders. In addition to Packer, Mitchell is the creator of
<a href="https://github.com/cloudtools">cloudtools</a> and <a href="http://www.vagrantup.com">Vagrant</a>. He is self
<a href="https://github.com/ironport">IronPort Python libraries</a>. described as "automation obsessed."
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p> </p>
</div> </div>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
<p>
<a href="http://jack.ly/">Jack Pearkes</a> created and maintains the DigitalOcean builder
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
<div class="bio">
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
<p>
In addition to Packer, Mark Peek helps maintain
various open source projects such as
<a href="https://github.com/cloudtools">cloudtools</a> and
<a href="https://github.com/ironport">IronPort Python libraries</a>.
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
<p>
<a href="http://smithii.com/" target="_blank">Ross Smith</a> maintains our
VMware builder on Windows, and provides other valuable assistance. Ross is an
open source enthusiast, published author, and freelance consultant.
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
<p>
Rickard von Essen maintains our Parallels Desktop builder. Rickard is a
polyglot programmer and consults on Continuous Delivery.
</p>
</div>
</div>
<div class="clearfix">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
<p>
<a href="http://smithii.com/" target="_blank">Ross Smith</a> maintains our VMware builder on Windows, and provides other valuable assistance.
Ross is an open source enthusiast, published author, and freelance consultant.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
<p>
Rickard von Essen maintains our Parallels Desktop builder. Rickard is an polyglot programmer and consults on Continuous Delivery.</p>
</div>
</div>
<div class="clearfix"></div>
</div> </div>

View File

@ -1,54 +1,57 @@
--- ---
layout: "docs" description: |
page_title: "Packer Terminology" There are a handful of terms used throughout the Packer documentation where the
description: |- meaning may not be immediately obvious if you haven't used Packer before.
There are a handful of terms used throughout the Packer documentation where the meaning may not be immediately obvious if you haven't used Packer before. Luckily, there are relatively few. This page documents all the terminology required to understand and use Packer. The terminology is in alphabetical order for easy referencing. Luckily, there are relatively few. This page documents all the terminology
--- required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.
layout: docs
page_title: Packer Terminology
...
# Packer Terminology # Packer Terminology
There are a handful of terms used throughout the Packer documentation where There are a handful of terms used throughout the Packer documentation where the
the meaning may not be immediately obvious if you haven't used Packer before. meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical required to understand and use Packer. The terminology is in alphabetical order
order for easy referencing. for easy referencing.
- `Artifacts` are the results of a single build, and are usually a set of IDs - `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a single or files to represent a machine image. Every builder produces a
artifact. As an example, in the case of the Amazon EC2 builder, the artifact is single artifact. As an example, in the case of the Amazon EC2 builder, the
a set of AMI IDs (one per region). For the VMware builder, the artifact is a artifact is a set of AMI IDs (one per region). For the VMware builder, the
directory of files comprising the created virtual machine. artifact is a directory of files comprising the created virtual machine.
- `Builds` are a single task that eventually produces an image for a single - `Builds` are a single task that eventually produces an image for a
platform. Multiple builds run in parallel. Example usage in a single platform. Multiple builds run in parallel. Example usage in a
sentence: "The Packer build produced an AMI to run our web application." sentence: "The Packer build produced an AMI to run our web application." Or:
Or: "Packer is running the builds now for VMware, AWS, and VirtualBox." "Packer is running the builds now for VMware, AWS, and VirtualBox."
- `Builders` are components of Packer that are able to create a machine - `Builders` are components of Packer that are able to create a machine image
image for a single platform. Builders read in some configuration and use for a single platform. Builders read in some configuration and use that to
that to run and generate a machine image. A builder is invoked as part of a run and generate a machine image. A builder is invoked as part of a build in
build in order to create the actual resulting images. Example builders include order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins. Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some - `Commands` are sub-commands for the `packer` program that perform some job.
job. An example command is "build", which is invoked as `packer build`. An example command is "build", which is invoked as `packer build`. Packer
Packer ships with a set of commands out of the box in order to define ships with a set of commands out of the box in order to define its
its command-line interface. Commands can also be created and added to command-line interface. Commands can also be created and added to Packer in
Packer in the form of plugins. the form of plugins.
- `Post-processors` are components of Packer that take the result of - `Post-processors` are components of Packer that take the result of a builder
a builder or another post-processor and process that to or another post-processor and process that to create a new artifact.
create a new artifact. Examples of post-processors are Examples of post-processors are compress to compress artifacts, upload to
compress to compress artifacts, upload to upload artifacts, etc. upload artifacts, etc.
- `Provisioners` are components of Packer that install and configure - `Provisioners` are components of Packer that install and configure software
software within a running machine prior to that machine being turned within a running machine prior to that machine being turned into a
into a static image. They perform the major work of making the image contain static image. They perform the major work of making the image contain
useful software. Example provisioners include shell scripts, Chef, Puppet, useful software. Example provisioners include shell scripts, Chef,
etc. Puppet, etc.
- `Templates` are JSON files which define one or more builds - `Templates` are JSON files which define one or more builds by configuring
by configuring the various components of Packer. Packer is able to read a the various components of Packer. Packer is able to read a template and use
template and use that information to create multiple machine images in that information to create multiple machine images in parallel.
parallel.

View File

@ -1,49 +1,52 @@
--- ---
layout: "docs" description: |
page_title: "Amazon AMI Builder (chroot)" The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
description: |- EBS volume as the root device. For more information on the difference between
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an EBS volume as the root device. For more information on the difference between instance storage and EBS-backed instances, storage for the root device section in the EC2 documentation. instance storage and EBS-backed instances, storage for the root device section
--- in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (chroot)'
...
# AMI Builder (chroot) # AMI Builder (chroot)
Type: `amazon-chroot` Type: `amazon-chroot`
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
an EBS volume as the root device. For more information on the difference EBS volume as the root device. For more information on the difference between
between instance storage and EBS-backed instances, see the instance storage and EBS-backed instances, see the ["storage for the root
["storage for the root device" section in the EC2 documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device). device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
The difference between this builder and the `amazon-ebs` builder is that The difference between this builder and the `amazon-ebs` builder is that this
this builder is able to build an EBS-backed AMI without launching a new builder is able to build an EBS-backed AMI without launching a new EC2 instance.
EC2 instance. This can dramatically speed up AMI builds for organizations This can dramatically speed up AMI builds for organizations who need the extra
who need the extra fast build. fast build.
~> **This is an advanced builder** If you're just getting \~&gt; **This is an advanced builder** If you're just getting started with
started with Packer, we recommend starting with the Packer, we recommend starting with the [amazon-ebs
[amazon-ebs builder](/docs/builders/amazon-ebs.html), which is builder](/docs/builders/amazon-ebs.html), which is much easier to use.
much easier to use.
The builder does _not_ manage AMIs. Once it creates an AMI and stores it The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
in your account, it is up to you to use, delete, etc. the AMI. account, it is up to you to use, delete, etc. the AMI.
## How Does it Work? ## How Does it Work?
This builder works by creating a new EBS volume from an existing source AMI This builder works by creating a new EBS volume from an existing source AMI and
and attaching it into an already-running EC2 instance. Once attached, a attaching it into an already-running EC2 instance. Once attached, a
[chroot](http://en.wikipedia.org/wiki/Chroot) is used to provision the [chroot](http://en.wikipedia.org/wiki/Chroot) is used to provision the system
system within that volume. After provisioning, the volume is detached, within that volume. After provisioning, the volume is detached, snapshotted, and
snapshotted, and an AMI is made. an AMI is made.
Using this process, minutes can be shaved off the AMI creation process Using this process, minutes can be shaved off the AMI creation process because a
because a new EC2 instance doesn't need to be launched. new EC2 instance doesn't need to be launched.
There are some restrictions, however. The host EC2 instance where the There are some restrictions, however. The host EC2 instance where the volume is
volume is attached to must be a similar system (generally the same OS attached to must be a similar system (generally the same OS version, kernel
version, kernel versions, etc.) as the AMI being built. Additionally, versions, etc.) as the AMI being built. Additionally, this process is much more
this process is much more expensive because the EC2 instance must be kept expensive because the EC2 instance must be kept running persistently in order to
running persistently in order to build AMIs, whereas the other AMI builders build AMIs, whereas the other AMI builders start instances on-demand to build
start instances on-demand to build AMIs as needed. AMIs as needed.
## Configuration Reference ## Configuration Reference
@ -52,107 +55,101 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized. each category, the available configuration keys are alphabetized.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `access_key` (string) - The access key used to communicate with AWS. - `access_key` (string) - The access key used to communicate with AWS. [Learn
If not specified, Packer will use the key from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
or fall back to environment variables `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` (in that order), if set.
If the environmental variables aren't set and Packer is running on
an EC2 instance, Packer will check the instance metadata for IAM role
keys.
* `ami_name` (string) - The name of the resulting AMI that will appear - `ami_name` (string) - The name of the resulting AMI that will appear when
when managing AMIs in the AWS console or via APIs. This must be unique. managing AMIs in the AWS console or via APIs. This must be unique. To help
To help make this unique, use a function like `timestamp` (see make this unique, use a function like `timestamp` (see [configuration
[configuration templates](/docs/templates/configuration-templates.html) for more info) templates](/docs/templates/configuration-templates.html) for more info)
* `secret_key` (string) - The secret key used to communicate with AWS. - `secret_key` (string) - The secret key used to communicate with AWS. [Learn
If not specified, Packer will use the secret from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
or fall back to environment variables `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` (in that order), if set.
If the environmental variables aren't set and Packer is running on
an EC2 instance, Packer will check the instance metadata for IAM role
keys.
* `source_ami` (string) - The source AMI whose root volume will be copied - `source_ami` (string) - The source AMI whose root volume will be copied and
and provisioned on the currently running instance. This must be an provisioned on the currently running instance. This must be an EBS-backed
EBS-backed AMI with a root volume snapshot that you have access to. AMI with a root volume snapshot that you have access to.
### Optional: ### Optional:
* `ami_description` (string) - The description to set for the resulting - `ami_description` (string) - The description to set for the
AMI(s). By default this description is empty. resulting AMI(s). By default this description is empty.
* `ami_groups` (array of strings) - A list of groups that have access - `ami_groups` (array of strings) - A list of groups that have access to
to launch the resulting AMI(s). By default no groups have permission launch the resulting AMI(s). By default no groups have permission to launch
to launch the AMI. `all` will make the AMI publicly accessible. the AMI. `all` will make the AMI publicly accessible.
* `ami_product_codes` (array of strings) - A list of product codes to - `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with associate with the AMI. By default no product codes are associated with
the AMI. the AMI.
* `ami_regions` (array of strings) - A list of regions to copy the AMI to. - `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes. depending on the size of the AMI, but will generally take many minutes.
* `ami_users` (array of strings) - A list of account IDs that have access - `ami_users` (array of strings) - A list of account IDs that have access to
to launch the resulting AMI(s). By default no additional users other than the user launch the resulting AMI(s). By default no additional users other than the
creating the AMI has permissions to launch it. user creating the AMI has permissions to launch it.
* `ami_virtualization_type` (string) - The type of virtualization for the AMI - `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be you are building. This option is required to register HVM images. Can be
"paravirtual" (default) or "hvm". "paravirtual" (default) or "hvm".
* `chroot_mounts` (array of array of strings) - This is a list of additional - `chroot_mounts` (array of array of strings) - This is a list of additional
devices to mount into the chroot environment. This configuration parameter devices to mount into the chroot environment. This configuration parameter
requires some additional documentation which is in the "Chroot Mounts" section requires some additional documentation which is in the "Chroot Mounts"
below. Please read that section for more information on how to use this. section below. Please read that section for more information on how to
use this.
* `command_wrapper` (string) - How to run shell commands. This - `command_wrapper` (string) - How to run shell commands. This defaults
defaults to "{{.Command}}". This may be useful to set if you want to set to "{{.Command}}". This may be useful to set if you want to set
environmental variables or perhaps run it with `sudo` or so on. This is a environmental variables or perhaps run it with `sudo` or so on. This is a
configuration template where the `.Command` variable is replaced with the configuration template where the `.Command` variable is replaced with the
command to be run. command to be run.
* `copy_files` (array of strings) - Paths to files on the running EC2 instance - `copy_files` (array of strings) - Paths to files on the running EC2 instance
that will be copied into the chroot environment prior to provisioning. that will be copied into the chroot environment prior to provisioning. This
This is useful, for example, to copy `/etc/resolv.conf` so that DNS lookups is useful, for example, to copy `/etc/resolv.conf` so that DNS lookups work.
work.
* `device_path` (string) - The path to the device where the root volume - `device_path` (string) - The path to the device where the root volume of the
of the source AMI will be attached. This defaults to "" (empty string), source AMI will be attached. This defaults to "" (empty string), which
which forces Packer to find an open device automatically. forces Packer to find an open device automatically.
* `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport) on - `enhanced_networking` (boolean) - Enable enhanced
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
* `force_deregister` (boolean) - Force Packer to first deregister an existing - `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`. AMI if one with the same name already exists. Default `false`.
* `mount_path` (string) - The path where the volume will be mounted. This is - `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to where the chroot environment will be. This defaults to
`packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration `packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration template
template where the `.Device` variable is replaced with the name of the where the `.Device` variable is replaced with the name of the device where
device where the volume is attached. the volume is attached.
* `mount_options` (array of strings) - Options to supply the `mount` command - `mount_options` (array of strings) - Options to supply the `mount` command
when mounting devices. Each option will be prefixed with `-o ` and supplied to when mounting devices. Each option will be prefixed with `-o` and supplied
the `mount` command ran by Packer. Because this command is ran in a shell, user to the `mount` command ran by Packer. Because this command is ran in a
discrestion is advised. See [this manual page for the mount command][1] for valid shell, user discrestion is advised. See [this manual page for the mount
file system specific options command](http://linuxcommand.org/man_pages/mount8.html) for valid file
system specific options
* `root_volume_size` (integer) - The size of the root volume for the chroot - `root_volume_size` (integer) - The size of the root volume for the chroot
environment, and the resulting AMI environment, and the resulting AMI
* `tags` (object of key/value strings) - Tags applied to the AMI. - `tags` (object of key/value strings) - Tags applied to the AMI.
## Basic Example ## Basic Example
Here is a basic example. It is completely valid except for the access keys: Here is a basic example. It is completely valid except for the access keys:
```javascript ``` {.javascript}
{ {
"type": "amazon-chroot", "type": "amazon-chroot",
"access_key": "YOUR KEY HERE", "access_key": "YOUR KEY HERE",
@ -164,21 +161,21 @@ Here is a basic example. It is completely valid except for the access keys:
## Chroot Mounts ## Chroot Mounts
The `chroot_mounts` configuration can be used to mount additional devices The `chroot_mounts` configuration can be used to mount additional devices within
within the chroot. By default, the following additional mounts are added the chroot. By default, the following additional mounts are added into the
into the chroot by Packer: chroot by Packer:
* `/proc` (proc) - `/proc` (proc)
* `/sys` (sysfs) - `/sys` (sysfs)
* `/dev` (bind to real `/dev`) - `/dev` (bind to real `/dev`)
* `/dev/pts` (devpts) - `/dev/pts` (devpts)
* `/proc/sys/fs/binfmt_misc` (binfmt_misc) - `/proc/sys/fs/binfmt_misc` (binfmt\_misc)
These default mounts are usually good enough for anyone and are sane These default mounts are usually good enough for anyone and are sane defaults.
defaults. However, if you want to change or add the mount points, you may use However, if you want to change or add the mount points, you may use the
the `chroot_mounts` configuration. Here is an example configuration: `chroot_mounts` configuration. Here is an example configuration:
```javascript ``` {.javascript}
{ {
"chroot_mounts": [ "chroot_mounts": [
["proc", "proc", "/proc"], ["proc", "proc", "/proc"],
@ -187,25 +184,25 @@ using the `chroot_mounts` configuration. Here is an example configuration:
} }
``` ```
`chroot_mounts` is a list of a 3-tuples of strings. The three components `chroot_mounts` is a list of a 3-tuples of strings. The three components of the
of the 3-tuple, in order, are: 3-tuple, in order, are:
* The filesystem type. If this is "bind", then Packer will properly bind - The filesystem type. If this is "bind", then Packer will properly bind the
the filesystem to another mount point. filesystem to another mount point.
* The source device. - The source device.
* The mount directory. - The mount directory.
## Parallelism ## Parallelism
A quick note on parallelism: it is perfectly safe to run multiple A quick note on parallelism: it is perfectly safe to run multiple *separate*
_separate_ Packer processes with the `amazon-chroot` builder on the same Packer processes with the `amazon-chroot` builder on the same EC2 instance. In
EC2 instance. In fact, this is recommended as a way to push the most performance fact, this is recommended as a way to push the most performance out of your AMI
out of your AMI builds. builds.
Packer properly obtains a process lock for the parallelism-sensitive parts Packer properly obtains a process lock for the parallelism-sensitive parts of
of its internals such as finding an available device. its internals such as finding an available device.
## Gotchas ## Gotchas
@ -213,10 +210,12 @@ One of the difficulties with using the chroot builder is that your provisioning
scripts must not leave any processes running or packer will be unable to unmount scripts must not leave any processes running or packer will be unable to unmount
the filesystem. the filesystem.
For Debian-based distributions you can set up a [policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt) file which will For Debian-based distributions you can set up a
prevent packages installed by your provisioners from starting services: [policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt)
file which will prevent packages installed by your provisioners from starting
services:
```javascript ``` {.javascript}
{ {
"type": "shell", "type": "shell",
"inline": [ "inline": [
@ -235,6 +234,3 @@ prevent packages installed by your provisioners from starting services:
] ]
} }
``` ```
[1]: http://linuxcommand.org/man_pages/mount8.html

View File

@ -1,29 +1,32 @@
--- ---
layout: "docs" description: |
page_title: "Amazon AMI Builder (EBS backed)" The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
description: |- volumes for use in EC2. For more information on the difference between
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS volumes for use in EC2. For more information on the difference between EBS-backed instances and instance-store backed instances, see the storage for the root device section in the EC2 documentation. EBS-backed instances and instance-store backed instances, see the storage for
--- the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (EBS backed)'
...
# AMI Builder (EBS backed) # AMI Builder (EBS backed)
Type: `amazon-ebs` Type: `amazon-ebs`
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in [EC2](http://aws.amazon.com/ec2/). For more information volumes for use in [EC2](http://aws.amazon.com/ec2/). For more information on
on the difference between EBS-backed instances and instance-store backed the difference between EBS-backed instances and instance-store backed instances,
instances, see the see the ["storage for the root device" section in the EC2
["storage for the root device" section in the EC2 documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device). documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
This builder builds an AMI by launching an EC2 instance from a source AMI, This builder builds an AMI by launching an EC2 instance from a source AMI,
provisioning that running machine, and then creating an AMI from that machine. provisioning that running machine, and then creating an AMI from that machine.
This is all done in your own AWS account. The builder will create temporary This is all done in your own AWS account. The builder will create temporary
keypairs, security group rules, etc. that provide it temporary access to keypairs, security group rules, etc. that provide it temporary access to the
the instance while the image is being created. This simplifies configuration instance while the image is being created. This simplifies configuration quite a
quite a bit. bit.
The builder does _not_ manage AMIs. Once it creates an AMI and stores it The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
in your account, it is up to you to use, delete, etc. the AMI. account, it is up to you to use, delete, etc. the AMI.
## Configuration Reference ## Configuration Reference
@ -32,170 +35,169 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized. each category, the available configuration keys are alphabetized.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `access_key` (string) - The access key used to communicate with AWS. - `access_key` (string) - The access key used to communicate with AWS. [Learn
If not specified, Packer will use the key from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
or fall back to environment variables `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` (in that order), if set.
* `ami_name` (string) - The name of the resulting AMI that will appear - `ami_name` (string) - The name of the resulting AMI that will appear when
when managing AMIs in the AWS console or via APIs. This must be unique. managing AMIs in the AWS console or via APIs. This must be unique. To help
To help make this unique, use a function like `timestamp` (see make this unique, use a function like `timestamp` (see [configuration
[configuration templates](/docs/templates/configuration-templates.html) for more info) templates](/docs/templates/configuration-templates.html) for more info)
* `instance_type` (string) - The EC2 instance type to use while building - `instance_type` (string) - The EC2 instance type to use while building the
the AMI, such as "m1.small". AMI, such as "m1.small".
* `region` (string) - The name of the region, such as "us-east-1", in which - `region` (string) - The name of the region, such as "us-east-1", in which to
to launch the EC2 instance to create the AMI. launch the EC2 instance to create the AMI.
* `secret_key` (string) - The secret key used to communicate with AWS. - `secret_key` (string) - The secret key used to communicate with AWS. [Learn
If not specified, Packer will use the secret from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
or fall back to environment variables `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` (in that order), if set.
* `source_ami` (string) - The initial AMI used as a base for the newly - `source_ami` (string) - The initial AMI used as a base for the newly
created machine. created machine.
* `ssh_username` (string) - The username to use in order to communicate - `ssh_username` (string) - The username to use in order to communicate over
over SSH to the running machine. SSH to the running machine.
### Optional: ### Optional:
* `ami_block_device_mappings` (array of block device mappings) - Add the block - `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys: device mappings to the AMI. The block device mappings allow for keys:
- `device_name` (string) - The device name exposed to the instance (for - `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh") example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on - `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device Mapping][1] for more information [Block Device
- `snapshot_id` (string) - The ID of the snapshot Mapping](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD) for more information
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic - `snapshot_id` (string) - The ID of the snapshot
volumes - `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
- `volume_size` (integer) - The size of the volume, in GiB. Required if not volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
specifying a `snapshot_id` volumes
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is - `volume_size` (integer) - The size of the volume, in GiB. Required if not
deleted on instance termination specifying a `snapshot_id`
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not - `delete_on_termination` (boolean) - Indicates whether the EBS volume is
- `no_device` (boolean) - Suppresses the specified device included in the deleted on instance termination
block device mapping of the AMI - `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `iops` (integer) - The number of I/O operations per second (IOPS) that the - `no_device` (boolean) - Suppresses the specified device included in the
volume supports. See the documentation on [IOPs][2] for more information block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than "all".
* `ami_description` (string) - The description to set for the resulting - `ami_product_codes` (array of strings) - A list of product codes to
AMI(s). By default this description is empty. associate with the AMI. By default no product codes are associated with
the AMI.
* `ami_groups` (array of strings) - A list of groups that have access - `ami_regions` (array of strings) - A list of regions to copy the AMI to.
to launch the resulting AMI(s). By default no groups have permission Tags and attributes are copied along with the AMI. AMI copying takes time
to launch the AMI. `all` will make the AMI publicly accessible. depending on the size of the AMI, but will generally take many minutes.
AWS currently doesn't accept any value other than "all".
* `ami_product_codes` (array of strings) - A list of product codes to - `ami_users` (array of strings) - A list of account IDs that have access to
associate with the AMI. By default no product codes are associated with launch the resulting AMI(s). By default no additional users other than the
the AMI. user creating the AMI has permissions to launch it.
* `ami_regions` (array of strings) - A list of regions to copy the AMI to. - `associate_public_ip_address` (boolean) - If using a non-default VPC, public
Tags and attributes are copied along with the AMI. AMI copying takes time IP addresses are not provided by default. If this is toggled, your new
depending on the size of the AMI, but will generally take many minutes. instance will get a Public IP.
* `ami_users` (array of strings) - A list of account IDs that have access - `availability_zone` (string) - Destination availability zone to launch
to launch the resulting AMI(s). By default no additional users other than the user instance in. Leave this empty to allow Amazon to auto-assign.
creating the AMI has permissions to launch it.
* `associate_public_ip_address` (boolean) - If using a non-default VPC, public - `enhanced_networking` (boolean) - Enable enhanced
IP addresses are not provided by default. If this is toggled, your new networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
instance will get a Public IP. `ec2:ModifyInstanceAttribute` to your AWS IAM policy.
* `availability_zone` (string) - Destination availability zone to launch instance in. - `force_deregister` (boolean) - Force Packer to first deregister an existing
Leave this empty to allow Amazon to auto-assign. AMI if one with the same name already exists. Default `false`.
* `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport) on - `iam_instance_profile` (string) - The name of an [IAM instance
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
  to launch the EC2 instance with.

- `launch_block_device_mappings` (array of block device mappings) - Add the
  block device mappings to the launch instance. The block device mappings are
  the same as `ami_block_device_mappings` above.

- `run_tags` (object of key/value strings) - Tags to apply to the instance
  that is *launched* to create the AMI. These tags are *not* applied to the
  resulting AMI unless they're duplicated in `tags`.

- `security_group_id` (string) - The ID (*not* the name) of the security group
  to assign to the instance. By default this is not set and Packer will
  automatically create a new temporary security group to allow SSH access.
  Note that if this is specified, you must be sure the security group allows
  access to the `ssh_port` given below.

- `security_group_ids` (array of strings) - A list of security groups as
  described above. Note that if this is specified, you must omit the
  `security_group_id`.

- `spot_price` (string) - The maximum hourly price to pay for a spot instance
  to create the AMI. Spot instances are a type of instance that EC2 starts
  when the current spot price is less than the maximum price you specify. Spot
  price will be updated based on available spot instance capacity and current
  spot instance requests. It may save you some costs. You can set this to
  "auto" for Packer to automatically discover the best spot price (see the
  illustrative snippet after this list).

- `spot_price_auto_product` (string) - Required if `spot_price` is set
  to "auto". This tells Packer what sort of AMI you're launching to find the
  best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
  `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`

- `ssh_keypair_name` (string) - If specified, this is the key that will be
  used for SSH with the machine. By default, this is blank, and Packer will
  generate a temporary keypair. `ssh_private_key_file` must be specified
  with this.

- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
  IP if available.

- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
  "subnet-12345def", where Packer will launch the EC2 instance. This field is
  required if you are using a non-default VPC.

- `tags` (object of key/value strings) - Tags applied to the AMI and
  relevant snapshots.

- `temporary_key_pair_name` (string) - The name of the temporary keypair
  to generate. By default, Packer generates a name with a UUID.

- `token` (string) - The access token to use. This is different from the
  access key and secret key. If you're not sure what this is, then you
  probably don't need it. This will also be read from the `AWS_SECURITY_TOKEN`
  environmental variable.

- `user_data` (string) - User data to apply when launching the instance. Note
  that you need to be careful about escaping characters due to the templates
  being JSON. It is often more convenient to use `user_data_file`, instead.

- `user_data_file` (string) - Path to a file that will be used for the user
  data when launching the instance.

- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
  in order to create a temporary security group within the VPC.

- `windows_password_timeout` (string) - The timeout for waiting for a Windows
  password for Windows instances. Defaults to 20 minutes. Example value: "10m"
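To make a few of these optional keys concrete, here is a minimal, illustrative
sketch of a builder block that requests a spot instance inside a VPC. The IDs
and tag values are placeholders, not values taken from this page, and the
required keys shown in the Basic Example below are omitted:

``` {.javascript}
{
  "type": "amazon-ebs",
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
  "vpc_id": "vpc-aaaaaaaa",
  "subnet_id": "subnet-bbbbbbbb",
  "security_group_id": "sg-cccccccc",
  "run_tags": {
    "Purpose": "packer-build"
  }
}
```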
## Basic Example

Here is a basic example. It is completely valid except for the access keys:

``` {.javascript}
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",

@@ -208,25 +210,23 @@ Here is a basic example. It is completely valid except for the access keys:
}
```

-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.

## Accessing the Instance to Debug

If you need to access the instance to debug for some reason, run the builder
with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.

## AMI Block Device Mappings Example

Here is an example using the optional AMI block device mappings. This will add
the /dev/sdb and /dev/sdc block device mappings to the finished AMI.

``` {.javascript}
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",

@@ -252,9 +252,9 @@ the /dev/sdb and /dev/sdc block device mappings to the finished AMI.

## Tag Example

Here is an example using the optional AMI tags. This will add the tags
"OS\_Version" and "Release" to the finished AMI.

``` {.javascript}
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",

@@ -271,13 +271,10 @@ Here is an example using the optional AMI tags. This will add the tags
}
```
-> **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. Packer will attempt to clean
up all residual volumes that are not designated by the user to remain after
termination. If you need to preserve those source volumes, you can overwrite the
termination setting by specifying `delete_on_termination=false` in the
`launch_block_device_mappings` block for the device.
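To make the override concrete, here is a minimal sketch of the relevant
mapping; the device name is a placeholder and the rest of the builder
configuration is omitted:

``` {.javascript}
"launch_block_device_mappings": [
  {
    "device_name": "/dev/sda1",
    "delete_on_termination": false
  }
]
```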
View File
@@ -1,9 +1,12 @@
---
description: |
    The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
    instance storage as the root device. For more information on the difference
    between instance storage and EBS-backed instances, see the storage for the root
    device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (instance-store)'
...

# AMI Builder (instance-store)

@@ -11,24 +14,24 @@ Type: `amazon-instance`

The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the ["storage for the
root device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

This builder builds an AMI by launching an EC2 instance from an existing
instance-storage backed AMI, provisioning that running machine, and then
bundling and creating a new AMI from that machine. This is all done in your own
AWS account. The builder will create temporary keypairs, security group rules,
etc. that provide it temporary access to the instance while the image is being
created. This simplifies configuration quite a bit.

The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.

-> **Note** This builder requires that the [Amazon EC2 AMI
Tools](http://aws.amazon.com/developertools/368) are installed onto the machine.
This can be done within a provisioner, but must be done before the builder
finishes running.

## Configuration Reference

@@ -37,204 +40,204 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `access_key` (string) - The access key used to communicate with AWS. [Learn
  how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)

- `account_id` (string) - Your AWS account ID. This is required for bundling
  the AMI. This is *not the same* as the access key. You can find your account
  ID in the security credentials page of your AWS account.

- `ami_name` (string) - The name of the resulting AMI that will appear when
  managing AMIs in the AWS console or via APIs. This must be unique. To help
  make this unique, use a function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info)

- `instance_type` (string) - The EC2 instance type to use while building the
  AMI, such as "m1.small".

- `region` (string) - The name of the region, such as "us-east-1", in which to
  launch the EC2 instance to create the AMI.

- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This
  bucket will be created if it doesn't exist.

- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
  how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)

- `source_ami` (string) - The initial AMI used as a base for the newly
  created machine.

- `ssh_username` (string) - The username to use in order to communicate over
  SSH to the running machine.

- `x509_cert_path` (string) - The local path to a valid X509 certificate for
  your AWS account. This is used for bundling the AMI. This X509 certificate
  must be registered with your account from the security credentials page in
  the AWS console.

- `x509_key_path` (string) - The local path to the private key for the X509
  certificate specified by `x509_cert_path`. This is used for bundling
  the AMI.
### Optional:

- `ami_block_device_mappings` (array of block device mappings) - Add the block
  device mappings to the AMI. The block device mappings allow for keys:
    - `device_name` (string) - The device name exposed to the instance (for
      example, "/dev/sdh" or "xvdh")
    - `virtual_name` (string) - The virtual device name. See the documentation on
      [Block Device
      Mapping](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
      for more information
    - `snapshot_id` (string) - The ID of the snapshot
    - `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
      volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
      volumes
    - `volume_size` (integer) - The size of the volume, in GiB. Required if not
      specifying a `snapshot_id`
    - `delete_on_termination` (boolean) - Indicates whether the EBS volume is
      deleted on instance termination
    - `encrypted` (boolean) - Indicates whether to encrypt the volume or not
    - `no_device` (boolean) - Suppresses the specified device included in the
      block device mapping of the AMI
    - `iops` (integer) - The number of I/O operations per second (IOPS) that the
      volume supports. See the documentation on
      [IOPs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
      for more information

- `ami_description` (string) - The description to set for the
  resulting AMI(s). By default this description is empty.

- `ami_groups` (array of strings) - A list of groups that have access to
  launch the resulting AMI(s). By default no groups have permission to launch
  the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
  accept any value other than "all".

- `ami_product_codes` (array of strings) - A list of product codes to
  associate with the AMI. By default no product codes are associated with
  the AMI.

- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
  Tags and attributes are copied along with the AMI. AMI copying takes time
  depending on the size of the AMI, but will generally take many minutes.

- `ami_users` (array of strings) - A list of account IDs that have access to
  launch the resulting AMI(s). By default no additional users other than the
  user creating the AMI has permissions to launch it.

- `ami_virtualization_type` (string) - The type of virtualization for the AMI
  you are building. This option is required to register HVM images. Can be
  "paravirtual" (default) or "hvm".

- `associate_public_ip_address` (boolean) - If using a non-default VPC, public
  IP addresses are not provided by default. If this is toggled, your new
  instance will get a Public IP.

- `availability_zone` (string) - Destination availability zone to launch
  instance in. Leave this empty to allow Amazon to auto-assign.

- `bundle_destination` (string) - The directory on the running instance where
  the bundled AMI will be saved prior to uploading. By default this is "/tmp".
  This directory must exist and be writable (an illustrative use appears after
  this list).

- `bundle_prefix` (string) - The prefix for files created from bundling the
  root volume. By default this is "image-{{timestamp}}". The `timestamp`
  variable should be used to make sure this is unique, otherwise it can
  collide with other created AMIs by Packer in your account.

- `bundle_upload_command` (string) - The command to use to upload the
  bundled volume. See the "custom bundle commands" section below for
  more information.

- `bundle_vol_command` (string) - The command to use to bundle the volume. See
  the "custom bundle commands" section below for more information.

- `enhanced_networking` (boolean) - Enable enhanced
  networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
  `ec2:ModifyInstanceAttribute` to your AWS IAM policy.

- `force_deregister` (boolean) - Force Packer to first deregister an existing
  AMI if one with the same name already exists. Default `false`.

- `iam_instance_profile` (string) - The name of an [IAM instance
  profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
  to launch the EC2 instance with.

- `launch_block_device_mappings` (array of block device mappings) - Add the
  block device mappings to the launch instance. The block device mappings are
  the same as `ami_block_device_mappings` above.

- `run_tags` (object of key/value strings) - Tags to apply to the instance
  that is *launched* to create the AMI. These tags are *not* applied to the
  resulting AMI unless they're duplicated in `tags`.

- `security_group_id` (string) - The ID (*not* the name) of the security group
  to assign to the instance. By default this is not set and Packer will
  automatically create a new temporary security group to allow SSH access.
  Note that if this is specified, you must be sure the security group allows
  access to the `ssh_port` given below.

- `security_group_ids` (array of strings) - A list of security groups as
  described above. Note that if this is specified, you must omit the
  `security_group_id`.

- `spot_price` (string) - The maximum hourly price to launch a spot instance
  to create the AMI. Spot instances are a type of instance that EC2 starts
  when the current spot price is less than the maximum price you specify. Spot
  price will be updated based on available spot instance capacity and current
  spot instance requests. It may save you some costs. You can set this to
  "auto" for Packer to automatically discover the best spot price.

- `spot_price_auto_product` (string) - Required if `spot_price` is set
  to "auto". This tells Packer what sort of AMI you're launching to find the
  best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
  `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`

- `ssh_keypair_name` (string) - If specified, this is the key that will be
  used for SSH with the machine. By default, this is blank, and Packer will
  generate a temporary keypair. `ssh_private_key_file` must be specified
  with this.

- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
  IP if available.

- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
  "subnet-12345def", where Packer will launch the EC2 instance. This field is
  required if you are using a non-default VPC.

- `tags` (object of key/value strings) - Tags applied to the AMI.

- `temporary_key_pair_name` (string) - The name of the temporary keypair
  to generate. By default, Packer generates a name with a UUID.

- `user_data` (string) - User data to apply when launching the instance. Note
  that you need to be careful about escaping characters due to the templates
  being JSON. It is often more convenient to use `user_data_file`, instead.

- `user_data_file` (string) - Path to a file that will be used for the user
  data when launching the instance.

- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
  in order to create a temporary security group within the VPC.

- `x509_upload_path` (string) - The path on the remote machine where the X509
  certificate will be uploaded. This path must already exist and be writable.
  X509 certificates are uploaded after provisioning is run, so it is perfectly
  okay to create this directory as part of the provisioning process.

- `windows_password_timeout` (string) - The timeout for waiting for a Windows
  password for Windows instances. Defaults to 20 minutes. Example value: "10m"
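As an illustration only, a few of these optional keys might be combined as
below; the paths and virtualization type are placeholder choices rather than
defaults from this page, and the required keys shown in the Basic Example below
are omitted for brevity:

``` {.javascript}
{
  "type": "amazon-instance",
  "ami_virtualization_type": "hvm",
  "bundle_destination": "/mnt/bundle",
  "bundle_prefix": "image-{{timestamp}}",
  "x509_upload_path": "/tmp"
}
```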
## Basic Example

Here is a basic example. It is completely valid except for the access keys:

``` {.javascript}
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",

@@ -254,84 +257,79 @@ Here is a basic example. It is completely valid except for the access keys:
}
```
-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.

## Accessing the Instance to Debug

If you need to access the instance to debug for some reason, run the builder
with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.

## Custom Bundle Commands

A lot of the process required for creating an instance-store backed AMI involves
commands being run on the actual source instance. Specifically, the
`ec2-bundle-vol` and `ec2-upload-bundle` commands must be used to bundle the
root filesystem and upload it, respectively.

Each of these commands has a lot of available flags. Instead of exposing each
possible flag as a template configuration option, the instance-store AMI builder
for Packer lets you customize the entire command used to bundle and upload the
AMI.

These are configured with `bundle_vol_command` and `bundle_upload_command`. Both
of these configurations are [configuration
templates](/docs/templates/configuration-templates.html) and have support for
their own set of template variables.
### Bundle Volume Command

The default value for `bundle_vol_command` is shown below. It is split across
multiple lines for convenience of reading. The bundle volume command is
responsible for executing `ec2-bundle-vol` in order to store an image of the
root filesystem to use to create the AMI.

``` {.text}
sudo -i -n ec2-bundle-vol \
  -k {{.KeyPath}} \
  -u {{.AccountId}} \
  -c {{.CertPath}} \
  -r {{.Architecture}} \
  -e {{.PrivatePath}}/* \
  -d {{.Destination}} \
  -p {{.Prefix}} \
  --batch \
  --no-filter
```

The available template variables should be self-explanatory based on the
`ec2-bundle-vol` parameters they are used to satisfy.

~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
.gpg files during the bundling of the AMI, which can cause problems on some
systems, such as Ubuntu. You may want to customize the bundle volume command to
include those files (see the `--no-filter` option of ec2-bundle-vol).
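When you do customize the command, it is supplied as a single JSON string in
the template. The sketch below simply inlines the default shown above on one
line; any further changes to the flags would be specific to your environment:

``` {.javascript}
{
  "type": "amazon-instance",
  "bundle_vol_command": "sudo -i -n ec2-bundle-vol -k {{.KeyPath}} -u {{.AccountId}} -c {{.CertPath}} -r {{.Architecture}} -e {{.PrivatePath}}/* -d {{.Destination}} -p {{.Prefix}} --batch --no-filter"
}
```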
### Bundle Upload Command

The default value for `bundle_upload_command` is shown below. It is split across
multiple lines for convenience of reading. The bundle upload command is
responsible for taking the bundled volume and uploading it to S3.

``` {.text}
sudo -i -n ec2-upload-bundle \
  -b {{.BucketName}} \
  -m {{.ManifestPath}} \
  -a {{.AccessKey}} \
  -s {{.SecretKey}} \
  -d {{.BundleDirectory}} \
  --batch \
  --region {{.Region}} \
  --retry
```

The available template variables should be self-explanatory based on the
`ec2-upload-bundle` parameters they are used to satisfy.
View File
@@ -1,44 +1,93 @@
---
description: |
    Packer is able to create Amazon AMIs. To achieve this, Packer comes with
    multiple builders depending on the strategy you want to use to build the AMI.
layout: docs
page_title: Amazon AMI Builder
...

# Amazon AMI Builder

Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:

- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
  launching a source AMI and re-packaging it into a new AMI
  after provisioning. If in doubt, use this builder, which is the easiest to
  get started with.

- [amazon-instance](/docs/builders/amazon-instance.html) - Create
  instance-store AMIs by launching and provisioning a source instance, then
  rebundling it and uploading it to S3.

- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
  from an existing EC2 instance by mounting the root device and using a
  [Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision
  that device. This is an **advanced builder and should not be used by
  newcomers**. However, it is also the fastest way to build an EBS-backed AMI
  since no new EC2 instance needs to be launched.

-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
generally recommends EBS-backed images nowadays.
<span id="specifying-amazon-credentials"></span>
## Specifying Amazon Credentials
When you use any of the Amazon builders, you must provide credentials to the API
in the form of an access key ID and secret. These look like:

    access key id: AKIAIOSFODNN7EXAMPLE
    secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

If you use other AWS tools you may already have these configured. If so, Packer
will try to use them, *unless* they are specified in your Packer template.

Credentials are resolved in the following order:

1. Values hard-coded in the Packer template are always authoritative.
2. *Variables* in the Packer template may be resolved from command-line flags
   or from environment variables. Please read about [User
   Variables](https://packer.io/docs/templates/user-variables.html)
   for details; a short sketch follows this list.
3. If no credentials are found, Packer falls back to automatic lookup.
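For example, here is a minimal sketch of wiring credentials through user
variables that default to environment variables. The variable names are
illustrative, and the other required builder keys are omitted for brevity:

``` {.javascript}
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}"
  }]
}
```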
### Automatic Lookup
If no AWS credentials are found in a Packer template, Packer proceeds to the
following steps:

1. Look up via environment variables.
    - First `AWS_ACCESS_KEY_ID`, then `AWS_ACCESS_KEY`
    - First `AWS_SECRET_ACCESS_KEY`, then `AWS_SECRET_KEY`
2. Look for [local AWS configuration
   files](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
    - First `~/.aws/credentials`
    - Next based on `AWS_PROFILE`
3. Look up an IAM role for the current EC2 instance (if you're running in EC2)
~> **Subtle details of automatic lookup may change over time.** The most
reliable way to specify your credentials is to set them in template variables
(directly or indirectly), or to use the `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY` environment variables.

Environment variables provide the best portability, allowing you to run your
Packer build on your workstation, in Atlas, or on another build server.
## Using an IAM Instance Profile

If AWS keys are not specified in the template, in a
[credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
file, or through environment variables, Packer will use credentials provided by
the instance's IAM profile, if it has one.

The following policy document provides the minimal set of permissions necessary
for Packer to work:

``` {.javascript}
{
  "Statement": [{
    "Effect": "Allow",

@@ -70,3 +119,29 @@ The following policy document provides the minimal set permissions necessary for
  }]
}
```
## Troubleshooting

### Attaching IAM Policies to Roles

IAM policies can be associated with users or roles. If you use Packer with IAM
roles, you may encounter an error like this one:

    ==> amazon-ebs: Error launching source instance: You are not authorized to perform this operation.

You can read more about why this happens on the [Amazon Security
Blog](http://blogs.aws.amazon.com/security/post/Tx3M0IFB5XBOCQX/Granting-Permission-to-Launch-EC2-Instances-with-IAM-Roles-PassRole-Permission).

The example policy below may help Packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for Packer to
work, but specifics will depend on your use-case.
``` {.json}
{
  "Sid": "PackerIAMPassRole",
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": [
    "*"
  ]
}
```
View File
@@ -1,13 +1,15 @@
---
description: |
    Packer is extensible, allowing you to write new builders without having to
    modify the core source code of Packer itself. Documentation for creating new
    builders is covered in the custom builders page of the Packer plugin section.
layout: docs
page_title: Custom Builder
...

# Custom Builder

Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the [custom builders](/docs/extend/builder.html) page of
the Packer plugin section.
View File
@@ -1,22 +1,26 @@
---
description: |
    The `digitalocean` Packer builder is able to create new images for use with
    DigitalOcean. The builder takes a source image, runs any provisioning necessary
    on the image after launching it, then snapshots it into a reusable image. This
    reusable image can then be used as the foundation of new servers that are
    launched within DigitalOcean.
layout: docs
page_title: DigitalOcean Builder
...

# DigitalOcean Builder

Type: `digitalocean`

The `digitalocean` Packer builder is able to create new images for use with
[DigitalOcean](http://www.digitalocean.com). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the foundation
of new servers that are launched within DigitalOcean.

The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.

## Configuration Reference

@@ -25,50 +29,55 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `api_token` (string) - The client TOKEN to use to access your account. It
  can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`,
  if set.

- `image` (string) - The name (or slug) of the base image to use. This is the
  image that will be used to launch a new droplet and provision it. See
  https://developers.digitalocean.com/documentation/v2/#list-all-images for
  details on how to get a list of the accepted image names/slugs.

- `region` (string) - The name (or slug) of the region to launch the
  droplet in. Consequently, this is the region where the snapshot will
  be available. See
  https://developers.digitalocean.com/documentation/v2/#list-all-regions for
  the accepted region names/slugs.

- `size` (string) - The name (or slug) of the droplet size to use. See
  https://developers.digitalocean.com/documentation/v2/#list-all-sizes for
  the accepted size names/slugs.
### Optional:

- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
  sets the hostname of the machine to this value.

- `private_networking` (boolean) - Set to `true` to enable private networking
  for the droplet being created. This defaults to `false`, or not enabled.

- `snapshot_name` (string) - The name of the resulting snapshot that will
  appear in your account. This must be unique. To help make this unique, use a
  function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info); an
  illustrative use is sketched after this list.

- `state_timeout` (string) - The time to wait, as a duration string, for a
  droplet to enter a desired state (such as "active") before timing out. The
  default state timeout is "6m".

- `user_data` (string) - User data to launch with the Droplet.
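For illustration only, the optional keys might be combined as below. The names
and timeout are placeholders, and the required keys (`api_token`, `image`,
`region`, and `size`, shown in the Basic Example below) are omitted:

``` {.javascript}
{
  "type": "digitalocean",
  "droplet_name": "packer-build",
  "private_networking": true,
  "snapshot_name": "base-image-{{timestamp}}",
  "state_timeout": "10m"
}
```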
## Basic Example

Here is a basic example. It is completely valid as soon as you enter your own
access tokens:

``` {.javascript}
{
  "type": "digitalocean",
  "api_token": "YOUR API KEY",
View File
@@ -1,39 +1,40 @@
---
description: |
    The `docker` Packer builder builds Docker images using Docker. The builder
    starts a Docker container, runs provisioners within this container, then exports
    the container for reuse or commits the image.
layout: docs
page_title: Docker Builder
...

# Docker Builder

Type: `docker`

The `docker` Packer builder builds [Docker](http://www.docker.io) images using
Docker. The builder starts a Docker container, runs provisioners within this
container, then exports the container for reuse or commits the image.

Packer builds Docker containers *without* the use of
[Dockerfiles](https://docs.docker.com/reference/builder/). By not using
Dockerfiles, Packer is able to provision containers with portable scripts or
configuration management systems that are not tied to Docker in any way. It also
has a simpler mental model: you provision containers much the same way you
provision a normal virtualized or dedicated server. For more information, read
the section on [Dockerfiles](#toc_8).

The Docker builder must run on a machine that has Docker installed. Therefore
the builder only works on machines that support Docker (modern Linux machines).
If you want to use Packer to build Docker containers on another platform, use
[Vagrant](http://www.vagrantup.com) to start a Linux environment, then run
Packer within that environment.
## Basic Example: Export

Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will effectively repackage an image.

``` {.javascript}
{
  "type": "docker",
  "image": "ubuntu",

@@ -43,11 +44,11 @@ no provisioners are defined, but it will effectively repackage an image.

## Basic Example: Commit

Below is another example, the same as above but instead of exporting the running
container, this one commits the container to an image. The image can then be
more easily tagged, pushed, etc.

``` {.javascript}
{
  "type": "docker",
  "image": "ubuntu",

@@ -55,7 +56,6 @@ can then be more easily tagged, pushed, etc.
}
```
## Configuration Reference

Configuration options are organized below into two categories: required and

@@ -63,47 +63,47 @@ optional. Within each category, the available options are alphabetized and
described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.

### Required:

- `commit` (boolean) - If true, the container will be committed to an image
  rather than exported. This cannot be set if `export_path` is set.

- `export_path` (string) - The path where the final container will be exported
  as a tar file. This cannot be set if `commit` is set to true.

- `image` (string) - The base image for the Docker container that will
  be started. This image will be pulled from the Docker registry if it doesn't
  already exist.
### Optional: ### Optional:
* `login` (boolean) - Defaults to false. If true, the builder will - `login` (boolean) - Defaults to false. If true, the builder will login in
login in order to pull the image. The builder only logs in for the order to pull the image. The builder only logs in for the duration of
duration of the pull. It always logs out afterwards. the pull. It always logs out afterwards.
* `login_email` (string) - The email to use to authenticate to login. - `login_email` (string) - The email to use to authenticate to login.
* `login_username` (string) - The username to use to authenticate to login. - `login_username` (string) - The username to use to authenticate to login.
* `login_password` (string) - The password to use to authenticate to login. - `login_password` (string) - The password to use to authenticate to login.
* `login_server` (string) - The server address to login to. - `login_server` (string) - The server address to login to.
* `pull` (boolean) - If true, the configured image will be pulled using - `pull` (boolean) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already `docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set. exists and can be used. This defaults to true if not set.
* `run_command` (array of strings) - An array of arguments to pass to - `run_command` (array of strings) - An array of arguments to pass to
`docker run` in order to run the container. By default this is set to `docker run` in order to run the container. By default this is set to
`["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. `["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. As you can see, you have a
As you can see, you have a couple template variables to customize, as well. couple template variables to customize, as well.
* `volumes` (map of strings to strings) - A mapping of additional volumes - `volumes` (map of strings to strings) - A mapping of additional volumes to
to mount into this container. The key of the object is the host path, mount into this container. The key of the object is the host path, the value
the value is the container path. is the container path (see the example after this list).
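As an illustration of the optional settings above, here is a hypothetical fragment that overrides `run_command` and mounts an extra volume; the host path, container path, and shell are made-up values:

``` {.javascript}
{
  "type": "docker",
  "image": "ubuntu",
  "export_path": "image.tar",
  "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"],
  "volumes": {
    "/tmp/packer-cache": "/var/cache/build"
  }
}
```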
## Using the Artifact: Export ## Using the Artifact: Export
@ -113,27 +113,26 @@ with the [docker-import](/docs/post-processors/docker-import.html) and
[docker-push](/docs/post-processors/docker-push.html) post-processors. [docker-push](/docs/post-processors/docker-push.html) post-processors.
**Note:** This section is covering how to use an artifact that has been **Note:** This section is covering how to use an artifact that has been
_exported_. More specifically, if you set `export_path` in your configuration. *exported*. More specifically, if you set `export_path` in your configuration.
If you set `commit`, see the next section. If you set `commit`, see the next section.
The example below shows a full configuration that would import and push The example below shows a full configuration that would import and push the
the created image. This is accomplished using a sequence definition (a created image. This is accomplished using a sequence definition (a collection of
collection of post-processors that are treated as a single pipeline, see post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) [Post-Processors](/docs/templates/post-processors.html) for more information):
for more information):
```javascript ``` {.javascript}
{ {
"post-processors": [ "post-processors": [
[ [
{ {
"type": "docker-import", "type": "docker-import",
"repository": "mitchellh/packer", "repository": "mitchellh/packer",
"tag": "0.7" "tag": "0.7"
}, },
"docker-push" "docker-push"
] ]
] ]
} }
``` ```
@ -143,10 +142,10 @@ post-processor which will import the artifact as a docker image. The resulting
docker image is then passed on to the `docker-push` post-processor which handles docker image is then passed on to the `docker-push` post-processor which handles
pushing the image to a container repository. pushing the image to a container repository.
If you want to do this manually, however, perhaps from a script, you can If you want to do this manually, however, perhaps from a script, you can import
import the image using the process below: the image using the process below:
```text ``` {.text}
$ docker import - registry.mydomain.com/mycontainer:latest < artifact.tar $ docker import - registry.mydomain.com/mycontainer:latest < artifact.tar
``` ```
@ -157,23 +156,22 @@ and `docker push`, respectively.
If you committed your container to an image, you probably want to tag, save, If you committed your container to an image, you probably want to tag, save,
push, etc. Packer can do this automatically for you. An example is shown below push, etc. Packer can do this automatically for you. An example is shown below
which tags and pushes an image. This is accomplished using a sequence which tags and pushes an image. This is accomplished using a sequence definition
definition (a collection of post-processors that are treated as a single (a collection of post-processors that are treated as a single pipeline, see
pipeline, see [Post-Processors](/docs/templates/post-processors.html) for more [Post-Processors](/docs/templates/post-processors.html) for more information):
information):
```javascript ``` {.javascript}
{ {
"post-processors": [ "post-processors": [
[ [
{ {
"type": "docker-tag", "type": "docker-tag",
"repository": "mitchellh/packer", "repository": "mitchellh/packer",
"tag": "0.7" "tag": "0.7"
}, },
"docker-push" "docker-push"
] ]
] ]
} }
``` ```
@ -187,52 +185,52 @@ Going a step further, if you wanted to tag and push an image to multiple
container repositories, this could be accomplished by defining two, container repositories, this could be accomplished by defining two,
nearly-identical sequence definitions, as demonstrated by the example below: nearly-identical sequence definitions, as demonstrated by the example below:
```javascript ``` {.javascript}
{ {
"post-processors": [ "post-processors": [
[ [
{ {
"type": "docker-tag", "type": "docker-tag",
"repository": "mitchellh/packer", "repository": "mitchellh/packer",
"tag": "0.7" "tag": "0.7"
}, },
"docker-push" "docker-push"
], ],
[ [
{ {
"type": "docker-tag", "type": "docker-tag",
"repository": "hashicorp/packer", "repository": "hashicorp/packer",
"tag": "0.7" "tag": "0.7"
}, },
"docker-push" "docker-push"
] ]
] ]
} }
``` ```
## Dockerfiles ## Dockerfiles
This builder allows you to build Docker images _without_ Dockerfiles. This builder allows you to build Docker images *without* Dockerfiles.
With this builder, you can repeatably create Docker images without the use of With this builder, you can repeatably create Docker images without the use of a
a Dockerfile. You don't need to know the syntax or semantics of Dockerfiles. Dockerfile. You don't need to know the syntax or semantics of Dockerfiles.
Instead, you can just provide shell scripts, Chef recipes, Puppet manifests, Instead, you can just provide shell scripts, Chef recipes, Puppet manifests,
etc. to provision your Docker container just like you would a regular etc. to provision your Docker container just like you would a regular
virtualized or dedicated machine. virtualized or dedicated machine.
While Docker has many features, Packer views Docker simply as an LXC While Docker has many features, Packer views Docker simply as an LXC container
container runner. To that end, Packer is able to repeatably build these runner. To that end, Packer is able to repeatably build these LXC containers
LXC containers using portable provisioning scripts. using portable provisioning scripts.
Dockerfiles have some additional features that Packer doesn't support, Dockerfiles have some additional features that Packer doesn't support, but these
but these can be worked around. Many of these features will be automated can be worked around. Many of these features will be automated by Packer in
by Packer in the future: the future:
* Dockerfiles will snapshot the container at each step, allowing you to - Dockerfiles will snapshot the container at each step, allowing you to go
go back to any step in the history of building. Packer doesn't do this yet, back to any step in the history of building. Packer doesn't do this yet, but
but inter-step snapshotting is on the way. inter-step snapshotting is on the way.
* Dockerfiles can contain information such as exposed ports, shared - Dockerfiles can contain information such as exposed ports, shared volumes,
volumes, and other metadata. Packer builds a raw Docker container image and other metadata. Packer builds a raw Docker container image that has none
that has none of this metadata. You can pass in much of this metadata of this metadata. You can pass in much of this metadata at runtime with
at runtime with `docker run`. `docker run`.
View File
@ -0,0 +1,151 @@
---
description: |
The `googlecompute` Packer builder is able to create images for use with Google
Compute Engine (GCE) based on existing images. Google Compute Engine doesn't
allow the creation of images from scratch.
layout: docs
page_title: Google Compute Builder
...
# Google Compute Builder
Type: `googlecompute`
The `googlecompute` Packer builder is able to create
[images](https://developers.google.com/compute/docs/images) for use with [Google
Compute Engine](https://cloud.google.com/products/compute-engine)(GCE) based on
existing images. Google Compute Engine doesn't allow the creation of images from
scratch.
## Authentication
Authenticating with Google Cloud services requires at most one JSON file, called
the *account file*. The *account file* is **not** required if you are running
the `googlecompute` Packer builder from a GCE instance with a
properly-configured [Compute Engine Service
Account](https://cloud.google.com/compute/docs/authentication).
### Running With a Compute Engine Service Account
If you run the `googlecompute` Packer builder from a GCE instance, you can
configure that instance to use a [Compute Engine Service
Account](https://cloud.google.com/compute/docs/authentication). This will allow
Packer to authenticate to Google Cloud without having to bake in a separate
credential/authentication file.
To create a GCE instance that uses a service account, provide the required
scopes when launching the instance.
For `gcloud`, do this via the `--scopes` parameter:
``` {.sh}
gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
--scopes "https://www.googleapis.com/auth/compute" \
"https://www.googleapis.com/auth/devstorage.full_control" \
...
```
For the [Google Developers Console](https://console.developers.google.com):
1. Choose "Show advanced options"
2. Tick "Enable Compute Engine service account"
3. Choose "Read Write" for Compute
4. Choose "Full" for "Storage"
**The service account will be used automatically by Packer as long as there is
no *account file* specified in the Packer configuration file.**
### Running Without a Compute Engine Service Account
The [Google Developers Console](https://console.developers.google.com) allows
you to create and download a credential file that will let you use the
`googlecompute` Packer builder anywhere. To make the process more
straightforward, it is documented here.
1. Log into the [Google Developers
Console](https://console.developers.google.com) and select a project.
2. Under the "APIs & Auth" section, click "Credentials."
3. Click the "Create new Client ID" button, select "Service account", and click
"Create Client ID"
4. Click "Generate new JSON key" for the Service Account you just created. A
JSON file will be downloaded automatically. This is your *account file*.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will effectively repackage an existing GCE
image. The account file is obtained in the previous section.
``` {.javascript}
{
"type": "googlecompute",
"account_file": "account.json",
"project_id": "my-project",
"source_image": "debian-7-wheezy-v20150127",
"zone": "us-central1-a"
}
```
## Configuration Reference
Configuration options are organized below into two categories: required and
optional. Within each category, the available options are alphabetized and
described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `project_id` (string) - The project ID that will be used to launch instances
and store images.
- `source_image` (string) - The source image to use to create the new
image from. Example: `"debian-7-wheezy-v20150127"`
- `zone` (string) - The zone in which to launch the instance used to create
the image. Example: `"us-central1-a"`
### Optional:
- `account_file` (string) - The JSON file containing your account credentials.
Not required if you run Packer on a GCE instance with a service account.
Instructions for creating this file or for using service accounts are above.
- `disk_size` (integer) - The size of the disk in GB. This defaults to `10`,
which is 10GB.
- `image_name` (string) - The unique name of the resulting image. Defaults to
`"packer-{{timestamp}}"`.
- `image_description` (string) - The description of the resulting image.
- `instance_name` (string) - A name to give the launched instance. Beware that
this must be unique. Defaults to `"packer-{{uuid}}"`.
- `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`.
- `metadata` (object of key/value strings) - Metadata key/value pairs to apply
  to the launched instance.
- `network` (string) - The Google Compute network to use for the
launched instance. Defaults to `"default"`.
- `state_timeout` (string) - The time to wait for instance state changes.
Defaults to `"5m"`.
- `tags` (array of strings) - Tags to apply to the launched instance.
- `use_internal_ip` (boolean) - If true, use the instance's internal IP
instead of its external IP during building.
## Gotchas
CentOS images have root SSH access disabled by default. Set `ssh_username` to
any user; Packer will create that user with sudo access.
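For instance, a hypothetical CentOS template might look like the following; the source image name and username are placeholders, not recommendations:

``` {.javascript}
{
  "type": "googlecompute",
  "account_file": "account.json",
  "project_id": "my-project",
  "source_image": "centos-6-v20150710",
  "zone": "us-central1-a",
  "ssh_username": "packer"
}
```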
The machine type must have a scratch disk, which means you can't use an
`f1-micro` or `g1-small` to build images.
View File
@ -1,136 +0,0 @@
---
layout: "docs"
page_title: "Google Compute Builder"
description: |-
The `googlecompute` Packer builder is able to create images for use with Google Compute Engine (GCE) based on existing images. Google Compute Engine doesn't allow the creation of images from scratch.
---
# Google Compute Builder
Type: `googlecompute`
The `googlecompute` Packer builder is able to create [images](https://developers.google.com/compute/docs/images) for use with
[Google Compute Engine](https://cloud.google.com/products/compute-engine)(GCE) based on existing images. Google
Compute Engine doesn't allow the creation of images from scratch.
## Authentication
Authenticating with Google Cloud services requires at most one JSON file,
called the _account file_. The _account file_ is **not** required if you are running
the `googlecompute` Packer builder from a GCE instance with a properly-configured
[Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication).
### Running With a Compute Engine Service Account
If you run the `googlecompute` Packer builder from a GCE instance, you can configure that
instance to use a [Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication). This will allow Packer to authenticate
to Google Cloud without having to bake in a separate credential/authentication file.
To create a GCE instance that uses a service account, provide the required scopes when
launching the instance.
For `gcloud`, do this via the `--scopes` parameter:
```sh
gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
--scopes "https://www.googleapis.com/auth/compute" \
"https://www.googleapis.com/auth/devstorage.full_control" \
...
```
For the [Google Developers Console](https://console.developers.google.com):
1. Choose "Show advanced options"
2. Tick "Enable Compute Engine service account"
3. Choose "Read Write" for Compute
4. Choose "Full" for "Storage"
**The service account will be used automatically by Packer as long as there is
no _account file_ specified in the Packer configuration file.**
### Running Without a Compute Engine Service Account
The [Google Developers Console](https://console.developers.google.com) allows you to
create and download a credential file that will let you use the `googlecompute` Packer
builder anywhere. To make
the process more straightforward, it is documented here.
1. Log into the [Google Developers Console](https://console.developers.google.com)
and select a project.
2. Under the "APIs & Auth" section, click "Credentials."
3. Click the "Create new Client ID" button, select "Service account", and click "Create Client ID"
4. Click "Generate new JSON key" for the Service Account you just created. A JSON file will be downloaded automatically. This is your
_account file_.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful,
since no provisioners are defined, but it will effectively repackage an
existing GCE image. The account file is obtained in the previous section.
```javascript
{
"type": "googlecompute",
"account_file": "account.json",
"project_id": "my-project",
"source_image": "debian-7-wheezy-v20150127",
"zone": "us-central1-a"
}
```
## Configuration Reference
Configuration options are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
### Required:
* `project_id` (string) - The project ID that will be used to launch instances
and store images.
* `source_image` (string) - The source image to use to create the new image
from. Example: `"debian-7-wheezy-v20150127"`
* `zone` (string) - The zone in which to launch the instance used to create
the image. Example: `"us-central1-a"`
### Optional:
* `account_file` (string) - The JSON file containing your account credentials.
Not required if you run Packer on a GCE instance with a service account.
Instructions for creating file or using service accounts are above.
* `disk_size` (integer) - The size of the disk in GB.
This defaults to `10`, which is 10GB.
* `image_name` (string) - The unique name of the resulting image.
Defaults to `"packer-{{timestamp}}"`.
* `image_description` (string) - The description of the resulting image.
* `instance_name` (string) - A name to give the launched instance. Beware
that this must be unique. Defaults to `"packer-{{uuid}}"`.
* `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`.
* `metadata` (object of key/value strings)
* `network` (string) - The Google Compute network to use for the launched
instance. Defaults to `"default"`.
* `state_timeout` (string) - The time to wait for instance state changes.
Defaults to `"5m"`.
* `tags` (array of strings)
## Gotchas
Centos images have root ssh access disabled by default. Set `ssh_username` to any user, which will be created by packer with sudo access.
The machine type must have a scratch disk, which means you can't use an `f1-micro` or `g1-small` to build images.
View File
@ -1,24 +1,28 @@
--- ---
layout: "docs" description: |
page_title: "Null Builder" The `null` Packer builder is not really a builder, it just sets up an SSH
description: |- connection and runs the provisioners. It can be used to debug provisioners
The `null` Packer builder is not really a builder, it just sets up an SSH connection and runs the provisioners. It can be used to debug provisioners without incurring high wait times. It does not create any kind of image or artifact. without incurring high wait times. It does not create any kind of image or
--- artifact.
layout: docs
page_title: Null Builder
...
# Null Builder # Null Builder
Type: `null` Type: `null`
The `null` Packer builder is not really a builder, it just sets up an SSH connection The `null` Packer builder is not really a builder, it just sets up an SSH
and runs the provisioners. It can be used to debug provisioners without connection and runs the provisioners. It can be used to debug provisioners
incurring high wait times. It does not create any kind of image or artifact. without incurring high wait times. It does not create any kind of image or
artifact.
## Basic Example ## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since Below is a fully functioning example. It doesn't do anything useful, since no
no provisioners are defined, but it will connect to the specified host via ssh. provisioners are defined, but it will connect to the specified host via ssh.
```javascript ``` {.javascript}
{ {
"type": "null", "type": "null",
"ssh_host": "127.0.0.1", "ssh_host": "127.0.0.1",
@ -31,4 +35,3 @@ no provisioners are defined, but it will connect to the specified host via ssh.
The null builder has no configuration parameters other than the The null builder has no configuration parameters other than the
[communicator](/docs/templates/communicator.html) settings. [communicator](/docs/templates/communicator.html) settings.
View File
@ -1,25 +1,30 @@
--- ---
layout: "docs" description: |
page_title: "OpenStack Builder" The `openstack` Packer builder is able to create new images for use with
description: |- OpenStack. The builder takes a source image, runs any provisioning necessary on
The `openstack` Packer builder is able to create new images for use with OpenStack. The builder takes a source image, runs any provisioning necessary on the image after launching it, then creates a new reusable image. This reusable image can then be used as the foundation of new servers that are launched within OpenStack. The builder will create temporary keypairs that provide temporary access to the server while the image is being created. This simplifies configuration quite a bit. the image after launching it, then creates a new reusable image. This reusable
--- image can then be used as the foundation of new servers that are launched within
OpenStack. The builder will create temporary keypairs that provide temporary
access to the server while the image is being created. This simplifies
configuration quite a bit.
layout: docs
page_title: OpenStack Builder
...
# OpenStack Builder # OpenStack Builder
Type: `openstack` Type: `openstack`
The `openstack` Packer builder is able to create new images for use with The `openstack` Packer builder is able to create new images for use with
[OpenStack](http://www.openstack.org). The builder takes a source [OpenStack](http://www.openstack.org). The builder takes a source image, runs
image, runs any provisioning necessary on the image after launching it, any provisioning necessary on the image after launching it, then creates a new
then creates a new reusable image. This reusable image can then be reusable image. This reusable image can then be used as the foundation of new
used as the foundation of new servers that are launched within OpenStack. servers that are launched within OpenStack. The builder will create temporary
The builder will create temporary keypairs that provide temporary access to keypairs that provide temporary access to the server while the image is being
the server while the image is being created. This simplifies configuration created. This simplifies configuration quite a bit.
quite a bit.
The builder does _not_ manage images. Once it creates an image, it is up to The builder does *not* manage images. Once it creates an image, it is up to you
you to use it or delete it. to use it or delete it.
## Configuration Reference ## Configuration Reference
@ -28,81 +33,82 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized. each category, the available configuration keys are alphabetized.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `flavor` (string) - The ID, name, or full URL for the desired flavor for the - `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created. server to be created.
* `image_name` (string) - The name of the resulting image. - `image_name` (string) - The name of the resulting image.
* `source_image` (string) - The ID or full URL to the base image to use. - `source_image` (string) - The ID or full URL to the base image to use. This
This is the image that will be used to launch a new server and provision it. is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly. have `cloud-init` installed so that the keypair gets assigned properly.
* `username` (string) - The username used to connect to the OpenStack service. - `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable If not specified, Packer will use the environment variable `OS_USERNAME`,
`OS_USERNAME`, if set. if set.
* `password` (string) - The password used to connect to the OpenStack service. - `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variable If not specified, Packer will use the environment variable `OS_PASSWORD`,
`OS_PASSWORD`, if set. if set.
### Optional: ### Optional:
* `api_key` (string) - The API key used to access OpenStack. Some OpenStack - `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this. installations require this.
* `availability_zone` (string) - The availability zone to launch the - `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters. cluster will be used. This may be required for some OpenStack clusters.
* `floating_ip` (string) - A specific floating IP to assign to this instance. - `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect. `use_floating_ip` must also be set to true for this to have an effect.
* `floating_ip_pool` (string) - The name of the floating IP pool to use - `floating_ip_pool` (string) - The name of the floating IP pool to use to
to allocate a floating IP. `use_floating_ip` must also be set to true allocate a floating IP. `use_floating_ip` must also be set to true for this
for this to have an effect. to have an effect.
* `insecure` (boolean) - Whether or not the connection to OpenStack can be done - `insecure` (boolean) - Whether or not the connection to OpenStack can be
over an insecure connection. By default this is false. done over an insecure connection. By default this is false.
* `networks` (array of strings) - A list of networks by UUID to attach - `networks` (array of strings) - A list of networks by UUID to attach to
to this instance. this instance.
* `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the - `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this. instance into. Some OpenStack installations require this. If not specified,
If not specified, Packer will use the environment variable Packer will use the environment variable `OS_TENANT_NAME`, if set.
`OS_TENANT_NAME`, if set.
* `security_groups` (array of strings) - A list of security groups by name - `security_groups` (array of strings) - A list of security groups by name to
to add to this instance. add to this instance.
* `region` (string) - The name of the region, such as "DFW", in which - `region` (string) - The name of the region, such as "DFW", in which to
to launch the server to create the AMI. launch the server to create the AMI. If not specified, Packer will use the
If not specified, Packer will use the environment variable environment variable `OS_REGION_NAME`, if set.
`OS_REGION_NAME`, if set.
* `ssh_interface` (string) - The type of interface to connect via SSH. Values - `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API. to connect via whichever is returned first from the OpenStack API.
* `use_floating_ip` (boolean) - Whether or not to use a floating IP for - `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false. the instance. Defaults to false.
* `rackconnect_wait` (boolean) - For Rackspace, whether or not to wait for - `rackconnect_wait` (boolean) - For Rackspace, whether or not to wait for
Rackconnect to assign the machine an IP address before connecting via SSH. Rackconnect to assign the machine an IP address before connecting via SSH.
Defaults to false. Defaults to false.
- `metadata` (object of key/value strings) - Glance metadata that will be
applied to the image (see the fragment after this list).
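A hypothetical fragment using the `metadata` option is shown below; the keys and values are placeholders:

``` {.javascript}
{
  "type": "openstack",
  "image_name": "my-image",
  "metadata": {
    "os_distro": "ubuntu",
    "built_by": "packer"
  }
}
```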
## Basic Example: Rackspace public cloud ## Basic Example: Rackspace public cloud
Here is a basic example. This is a working example to build an Here is a basic example. This is a working example to build an Ubuntu 12.04 LTS
Ubuntu 12.04 LTS (Precise Pangolin) on the Rackspace OpenStack cloud offering. (Precise Pangolin) on the Rackspace OpenStack cloud offering.
```javascript ``` {.javascript}
{ {
"type": "openstack", "type": "openstack",
"username": "foo", "username": "foo",
@ -117,10 +123,10 @@ Ubuntu 12.04 LTS (Precise Pangolin) on Rackspace OpenStack cloud offering.
## Basic Example: Private OpenStack cloud ## Basic Example: Private OpenStack cloud
This example builds an Ubuntu 14.04 image on a private OpenStack cloud, This example builds an Ubuntu 14.04 image on a private OpenStack cloud, powered
powered by Metacloud. by Metacloud.
```javascript ``` {.javascript}
{ {
"type": "openstack", "type": "openstack",
"ssh_username": "root", "ssh_username": "root",
@ -130,12 +136,12 @@ powered by Metacloud.
} }
``` ```
In this case, the connection information for connecting to OpenStack In this case, the connection information for connecting to OpenStack doesn't
doesn't appear in the template. That is because I source a standard appear in the template. That is because I source a standard OpenStack script
OpenStack script with environment variables set before I run this. This with environment variables set before I run this. This script is setting
script is setting environment variables like: environment variables like:
* `OS_AUTH_URL` - `OS_AUTH_URL`
* `OS_TENANT_ID` - `OS_TENANT_ID`
* `OS_USERNAME` - `OS_USERNAME`
* `OS_PASSWORD` - `OS_PASSWORD`
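If you prefer not to rely on the environment, most of the same values can be set directly in the template with the `username`, `password`, `tenant_name`, and `region` options documented above. A hypothetical sketch, with placeholder credentials:

``` {.javascript}
{
  "type": "openstack",
  "username": "my-user",
  "password": "my-password",
  "tenant_name": "my-tenant",
  "region": "RegionOne"
}
```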
View File
@ -1,31 +1,31 @@
--- ---
layout: "docs" description: |
page_title: "Parallels Builder (from an ISO)" The Parallels Packer builder is able to create Parallels Desktop for Mac virtual
description: |- machines and export them in the PVM format, starting from an ISO image.
The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an ISO image. layout: docs
--- page_title: 'Parallels Builder (from an ISO)'
...
# Parallels Builder (from an ISO) # Parallels Builder (from an ISO)
Type: `parallels-iso` Type: `parallels-iso`
The Parallels Packer builder is able to create The Parallels Packer builder is able to create [Parallels Desktop for
[Parallels Desktop for Mac](http://www.parallels.com/products/desktop/) virtual Mac](http://www.parallels.com/products/desktop/) virtual machines and export
machines and export them in the PVM format, starting from an them in the PVM format, starting from an ISO image.
ISO image.
The builder builds a virtual machine by creating a new virtual machine The builder builds a virtual machine by creating a new virtual machine from
from scratch, booting it, installing an OS, provisioning software within scratch, booting it, installing an OS, provisioning software within the OS, then
the OS, then shutting it down. The result of the Parallels builder is a directory shutting it down. The result of the Parallels builder is a directory containing
containing all the files necessary to run the virtual machine portably. all the files necessary to run the virtual machine portably.
## Basic Example ## Basic Example
Here is a basic example. This example is not functional. It will start the Here is a basic example. This example is not functional. It will start the OS
OS installer but then fail because we don't provide the preseed file for installer but then fail because we don't provide the preseed file for Ubuntu to
Ubuntu to self-install. Still, the example serves to show the basic configuration: self-install. Still, the example serves to show the basic configuration:
```javascript ``` {.javascript}
{ {
"type": "parallels-iso", "type": "parallels-iso",
"guest_os_type": "ubuntu", "guest_os_type": "ubuntu",
@ -40,219 +40,222 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio
} }
``` ```
It is important to add a `shutdown_command`. By default Packer halts the It is important to add a `shutdown_command`. By default Packer halts the virtual
virtual machine and the file system may not be sync'd. Thus, changes made in a machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved. provisioner might not be saved.
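For example, on an Ubuntu guest whose SSH user is `packer`, a graceful shutdown might be configured as follows (a sketch; the exact command depends on your guest OS and user):

``` {.javascript}
{
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```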
## Configuration Reference ## Configuration Reference
There are many configuration options available for the Parallels builder. There are many configuration options available for the Parallels builder. They
They are organized below into two categories: required and optional. Within are organized below into two categories: required and optional. Within each
each category, the available options are alphabetized and described. category, the available options are alphabetized and described.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO - `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior files are so large, this is required and Packer will verify it prior to
to booting a virtual machine with the ISO attached. The type of the booting a virtual machine with the ISO attached. The type of the checksum is
checksum is specified with `iso_checksum_type`, documented below. specified with `iso_checksum_type`, documented below.
* `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not "sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen recommended since ISO files are generally large and corruption does happen
from time to time. from time to time.
* `iso_url` (string) - A URL to the ISO containing the installation image. - `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file). This URL can be either an HTTP URL or a file URL (or path to a file). If
If this is an HTTP URL, Packer will download it and cache it between this is an HTTP URL, Packer will download it and cache it between runs.
runs.
* `ssh_username` (string) - The username to use to SSH into the machine - `ssh_username` (string) - The username to use to SSH into the machine once
once the OS is installed. the OS is installed.
* `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to - `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other". install into the VM. Valid values are "win", "lin", "mac", "os2"
This can be omitted only if `parallels_tools_mode` is "disable". and "other". This can be omitted only if `parallels_tools_mode`
is "disable".
### Optional: ### Optional:
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `disk_size` (integer) - The size, in megabytes, of the hard disk to create - `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB). for the VM. By default, this is 40000 (about 40 GB).
* `floppy_files` (array of strings) - A list of files to place onto a floppy - `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful disk that is attached when the VM is booted. This is most useful for
for unattended Windows installs, which look for an `Autounattend.xml` file unattended Windows installs, which look for an `Autounattend.xml` file on
on removable media. By default, no floppy will be attached. All files removable media. By default, no floppy will be attached. All files listed in
listed in this setting get placed into the root directory of the floppy this setting get placed into the root directory of the floppy and the floppy
and the floppy is attached as the first floppy device. Currently, no is attached as the first floppy device. Currently, no support exists for
support exists for creating sub-directories on the floppy. Wildcard creating sub-directories on the floppy. Wildcard characters (\*, ?,
characters (*, ?, and []) are allowed. Directory names are also allowed, and \[\]) are allowed. Directory names are also allowed, which will add all
which will add all the files found in the directory to the floppy. the files found in the directory to the floppy.
* `guest_os_type` (string) - The guest OS type being installed. By default - `guest_os_type` (string) - The guest OS type being installed. By default
this is "other", but you can get _dramatic_ performance improvements by this is "other", but you can get *dramatic* performance improvements by
setting this to the proper value. To view all available values for this setting this to the proper value. To view all available values for this run
run `prlctl create x --distribution list`. Setting the correct value hints to `prlctl create x --distribution list`. Setting the correct value hints to
Parallels Desktop how to optimize the virtual hardware to work best with Parallels Desktop how to optimize the virtual hardware to work best with
that operating system. that operating system.
* `hard_drive_interface` (string) - The type of controller that the - `hard_drive_interface` (string) - The type of controller that the hard
hard drives are attached to, defaults to "sata". Valid options are drives are attached to, defaults to "sata". Valid options are "sata", "ide",
"sata", "ide", and "scsi". and "scsi".
* `host_interfaces` (array of strings) - A list of which interfaces on the - `host_interfaces` (array of strings) - A list of which interfaces on the
host should be searched for an IP address. The first IP address found on host should be searched for an IP address. The first IP address found on one
one of these will be used as `{{ .HTTPIP }}` in the `boot_command`. of these will be used as `{{ .HTTPIP }}` in the `boot_command`. Defaults to
Defaults to ["en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7", "en8", \["en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7", "en8", "en9",
"en9", "ppp0", "ppp1", "ppp2"]. "ppp0", "ppp1", "ppp2"\].
* `http_directory` (string) - Path to a directory to serve using an HTTP - `http_directory` (string) - Path to a directory to serve using an
server. The files in this directory will be available over HTTP that will HTTP server. The files in this directory will be available over HTTP that
be requestable from the virtual machine. This is useful for hosting will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP kickstart files and so on. By default this is "", which means no HTTP server
server will be started. The address and port of the HTTP server will be will be started. The address and port of the HTTP server will be available
available as variables in `boot_command`. This is covered in more detail as variables in `boot_command`. This is covered in more detail below.
below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and - `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`. maximum port to use for the HTTP server started to serve the
Because Packer often runs in parallel, Packer will choose a randomly available `http_directory`. Because Packer often runs in parallel, Packer will choose
port in this range to run the HTTP server. If you want to force the HTTP a randomly available port in this range to run the HTTP server. If you want
server to be on one port, make this minimum and maximum port the same. to force the HTTP server to be on one port, make this minimum and maximum
By default the values are 8000 and 9000, respectively. port the same. By default the values are 8000 and 9000, respectively.
* `iso_urls` (array of strings) - Multiple URLs for the ISO to download. - `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to download Packer will try these in order. If anything goes wrong attempting to
or while downloading a single URL, it will move on to the next. All URLs download or while downloading a single URL, it will move on to the next. All
must point to the same file (same checksum). By default this is empty URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
* `output_directory` (string) - This is the path to the directory where the - `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute. resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer` If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder. is executed. This directory must not exist or be empty prior to running
By default this is "output-BUILDNAME" where "BUILDNAME" is the name the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
of the build. name of the build.
* `parallels_tools_guest_path` (string) - The path in the virtual machine to upload - `parallels_tools_guest_path` (string) - The path in the virtual machine to
Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload". upload Parallels Tools. This only takes effect if `parallels_tools_mode`
This is a [configuration template](/docs/templates/configuration-templates.html) is "upload". This is a [configuration
that has a single valid variable: `Flavor`, which will be the value of template](/docs/templates/configuration-templates.html) that has a single
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which valid variable: `Flavor`, which will be the value of
should upload into the login directory of the user. `parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
which should upload into the login directory of the user.
* `parallels_tools_mode` (string) - The method by which Parallels Tools are made - `parallels_tools_mode` (string) - The method by which Parallels Tools are
available to the guest for installation. Valid options are "upload", "attach", made available to the guest for installation. Valid options are "upload",
or "disable". If the mode is "attach" the Parallels Tools ISO will be attached "attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
as a CD device to the virtual machine. If the mode is "upload" the Parallels be attached as a CD device to the virtual machine. If the mode is "upload"
Tools ISO will be uploaded to the path specified by the Parallels Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload". `parallels_tools_guest_path`. The default value is "upload".
* `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in - `prlctl` (array of array of strings) - Custom `prlctl` commands to execute
order to further customize the virtual machine being created. The value of in order to further customize the virtual machine being created. The value
this is an array of commands to execute. The commands are executed in the order of this is an array of commands to execute. The commands are executed in the
defined in the template. For each command, the command is defined itself as an order defined in the template. For each command, the command is defined
array of strings, where each string represents a single argument on the itself as an array of strings, where each string represents a single
command-line to `prlctl` (but excluding `prlctl` itself). Each arg is treated argument on the command-line to `prlctl` (but excluding `prlctl` itself).
as a [configuration template](/docs/templates/configuration-templates.html), Each arg is treated as a [configuration
where the `Name` variable is replaced with the VM name. More details on how template](/docs/templates/configuration-templates.html), where the `Name`
to use `prlctl` are below. variable is replaced with the VM name. More details on how to use `prlctl`
are below.
* `prlctl_post` (array of array of strings) - Identical to `prlctl`, - `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
except that it is run after the virtual machine is shutdown, and before the that it is run after the virtual machine is shutdown, and before the virtual
virtual machine is exported. machine is exported.
* `prlctl_version_file` (string) - The path within the virtual machine to upload - `prlctl_version_file` (string) - The path within the virtual machine to
a file that contains the `prlctl` version that was used to create the machine. upload a file that contains the `prlctl` version that was used to create
This information can be useful for provisioning. By default this is the machine. This information can be useful for provisioning. By default
".prlctl_version", which will generally upload it into the home directory. this is ".prlctl\_version", which will generally upload it into the
home directory.
* `shutdown_command` (string) - The command to use to gracefully shut down - `shutdown_command` (string) - The command to use to gracefully shut down the
the machine once all the provisioning is done. By default this is an empty machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine. string, which tells Packer to just forcefully shut down the machine.
* `shutdown_timeout` (string) - The amount of time to wait after executing - `shutdown_timeout` (string) - The amount of time to wait after executing the
the `shutdown_command` for the virtual machine to actually shut down. `shutdown_command` for the virtual machine to actually shut down. If it
If it doesn't shut down in this time, it is an error. By default, the timeout doesn't shut down in this time, it is an error. By default, the timeout is
is "5m", or five minutes. "5m", or five minutes.
* `vm_name` (string) - This is the name of the PVM directory for the new - `vm_name` (string) - This is the name of the PVM directory for the new
virtual machine, without the file extension. By default this is virtual machine, without the file extension. By default this is
"packer-BUILDNAME", where "BUILDNAME" is the name of the build. "packer-BUILDNAME", where "BUILDNAME" is the name of the build.
## Boot Command ## Boot Command
The `boot_command` configuration is very important: it specifies the keys The `boot_command` configuration is very important: it specifies the keys to
to type when the virtual machine is first booted in order to start the type when the virtual machine is first booted in order to start the OS
OS installer. This command is typed after `boot_wait`, which gives the installer. This command is typed after `boot_wait`, which gives the virtual
virtual machine some time to actually load the ISO. machine some time to actually load the ISO.
As documented above, the `boot_command` is an array of strings. The As documented above, the `boot_command` is an array of strings. The strings are
strings are all typed in sequence. It is an array only to improve readability all typed in sequence. It is an array only to improve readability within the
within the template. template.
The boot command is "typed" character for character (using the Parallels The boot command is "typed" character for character (using the Parallels
Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html)) Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html))
simulating a human actually typing on the keyboard. There is a set of special simulating a human actually typing on the keyboard. There is a set of special keys
keys available. If these are in your boot command, they will be replaced by available. If these are in your boot command, they will be replaced by the
the proper key: proper key:
* `<bs>` - Backspace - `<bs>` - Backspace
* `<del>` - Delete - `<del>` - Delete
* `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. - `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
* `<esc>` - Simulates pressing the escape key. - `<esc>` - Simulates pressing the escape key.
* `<tab>` - Simulates pressing the tab key. - `<tab>` - Simulates pressing the tab key.
* `<f1>` - `<f12>` - Simulates pressing a function key. - `<f1>` - `<f12>` - Simulates pressing a function key.
* `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. - `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
* `<spacebar>` - Simulates pressing the spacebar. - `<spacebar>` - Simulates pressing the spacebar.
* `<insert>` - Simulates pressing the insert key. - `<insert>` - Simulates pressing the insert key.
* `<home>` `<end>` - Simulates pressing the home and end keys. - `<home>` `<end>` - Simulates pressing the home and end keys.
* `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. - `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
* `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This - `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
is useful if you have to generally wait for the UI to update before typing more. sending any additional keys. This is useful if you have to generally wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). [configuration template](/docs/templates/configuration-templates.html). The
The available variables are: available variables are:
* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
that is started serving the directory specified by the `http_directory` that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will configuration parameter. If `http_directory` isn't specified, these will be
be blank! blank!
Example boot command. This is actually a working boot command used to start Example boot command. This is actually a working boot command used to start an
an Ubuntu 12.04 installer: Ubuntu 12.04 installer:
```text ``` {.text}
[ [
"<esc><esc><enter><wait>", "<esc><esc><enter><wait>",
"/install/vmlinuz noapic ", "/install/vmlinuz noapic ",
@ -267,17 +270,18 @@ an Ubuntu 12.04 installer:
``` ```
## prlctl Commands ## prlctl Commands
In order to perform extra customization of the virtual machine, a template can In order to perform extra customization of the virtual machine, a template can
define extra calls to `prlctl` to perform. define extra calls to `prlctl` to perform.
[prlctl](http://download.parallels.com/desktop/v9/ga/docs/en_US/Parallels%20Command%20Line%20Reference%20Guide.pdf) [prlctl](http://download.parallels.com/desktop/v9/ga/docs/en_US/Parallels%20Command%20Line%20Reference%20Guide.pdf)
is the command-line interface to Parallels Desktop. It can be used to configure is the command-line interface to Parallels Desktop. It can be used to configure
the virtual machine, such as set RAM, CPUs, etc. the virtual machine, such as set RAM, CPUs, etc.
Extra `prlctl` commands are defined in the template in the `prlctl` section. Extra `prlctl` commands are defined in the template in the `prlctl` section. An
An example is shown below that sets the memory and number of CPUs within the example is shown below that sets the memory and number of CPUs within the
virtual machine: virtual machine:
```javascript ``` {.javascript}
{ {
"prlctl": [ "prlctl": [
["set", "{{.Name}}", "--memsize", "1024"], ["set", "{{.Name}}", "--memsize", "1024"],
@ -291,7 +295,7 @@ executed in the order defined. So in the above example, the memory will be set
followed by the CPUs. followed by the CPUs.
Each command itself is an array of strings, where each string is an argument to Each command itself is an array of strings, where each string is an argument to
`prlctl`. Each argument is treated as a `prlctl`. Each argument is treated as a [configuration
[configuration template](/docs/templates/configuration-templates.html). The only template](/docs/templates/configuration-templates.html). The only available
available variable is `Name` which is replaced with the unique name of the VM, variable is `Name` which is replaced with the unique name of the VM, which is
which is required for many `prlctl` calls. required for many `prlctl` calls.
View File
@ -1,30 +1,31 @@
--- ---
layout: "docs" description: |
page_title: "Parallels Builder (from a PVM)" This Parallels builder is able to create Parallels Desktop for Mac virtual
description: |- machines and export them in the PVM format, starting from an existing PVM
This Parallels builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an existing PVM (exported virtual machine image). (exported virtual machine image).
--- layout: docs
page_title: 'Parallels Builder (from a PVM)'
...
# Parallels Builder (from a PVM) # Parallels Builder (from a PVM)
Type: `parallels-pvm` Type: `parallels-pvm`
This Parallels builder is able to create This Parallels builder is able to create [Parallels Desktop for
[Parallels Desktop for Mac](http://www.parallels.com/products/desktop/) Mac](http://www.parallels.com/products/desktop/) virtual machines and export
virtual machines and export them in the PVM format, starting from an them in the PVM format, starting from an existing PVM (exported virtual machine
existing PVM (exported virtual machine image). image).
The builder builds a virtual machine by importing an existing PVM The builder builds a virtual machine by importing an existing PVM file. It then
file. It then boots this image, runs provisioners on this new VM, and boots this image, runs provisioners on this new VM, and exports that VM to
exports that VM to create the image. The imported machine is deleted prior create the image. The imported machine is deleted prior to finishing the build.
to finishing the build.
## Basic Example ## Basic Example
Here is a basic example. This example is functional if you have a PVM matching Here is a basic example. This example is functional if you have a PVM matching
the settings here. the settings here.
```javascript ``` {.javascript}
{ {
"type": "parallels-pvm", "type": "parallels-pvm",
"parallels_tools_flavor": "lin", "parallels_tools_flavor": "lin",
@ -36,175 +37,183 @@ the settings here.
} }
``` ```
It is important to add a `shutdown_command`. By default Packer halts the virtual
machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
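For example, a common graceful-shutdown setting for a Linux guest looks
something like the following sketch (the `packer` password is only an
illustration; match it to your own `ssh_password`):

``` {.javascript}
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
```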
## Configuration Reference

There are many configuration options available for the Parallels builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

-   `source_path` (string) - The path to a PVM directory that acts as the source
    of this build.

-   `ssh_username` (string) - The username to use to SSH into the machine once
    the OS is installed.

-   `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
    install into the VM. Valid values are "win", "lin", "mac", "os2"
    and "other". This can be omitted only if `parallels_tools_mode`
    is "disable".
### Optional:
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `floppy_files` (array of strings) - A list of files to put onto a floppy - `floppy_files` (array of strings) - A list of files to put onto a floppy
disk that is attached when the VM is booted for the first time. This is disk that is attached when the VM is booted for the first time. This is most
most useful for unattended Windows installs, which look for an useful for unattended Windows installs, which look for an `Autounattend.xml`
`Autounattend.xml` file on removable media. By default no floppy will file on removable media. By default no floppy will be attached. The files
be attached. The files listed in this configuration will all be put listed in this configuration will all be put into the root directory of the
into the root directory of the floppy disk; sub-directories are not supported. floppy disk; sub-directories are not supported.
-   `reassign_mac` (boolean) - If this is "false" the MAC address of the first
    NIC will be reused when imported; otherwise a new MAC address will be
    generated by Parallels. Defaults to "false".
* `output_directory` (string) - This is the path to the directory where the - `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute. resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer` If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder. is executed. This directory must not exist or be empty prior to running
By default this is "output-BUILDNAME" where "BUILDNAME" is the name the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
of the build. name of the build.
* `parallels_tools_guest_path` (string) - The path in the VM to upload - `parallels_tools_guest_path` (string) - The path in the VM to upload
Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload". Parallels Tools. This only takes effect if `parallels_tools_mode`
This is a [configuration template](/docs/templates/configuration-templates.html) is "upload". This is a [configuration
that has a single valid variable: `Flavor`, which will be the value of template](/docs/templates/configuration-templates.html) that has a single
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which valid variable: `Flavor`, which will be the value of
should upload into the login directory of the user. `parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
which should upload into the login directory of the user.
* `parallels_tools_mode` (string) - The method by which Parallels Tools are made - `parallels_tools_mode` (string) - The method by which Parallels Tools are
available to the guest for installation. Valid options are "upload", "attach", made available to the guest for installation. Valid options are "upload",
or "disable". If the mode is "attach" the Parallels Tools ISO will be attached "attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
as a CD device to the virtual machine. If the mode is "upload" the Parallels be attached as a CD device to the virtual machine. If the mode is "upload"
Tools ISO will be uploaded to the path specified by the Parallels Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload". `parallels_tools_guest_path`. The default value is "upload".
* `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in - `prlctl` (array of array of strings) - Custom `prlctl` commands to execute
order to further customize the virtual machine being created. The value of in order to further customize the virtual machine being created. The value
this is an array of commands to execute. The commands are executed in the order of this is an array of commands to execute. The commands are executed in the
defined in the template. For each command, the command is defined itself as an order defined in the template. For each command, the command is defined
array of strings, where each string represents a single argument on the itself as an array of strings, where each string represents a single
command-line to `prlctl` (but excluding `prlctl` itself). Each arg is treated argument on the command-line to `prlctl` (but excluding `prlctl` itself).
as a [configuration template](/docs/templates/configuration-templates.html), Each arg is treated as a [configuration
where the `Name` variable is replaced with the VM name. More details on how template](/docs/templates/configuration-templates.html), where the `Name`
to use `prlctl` are below. variable is replaced with the VM name. More details on how to use `prlctl`
are below.
* `prlctl_post` (array of array of strings) - Identical to `prlctl`, - `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
except that it is run after the virtual machine is shutdown, and before the that it is run after the virtual machine is shutdown, and before the virtual
virtual machine is exported. machine is exported.
* `prlctl_version_file` (string) - The path within the virtual machine to upload - `prlctl_version_file` (string) - The path within the virtual machine to
a file that contains the `prlctl` version that was used to create the machine. upload a file that contains the `prlctl` version that was used to create
This information can be useful for provisioning. By default this is the machine. This information can be useful for provisioning. By default
".prlctl_version", which will generally upload it into the home directory. this is ".prlctl\_version", which will generally upload it into the
home directory.
* `shutdown_command` (string) - The command to use to gracefully shut down - `shutdown_command` (string) - The command to use to gracefully shut down the
the machine once all the provisioning is done. By default this is an empty machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine. string, which tells Packer to just forcefully shut down the machine.
* `shutdown_timeout` (string) - The amount of time to wait after executing - `shutdown_timeout` (string) - The amount of time to wait after executing the
the `shutdown_command` for the virtual machine to actually shut down. `shutdown_command` for the virtual machine to actually shut down. If it
If it doesn't shut down in this time, it is an error. By default, the timeout doesn't shut down in this time, it is an error. By default, the timeout is
is "5m", or five minutes. "5m", or five minutes.
* `vm_name` (string) - This is the name of the virtual machine when it is - `vm_name` (string) - This is the name of the virtual machine when it is
imported as well as the name of the PVM directory when the virtual machine is imported as well as the name of the PVM directory when the virtual machine
exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
the name of the build. name of the build.
## Parallels Tools

After the virtual machine is up and the operating system is installed, Packer
uploads the Parallels Tools into the virtual machine. The path where they are
uploaded is controllable by `parallels_tools_guest_path`, and defaults to
"prl-tools-{{.Flavor}}.iso". Without an absolute path, it is uploaded to the
home directory of the SSH user. Parallels Tools ISOs can be found in:
"/Applications/Parallels Desktop.app/Contents/Resources/Tools/"
## Boot Command

The `boot_command` specifies the keys to type when the virtual machine is first
booted. This command is typed after `boot_wait`.

As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.

The boot command is "typed" character for character (using the Parallels
Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html))
simulating a human actually typing the keyboard. There are a set of special keys
available. If these are in your boot command, they will be replaced by the
proper key:

-   `<bs>` - Backspace
-   `<del>` - Delete
-   `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
-   `<esc>` - Simulates pressing the escape key.
-   `<tab>` - Simulates pressing the tab key.
-   `<f1>` - `<f12>` - Simulates pressing a function key.
-   `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
-   `<spacebar>` - Simulates pressing the spacebar.
-   `<insert>` - Simulates pressing the insert key.
-   `<home>` `<end>` - Simulates pressing the home and end keys.
-   `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
-   `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
    sending any additional keys. This is useful if you have to generally wait
    for the UI to update before typing more.

In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
## prlctl Commands

In order to perform extra customization of the virtual machine, a template can
define extra calls to `prlctl` to perform.

[prlctl](http://download.parallels.com/desktop/v9/ga/docs/en_US/Parallels%20Command%20Line%20Reference%20Guide.pdf)
is the command-line interface to Parallels Desktop. It can be used to configure
the virtual machine, such as set RAM, CPUs, etc.

Extra `prlctl` commands are defined in the template in the `prlctl` section. An
example is shown below that sets the memory and number of CPUs within the
virtual machine:

``` {.javascript}
{
  "prlctl": [
    ["set", "{{.Name}}", "--memsize", "1024"],
@ -218,7 +227,7 @@ executed in the order defined. So in the above example, the memory will be set
followed by the CPUs.

Each command itself is an array of strings, where each string is an argument to
`prlctl`. Each argument is treated as a [configuration
template](/docs/templates/configuration-templates.html). The only available
variable is `Name` which is replaced with the unique name of the VM, which is
required for many `prlctl` calls.
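The same format applies to `prlctl_post`, which runs after the virtual machine
is shut down and before it is exported. As a sketch, the following dials the
memory back down before export:

``` {.javascript}
"prlctl_post": [
  ["set", "{{.Name}}", "--memsize", "512"]
]
```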


@ -1,34 +1,37 @@
---
description: |
    The Parallels Packer builder is able to create Parallels Desktop for Mac virtual
    machines and export them in the PVM format.
layout: docs
page_title: Parallels Builder
...

# Parallels Builder

The Parallels Packer builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format.

Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:

-   [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
    file, creates a brand new Parallels VM, installs an OS, provisions software
    within the OS, then exports that machine to create an image. This is best
    for people who want to start from scratch.

-   [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
    existing PVM file, runs provisioners on top of that VM, and exports that
    machine to create an image. This is best if you have an existing Parallels
    VM export you want to use as the source. As an additional benefit, you can
    feed the artifact of this builder back into itself to iterate on a machine.

## Requirements

In addition to [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) this requires the [Parallels
Virtualization SDK](http://www.parallels.com/downloads/desktop/).

The SDK can be installed by downloading and following the instructions in the
dmg.


@ -1,30 +1,31 @@
---
description: |
    The Qemu Packer builder is able to create KVM and Xen virtual machine images.
    Support for Xen is experimental at this time.
layout: docs
page_title: QEMU Builder
...

# QEMU Builder

Type: `qemu`

The Qemu Packer builder is able to create [KVM](http://www.linux-kvm.org) and
[Xen](http://www.xenproject.org) virtual machine images. Support for Xen is
experimental at this time.

The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, rebooting the machine with the boot media
as the virtual hard drive, provisioning software within the OS, then shutting it
down. The result of the Qemu builder is a directory containing the image file
necessary to run the virtual machine on KVM or Xen.
## Basic Example

Here is a basic example. This example is functional so long as you fix up paths
to files, URLs for ISOs, and checksums.

``` {.javascript}
{
  "builders":
  [
@ -62,153 +63,153 @@ paths to files, URLS for ISOs and checksums.
}
```
A working CentOS 6.x kickstart file can be found [at this
URL](https://gist.github.com/mitchellh/7328271/#file-centos6-ks-cfg), adapted
from an unknown source. Place this file in the http directory with the proper
name. For the example above, it should go into "httpdir" with a name of
"centos6-ks.cfg".
## Configuration Reference

There are many configuration options available for the Qemu builder. They are
organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

-   `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
    files are so large, this is required and Packer will verify it prior to
    booting a virtual machine with the ISO attached. The type of the checksum is
    specified with `iso_checksum_type`, documented below.

-   `iso_checksum_type` (string) - The type of the checksum specified in
    `iso_checksum`. Valid values are "md5", "sha1", "sha256", or
    "sha512" currently.

-   `iso_url` (string) - A URL to the ISO containing the installation image.
    This URL can be either an HTTP URL or a file URL (or path to a file). If
    this is an HTTP URL, Packer will download it and cache it between runs.

-   `ssh_username` (string) - The username to use to SSH into the machine once
    the OS is installed.
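Put together, the required options alone might look like this (a sketch; the
URL and checksum values are placeholders):

``` {.javascript}
"iso_url": "http://example.com/images/centos-6.5-x86_64-minimal.iso",
"iso_checksum_type": "md5",
"iso_checksum": "00000000000000000000000000000000",
"ssh_username": "packer"
```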
### Optional:
-   `accelerator` (string) - The accelerator type to use when running the VM.
    This may have a value of either "none", "kvm", "tcg", or "xen" and you must
    have that support on the machine on which you run the builder. By default
    "kvm" is used.
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `disk_cache` (string) - The cache mode to use for disk. Allowed values - `disk_cache` (string) - The cache mode to use for disk. Allowed values
include any of "writethrough", "writeback", "none", "unsafe" or include any of "writethrough", "writeback", "none", "unsafe"
"directsync". By default, this is set to "writeback". or "directsync". By default, this is set to "writeback".
* `disk_discard` (string) - The discard mode to use for disk. Allowed values - `disk_discard` (string) - The discard mode to use for disk. Allowed values
include any of "unmap" or "ignore". By default, this is set to "ignore". include any of "unmap" or "ignore". By default, this is set to "ignore".
* `disk_image` (boolean) - Packer defaults to building from an ISO file, - `disk_image` (boolean) - Packer defaults to building from an ISO file, this
this parameter controls whether the ISO URL supplied is actually a bootable parameter controls whether the ISO URL supplied is actually a bootable
QEMU image. When this value is set to true, the machine will clone the QEMU image. When this value is set to true, the machine will clone the
source, resize it according to `disk_size` and boot the image. source, resize it according to `disk_size` and boot the image.
* `disk_interface` (string) - The interface to use for the disk. Allowed - `disk_interface` (string) - The interface to use for the disk. Allowed
values include any of "ide," "scsi" or "virtio." Note also that any boot values include any of "ide," "scsi" or "virtio." Note also that any boot
commands or kickstart type scripts must have proper adjustments for commands or kickstart type scripts must have proper adjustments for
resulting device names. The Qemu builder uses "virtio" by default. resulting device names. The Qemu builder uses "virtio" by default.
* `disk_size` (integer) - The size, in megabytes, of the hard disk to create - `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB). for the VM. By default, this is 40000 (about 40 GB).
* `floppy_files` (array of strings) - A list of files to place onto a floppy - `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful disk that is attached when the VM is booted. This is most useful for
for unattended Windows installs, which look for an `Autounattend.xml` file unattended Windows installs, which look for an `Autounattend.xml` file on
on removable media. By default, no floppy will be attached. All files removable media. By default, no floppy will be attached. All files listed in
listed in this setting get placed into the root directory of the floppy this setting get placed into the root directory of the floppy and the floppy
and the floppy is attached as the first floppy device. Currently, no is attached as the first floppy device. Currently, no support exists for
support exists for creating sub-directories on the floppy. Wildcard creating sub-directories on the floppy. Wildcard characters (\*, ?,
characters (*, ?, and []) are allowed. Directory names are also allowed, and \[\]) are allowed. Directory names are also allowed, which will add all
which will add all the files found in the directory to the floppy. the files found in the directory to the floppy.
* `format` (string) - Either "qcow2" or "raw", this specifies the output - `format` (string) - Either "qcow2" or "raw", this specifies the output
format of the virtual machine image. This defaults to "qcow2". format of the virtual machine image. This defaults to "qcow2".
* `headless` (boolean) - Packer defaults to building QEMU virtual machines by - `headless` (boolean) - Packer defaults to building QEMU virtual machines by
launching a GUI that shows the console of the machine being built. launching a GUI that shows the console of the machine being built. When this
When this value is set to true, the machine will start without a console. value is set to true, the machine will start without a console.
* `http_directory` (string) - Path to a directory to serve using an HTTP - `http_directory` (string) - Path to a directory to serve using an
server. The files in this directory will be available over HTTP that will HTTP server. The files in this directory will be available over HTTP that
be requestable from the virtual machine. This is useful for hosting will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP kickstart files and so on. By default this is "", which means no HTTP server
server will be started. The address and port of the HTTP server will be will be started. The address and port of the HTTP server will be available
available as variables in `boot_command`. This is covered in more detail as variables in `boot_command`. This is covered in more detail below.
below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and - `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`. maximum port to use for the HTTP server started to serve the
Because Packer often runs in parallel, Packer will choose a randomly available `http_directory`. Because Packer often runs in parallel, Packer will choose
port in this range to run the HTTP server. If you want to force the HTTP a randomly available port in this range to run the HTTP server. If you want
server to be on one port, make this minimum and maximum port the same. to force the HTTP server to be on one port, make this minimum and maximum
By default the values are 8000 and 9000, respectively. port the same. By default the values are 8000 and 9000, respectively.
* `iso_urls` (array of strings) - Multiple URLs for the ISO to download. - `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to download Packer will try these in order. If anything goes wrong attempting to
or while downloading a single URL, it will move on to the next. All URLs download or while downloading a single URL, it will move on to the next. All
must point to the same file (same checksum). By default this is empty URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
* `machine_type` (string) - The type of machine emulation to use. Run - `machine_type` (string) - The type of machine emulation to use. Run your
your qemu binary with the flags `-machine help` to list available types qemu binary with the flags `-machine help` to list available types for
for your system. This defaults to "pc". your system. This defaults to "pc".
* `net_device` (string) - The driver to use for the network interface. Allowed - `net_device` (string) - The driver to use for the network interface. Allowed
values "ne2k_pci," "i82551," "i82557b," "i82559er," "rtl8139," "e1000," values "ne2k\_pci," "i82551," "i82557b," "i82559er," "rtl8139," "e1000,"
"pcnet" or "virtio." The Qemu builder uses "virtio" by default. "pcnet" or "virtio." The Qemu builder uses "virtio" by default.
* `output_directory` (string) - This is the path to the directory where the - `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute. resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer` If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder. is executed. This directory must not exist or be empty prior to running
By default this is "output-BUILDNAME" where "BUILDNAME" is the name the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
of the build. name of the build.
* `qemu_binary` (string) - The name of the Qemu binary to look for. This - `qemu_binary` (string) - The name of the Qemu binary to look for. This
defaults to "qemu-system-x86_64", but may need to be changed for some defaults to "qemu-system-x86\_64", but may need to be changed for
platforms. For example "qemu-kvm", or "qemu-system-i386" may be a better some platforms. For example "qemu-kvm", or "qemu-system-i386" may be a
choice for some systems. better choice for some systems.
* `qemuargs` (array of array of strings) - Allows complete control over - `qemuargs` (array of array of strings) - Allows complete control over the
the qemu command line (though not, at this time, qemu-img). Each array qemu command line (though not, at this time, qemu-img). Each array of
of strings makes up a command line switch that overrides matching default strings makes up a command line switch that overrides matching default
switch/value pairs. Any value specified as an empty string is ignored. switch/value pairs. Any value specified as an empty string is ignored. All
All values after the switch are concatenated with no separator. values after the switch are concatenated with no separator.
~> **Warning:** The qemu command line allows extreme flexibility, so beware
of conflicting arguments causing failures of your run. For instance, using
--no-acpi could break the ability to send power signal type commands (e.g.,
shutdown -P now) to the virtual machine, thus preventing proper shutdown. To see
the defaults, look in the packer.log file and search for the qemu-system-x86
command. The arguments are all printed for review.
The following shows a sample usage:

``` {.javascript}
// ...
  "qemuargs": [
    [ "-m", "1024M" ],
@ -224,91 +225,91 @@ qemu-system-x86 command. The arguments are all printed for review.
// ...
```

would produce the following (not including other defaults supplied by the
builder and not otherwise conflicting with the qemuargs):

<pre class="prettyprint">
qemu-system-x86 -m 1024m --no-acpi -netdev user,id=mynet0,hostfwd=hostip:hostport-guestip:guestport -device virtio-net,netdev=mynet0
</pre>
-   `shutdown_command` (string) - The command to use to gracefully shut down the
    machine once all the provisioning is done. By default this is an empty
    string, which tells Packer to just forcefully shut down the machine.

-   `shutdown_timeout` (string) - The amount of time to wait after executing the
    `shutdown_command` for the virtual machine to actually shut down. If it
    doesn't shut down in this time, it is an error. By default, the timeout is
    "5m", or five minutes.

-   `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
    maximum port to use for the SSH port on the host machine which is forwarded
    to the SSH port on the guest machine. Because Packer often runs in parallel,
    Packer will choose a randomly available port in this range to use as the
    host port.

-   `vm_name` (string) - This is the name of the image (QCOW2 or IMG) file for
    the new virtual machine, without the file extension. By default this is
    "packer-BUILDNAME", where "BUILDNAME" is the name of the build.

-   `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
    to use for the VNC port on the host machine which is forwarded to the VNC
    port on the guest machine. Because Packer often runs in parallel, Packer
    will choose a randomly available port in this range to use as the host port.
## Boot Command

The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.

As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.

The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard. There are a set of
special keys available. If these are in your boot command, they will be replaced
by the proper key:

-   `<bs>` - Backspace
-   `<del>` - Delete
-   `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
-   `<esc>` - Simulates pressing the escape key.
-   `<tab>` - Simulates pressing the tab key.
-   `<f1>` - `<f12>` - Simulates pressing a function key.
-   `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
-   `<spacebar>` - Simulates pressing the spacebar.
-   `<insert>` - Simulates pressing the insert key.
-   `<home>` `<end>` - Simulates pressing the home and end keys.
-   `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
-   `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
    sending any additional keys. This is useful if you have to generally wait
    for the UI to update before typing more.

In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:

-   `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
    that is started serving the directory specified by the `http_directory`
    configuration parameter. If `http_directory` isn't specified, these will be
    blank!
Example boot command. This is actually a working boot command used to start a
CentOS 6.4 installer:

``` {.javascript}
"boot_command":
[
  "<tab><wait>",


@ -1,30 +1,31 @@
---
description: |
    The VirtualBox Packer builder is able to create VirtualBox virtual machines and
    export them in the OVF format, starting from an ISO image.
layout: docs
page_title: 'VirtualBox Builder (from an ISO)'
...

# VirtualBox Builder (from an ISO)

Type: `virtualbox-iso`

The VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org/) virtual machines and export them in
the OVF format, starting from an ISO image.

The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, provisioning software within the OS, then
shutting it down. The result of the VirtualBox builder is a directory containing
all the files necessary to run the virtual machine portably.
## Basic Example

Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:

``` {.javascript}
{
  "type": "virtualbox-iso",
  "guest_os_type": "Ubuntu_64",
@ -37,250 +38,254 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio
}
```

It is important to add a `shutdown_command`. By default Packer halts the virtual
machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
## Configuration Reference

There are many configuration options available for the VirtualBox builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

-   `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
    files are so large, this is required and Packer will verify it prior to
    booting a virtual machine with the ISO attached. The type of the checksum is
    specified with `iso_checksum_type`, documented below.

-   `iso_checksum_type` (string) - The type of the checksum specified in
    `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
    "sha512" currently. While "none" will skip checksumming, this is not
    recommended since ISO files are generally large and corruption does happen
    from time to time.

-   `iso_url` (string) - A URL to the ISO containing the installation image.
    This URL can be either an HTTP URL or a file URL (or path to a file). If
    this is an HTTP URL, Packer will download it and cache it between runs.

-   `ssh_username` (string) - The username to use to SSH into the machine once
    the OS is installed.

-   `ssh_password` (string) - The password to use to SSH into the machine once
    the OS is installed.
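Pulling those together, a minimal required block might look like the following
sketch (the URL and credentials are placeholders, and the checksum is skipped
here only for brevity, which the note above advises against):

``` {.javascript}
"iso_url": "http://example.com/ubuntu-14.04-server-amd64.iso",
"iso_checksum_type": "none",
"ssh_username": "packer",
"ssh_password": "packer"
```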
### Optional:
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `disk_size` (integer) - The size, in megabytes, of the hard disk to create - `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB). for the VM. By default, this is 40000 (about 40 GB).
* `export_opts` (array of strings) - Additional options to pass to the `VBoxManage export`. - `export_opts` (array of strings) - Additional options to pass to the
This can be useful for passing product information to include in the resulting `VBoxManage export`. This can be useful for passing product information to
appliance file. include in the resulting appliance file.
* `floppy_files` (array of strings) - A list of files to place onto a floppy - `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful disk that is attached when the VM is booted. This is most useful for
for unattended Windows installs, which look for an `Autounattend.xml` file unattended Windows installs, which look for an `Autounattend.xml` file on
on removable media. By default, no floppy will be attached. All files removable media. By default, no floppy will be attached. All files listed in
listed in this setting get placed into the root directory of the floppy this setting get placed into the root directory of the floppy and the floppy
and the floppy is attached as the first floppy device. Currently, no is attached as the first floppy device. Currently, no support exists for
support exists for creating sub-directories on the floppy. Wildcard creating sub-directories on the floppy. Wildcard characters (\*, ?,
characters (*, ?, and []) are allowed. Directory names are also allowed, and \[\]) are allowed. Directory names are also allowed, which will add all
which will add all the files found in the directory to the floppy. the files found in the directory to the floppy.
* `format` (string) - Either "ovf" or "ova", this specifies the output - `format` (string) - Either "ovf" or "ova", this specifies the output format
format of the exported virtual machine. This defaults to "ovf". of the exported virtual machine. This defaults to "ovf".
* `guest_additions_mode` (string) - The method by which guest additions - `guest_additions_mode` (string) - The method by which guest additions are
are made available to the guest for installation. Valid options are made available to the guest for installation. Valid options are "upload",
"upload", "attach", or "disable". If the mode is "attach" the guest "attach", or "disable". If the mode is "attach" the guest additions ISO will
additions ISO will be attached as a CD device to the virtual machine. be attached as a CD device to the virtual machine. If the mode is "upload"
If the mode is "upload" the guest additions ISO will be uploaded to the guest additions ISO will be uploaded to the path specified by
the path specified by `guest_additions_path`. The default value is `guest_additions_path`. The default value is "upload". If "disable" is used,
"upload". If "disable" is used, guest additions won't be downloaded, guest additions won't be downloaded, either.
either.
* `guest_additions_path` (string) - The path on the guest virtual machine - `guest_additions_path` (string) - The path on the guest virtual machine
where the VirtualBox guest additions ISO will be uploaded. By default this where the VirtualBox guest additions ISO will be uploaded. By default this
is "VBoxGuestAdditions.iso" which should upload into the login directory is "VBoxGuestAdditions.iso" which should upload into the login directory of
of the user. This is a [configuration template](/docs/templates/configuration-templates.html) the user. This is a [configuration
where the `Version` variable is replaced with the VirtualBox version. template](/docs/templates/configuration-templates.html) where the `Version`
variable is replaced with the VirtualBox version.
* `guest_additions_sha256` (string) - The SHA256 checksum of the guest - `guest_additions_sha256` (string) - The SHA256 checksum of the guest
additions ISO that will be uploaded to the guest VM. By default the additions ISO that will be uploaded to the guest VM. By default the
checksums will be downloaded from the VirtualBox website, so this only checksums will be downloaded from the VirtualBox website, so this only needs
needs to be set if you want to be explicit about the checksum. to be set if you want to be explicit about the checksum.
* `guest_additions_url` (string) - The URL to the guest additions ISO - `guest_additions_url` (string) - The URL to the guest additions ISO
to upload. This can also be a file URL if the ISO is at a local path. to upload. This can also be a file URL if the ISO is at a local path. By
By default, the VirtualBox builder will attempt to find the guest additions default, the VirtualBox builder will attempt to find the guest additions ISO
ISO on the local file system. If it is not available locally, the builder on the local file system. If it is not available locally, the builder will
will download the proper guest additions ISO from the internet. download the proper guest additions ISO from the internet.
* `guest_os_type` (string) - The guest OS type being installed. By default - `guest_os_type` (string) - The guest OS type being installed. By default
this is "other", but you can get _dramatic_ performance improvements by this is "other", but you can get *dramatic* performance improvements by
setting this to the proper value. To view all available values for this setting this to the proper value. To view all available values for this run
run `VBoxManage list ostypes`. Setting the correct value hints to VirtualBox `VBoxManage list ostypes`. Setting the correct value hints to VirtualBox how
how to optimize the virtual hardware to work best with that operating to optimize the virtual hardware to work best with that operating system.
system.
* `hard_drive_interface` (string) - The type of controller that the primary - `hard_drive_interface` (string) - The type of controller that the primary
hard drive is attached to, defaults to "ide". When set to "sata", the hard drive is attached to, defaults to "ide". When set to "sata", the drive
drive is attached to an AHCI SATA controller. When set to "scsi", the drive is attached to an AHCI SATA controller. When set to "scsi", the drive is
is attached to an LsiLogic SCSI controller. attached to an LsiLogic SCSI controller.
* `headless` (boolean) - Packer defaults to building VirtualBox - `headless` (boolean) - Packer defaults to building VirtualBox virtual
virtual machines by launching a GUI that shows the console of the machines by launching a GUI that shows the console of the machine
machine being built. When this value is set to true, the machine will being built. When this value is set to true, the machine will start without
start without a console. a console.
* `http_directory` (string) - Path to a directory to serve using an HTTP - `http_directory` (string) - Path to a directory to serve using an
server. The files in this directory will be available over HTTP that will HTTP server. The files in this directory will be available over HTTP that
be requestable from the virtual machine. This is useful for hosting will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP kickstart files and so on. By default this is "", which means no HTTP server
server will be started. The address and port of the HTTP server will be will be started. The address and port of the HTTP server will be available
available as variables in `boot_command`. This is covered in more detail as variables in `boot_command`. This is covered in more detail below.
below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and - `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`. maximum port to use for the HTTP server started to serve the
Because Packer often runs in parallel, Packer will choose a randomly available `http_directory`. Because Packer often runs in parallel, Packer will choose
port in this range to run the HTTP server. If you want to force the HTTP a randomly available port in this range to run the HTTP server. If you want
server to be on one port, make this minimum and maximum port the same. to force the HTTP server to be on one port, make this minimum and maximum
By default the values are 8000 and 9000, respectively. port the same. By default the values are 8000 and 9000, respectively.
* `iso_interface` (string) - The type of controller that the ISO is attached - `iso_interface` (string) - The type of controller that the ISO is attached
to, defaults to "ide". When set to "sata", the drive is attached to an to, defaults to "ide". When set to "sata", the drive is attached to an AHCI
AHCI SATA controller. SATA controller.
* `iso_urls` (array of strings) - Multiple URLs for the ISO to download. - `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to download Packer will try these in order. If anything goes wrong attempting to
or while downloading a single URL, it will move on to the next. All URLs download or while downloading a single URL, it will move on to the next. All
must point to the same file (same checksum). By default this is empty URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
* `output_directory` (string) - This is the path to the directory where the - `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute. resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer` If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder. is executed. This directory must not exist or be empty prior to running
By default this is "output-BUILDNAME" where "BUILDNAME" is the name the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
of the build. name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine; this may safely be omitted if a shutdown command takes place inside one of your scripts. If one or more scripts require a reboot, it is suggested to leave this blank (since reboots may fail) and to specify the final shutdown command in your last script. A combined example of several of these optional settings appears after this list.
* `shutdown_timeout` (string) - The amount of time to wait after executing - `shutdown_timeout` (string) - The amount of time to wait after executing the
the `shutdown_command` for the virtual machine to actually shut down. `shutdown_command` for the virtual machine to actually shut down. If it
If it doesn't shut down in this time, it is an error. By default, the timeout doesn't shut down in this time, it is an error. By default, the timeout is
is "5m", or five minutes. "5m", or five minutes.
* `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and - `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
maximum port to use for the SSH port on the host machine which is forwarded maximum port to use for the SSH port on the host machine which is forwarded
to the SSH port on the guest machine. Because Packer often runs in parallel, to the SSH port on the guest machine. Because Packer often runs in parallel,
Packer will choose a randomly available port in this range to use as the Packer will choose a randomly available port in this range to use as the
host port. host port.
- `ssh_skip_nat_mapping` (boolean) - Defaults to false. When enabled, Packer does not set up forwarded port mapping for SSH requests and uses `ssh_port` on the host to communicate to the virtual machine.
- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to execute in order to further customize the virtual machine being created. The value of this is an array of commands to execute. The commands are executed in the order defined in the template. Each command is itself defined as an array of strings, where each string represents a single argument on the command line to `VBoxManage` (but excluding `VBoxManage` itself). Each argument is treated as a [configuration template](/docs/templates/configuration-templates.html), where the `Name` variable is replaced with the VM name. More details on how to use `VBoxManage` are below.
* `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`, - `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
except that it is run after the virtual machine is shutdown, and before the except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported. virtual machine is exported.
- `virtualbox_version_file` (string) - The path within the virtual machine to upload a file that contains the VirtualBox version that was used to create the machine. This information can be useful for provisioning. By default this is ".vbox_version", which will generally be uploaded into the home directory.
* `vm_name` (string) - This is the name of the OVF file for the new virtual - `vm_name` (string) - This is the name of the OVF file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME", machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build. where "BUILDNAME" is the name of the build.
## Boot Command ## Boot Command
The `boot_command` configuration is very important: it specifies the keys The `boot_command` configuration is very important: it specifies the keys to
to type when the virtual machine is first booted in order to start the type when the virtual machine is first booted in order to start the OS
OS installer. This command is typed after `boot_wait`, which gives the installer. This command is typed after `boot_wait`, which gives the virtual
virtual machine some time to actually load the ISO. machine some time to actually load the ISO.
As documented above, the `boot_command` is an array of strings. The As documented above, the `boot_command` is an array of strings. The strings are
strings are all typed in sequence. It is an array only to improve readability all typed in sequence. It is an array only to improve readability within the
within the template. template.
The boot command is "typed" character for character over a VNC connection The boot command is "typed" character for character over a VNC connection to the
to the machine, simulating a human actually typing the keyboard. There are machine, simulating a human actually typing the keyboard. There are a set of
a set of special keys available. If these are in your boot command, they special keys available. If these are in your boot command, they will be replaced
will be replaced by the proper key: by the proper key:
* `<bs>` - Backspace - `<bs>` - Backspace
* `<del>` - Delete - `<del>` - Delete
* `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. - `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
* `<esc>` - Simulates pressing the escape key. - `<esc>` - Simulates pressing the escape key.
* `<tab>` - Simulates pressing the tab key. - `<tab>` - Simulates pressing the tab key.
* `<f1>` - `<f12>` - Simulates pressing a function key. - `<f1>` - `<f12>` - Simulates pressing a function key.
* `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. - `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
* `<spacebar>` - Simulates pressing the spacebar. - `<spacebar>` - Simulates pressing the spacebar.
* `<insert>` - Simulates pressing the insert key. - `<insert>` - Simulates pressing the insert key.
* `<home>` `<end>` - Simulates pressing the home and end keys. - `<home>` `<end>` - Simulates pressing the home and end keys.
* `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. - `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5, or 10 second pause before sending any additional keys. This is useful if you need to wait for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). [configuration template](/docs/templates/configuration-templates.html). The
The available variables are: available variables are:
* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
that is started serving the directory specified by the `http_directory` that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will configuration parameter. If `http_directory` isn't specified, these will be
be blank! blank!
Example boot command. This is actually a working boot command used to start Example boot command. This is actually a working boot command used to start an
an Ubuntu 12.04 installer: Ubuntu 12.04 installer:
```text ``` {.text}
[ [
"<esc><esc><enter><wait>", "<esc><esc><enter><wait>",
"/install/vmlinuz noapic ", "/install/vmlinuz noapic ",
@@ -296,31 +301,32 @@ an Ubuntu 12.04 installer:
## Guest Additions ## Guest Additions
Packer will automatically download the proper guest additions for the Packer will automatically download the proper guest additions for the version of
version of VirtualBox that is running and upload those guest additions into VirtualBox that is running and upload those guest additions into the virtual
the virtual machine so that provisioners can easily install them. machine so that provisioners can easily install them.
Packer downloads the guest additions from the official VirtualBox website, Packer downloads the guest additions from the official VirtualBox website, and
and verifies the file with the official checksums released by VirtualBox. verifies the file with the official checksums released by VirtualBox.
After the virtual machine is up and the operating system is installed, After the virtual machine is up and the operating system is installed, Packer
Packer uploads the guest additions into the virtual machine. The path where uploads the guest additions into the virtual machine. The path where they are
they are uploaded is controllable by `guest_additions_path`, and defaults uploaded is controllable by `guest_additions_path`, and defaults to
to "VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the "VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the home
home directory of the SSH user. directory of the SSH user.
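For example, a hypothetical template fragment that keeps the default "upload" mode but changes where the ISO lands (the path shown is only an illustration):

``` {.javascript}
{
  "guest_additions_mode": "upload",
  "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso"
}
```

Because `guest_additions_path` is a configuration template, `{{.Version}}` is replaced with the running VirtualBox version at build time.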
## VBoxManage Commands ## VBoxManage Commands
In order to perform extra customization of the virtual machine, a template In order to perform extra customization of the virtual machine, a template can
can define extra calls to `VBoxManage` to perform. [VBoxManage](http://www.virtualbox.org/manual/ch08.html) define extra calls to `VBoxManage` to perform.
is the command-line interface to VirtualBox where you can completely control [VBoxManage](http://www.virtualbox.org/manual/ch08.html) is the command-line
VirtualBox. It can be used to do things such as set RAM, CPUs, etc. interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.
Extra VBoxManage commands are defined in the template in the `vboxmanage` section. Extra VBoxManage commands are defined in the template in the `vboxmanage`
An example is shown below that sets the memory and number of CPUs within the section. An example is shown below that sets the memory and number of CPUs
virtual machine: within the virtual machine:
```javascript ``` {.javascript}
{ {
"vboxmanage": [ "vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"], ["modifyvm", "{{.Name}}", "--memory", "1024"],
@@ -329,12 +335,12 @@ virtual machine:
} }
``` ```
The value of `vboxmanage` is an array of commands to execute. These commands The value of `vboxmanage` is an array of commands to execute. These commands are
are executed in the order defined. So in the above example, the memory will be executed in the order defined. So in the above example, the memory will be set
set followed by the CPUs. followed by the CPUs.
Each command itself is an array of strings, where each string is an argument Each command itself is an array of strings, where each string is an argument to
to `VBoxManage`. Each argument is treated as a `VBoxManage`. Each argument is treated as a [configuration
[configuration template](/docs/templates/configuration-templates.html). template](/docs/templates/configuration-templates.html). The only available
The only available variable is `Name` which is replaced with the unique variable is `Name` which is replaced with the unique name of the VM, which is
name of the VM, which is required for many VBoxManage calls. required for many VBoxManage calls.
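The same structure applies to `vboxmanage_post`, which runs only after the machine has been shut down and before export. A hypothetical sketch that trims the VM down at that stage (the flag values are just examples of valid `modifyvm` arguments):

``` {.javascript}
{
  "vboxmanage_post": [
    ["modifyvm", "{{.Name}}", "--memory", "512"],
    ["modifyvm", "{{.Name}}", "--cpus", "1"]
  ]
}
```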
View File
@@ -1,39 +1,41 @@
---
description: |
    This VirtualBox Packer builder is able to create VirtualBox virtual machines and
    export them in the OVF format, starting from an existing OVF/OVA (exported
    virtual machine image).
layout: docs
page_title: 'VirtualBox Builder (from an OVF/OVA)'
...
# VirtualBox Builder (from an OVF/OVA) # VirtualBox Builder (from an OVF/OVA)
Type: `virtualbox-ovf` Type: `virtualbox-ovf`
This VirtualBox Packer builder is able to create [VirtualBox](https://www.virtualbox.org/) This VirtualBox Packer builder is able to create
virtual machines and export them in the OVF format, starting from an [VirtualBox](https://www.virtualbox.org/) virtual machines and export them in
existing OVF/OVA (exported virtual machine image). the OVF format, starting from an existing OVF/OVA (exported virtual machine
image).
When exporting from VirtualBox make sure to choose OVF Version 2, since Version 1 is not compatible and will generate errors like this:

```
==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR
==> virtualbox-ovf: VBoxManage: error: Appliance read failed
==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21
==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance
==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp
```
The builder builds a virtual machine by importing an existing OVF or OVA The builder builds a virtual machine by importing an existing OVF or OVA file.
file. It then boots this image, runs provisioners on this new VM, and It then boots this image, runs provisioners on this new VM, and exports that VM
exports that VM to create the image. The imported machine is deleted prior to create the image. The imported machine is deleted prior to finishing the
to finishing the build. build.
## Basic Example ## Basic Example
Here is a basic example. This example is functional if you have an OVF matching Here is a basic example. This example is functional if you have an OVF matching
the settings here. the settings here.
```javascript ``` {.javascript}
{ {
"type": "virtualbox-ovf", "type": "virtualbox-ovf",
"source_path": "source.ovf", "source_path": "source.ovf",
@@ -43,193 +45,196 @@ the settings here.
} }
``` ```
It is important to add a `shutdown_command`. By default Packer halts the It is important to add a `shutdown_command`. By default Packer halts the virtual
virtual machine and the file system may not be sync'd. Thus, changes made in a machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved. provisioner might not be saved.
## Configuration Reference ## Configuration Reference
There are many configuration options available for the VirtualBox builder. There are many configuration options available for the VirtualBox builder. They
They are organized below into two categories: required and optional. Within are organized below into two categories: required and optional. Within each
each category, the available options are alphabetized and described. category, the available options are alphabetized and described.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `source_path` (string) - The path to an OVF or OVA file that acts as - `source_path` (string) - The path to an OVF or OVA file that acts as the
the source of this build. source of this build.
* `ssh_username` (string) - The username to use to SSH into the machine - `ssh_username` (string) - The username to use to SSH into the machine once
once the OS is installed. the OS is installed.
### Optional: ### Optional:
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `export_opts` (array of strings) - Additional options to pass to the `VBoxManage export`. - `export_opts` (array of strings) - Additional options to pass to the
This can be useful for passing product information to include in the resulting `VBoxManage export`. This can be useful for passing product information to
appliance file. include in the resulting appliance file.
* `floppy_files` (array of strings) - A list of files to place onto a floppy - `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful disk that is attached when the VM is booted. This is most useful for
for unattended Windows installs, which look for an `Autounattend.xml` file unattended Windows installs, which look for an `Autounattend.xml` file on
on removable media. By default, no floppy will be attached. All files removable media. By default, no floppy will be attached. All files listed in
listed in this setting get placed into the root directory of the floppy this setting get placed into the root directory of the floppy and the floppy
and the floppy is attached as the first floppy device. Currently, no is attached as the first floppy device. Currently, no support exists for
support exists for creating sub-directories on the floppy. Wildcard creating sub-directories on the floppy. Wildcard characters (\*, ?,
characters (*, ?, and []) are allowed. Directory names are also allowed, and \[\]) are allowed. Directory names are also allowed, which will add all
which will add all the files found in the directory to the floppy. the files found in the directory to the floppy.
* `format` (string) - Either "ovf" or "ova", this specifies the output - `format` (string) - Either "ovf" or "ova", this specifies the output format
format of the exported virtual machine. This defaults to "ovf". of the exported virtual machine. This defaults to "ovf".
* `guest_additions_mode` (string) - The method by which guest additions - `guest_additions_mode` (string) - The method by which guest additions are
are made available to the guest for installation. Valid options are made available to the guest for installation. Valid options are "upload",
"upload", "attach", or "disable". If the mode is "attach" the guest "attach", or "disable". If the mode is "attach" the guest additions ISO will
additions ISO will be attached as a CD device to the virtual machine. be attached as a CD device to the virtual machine. If the mode is "upload"
If the mode is "upload" the guest additions ISO will be uploaded to the guest additions ISO will be uploaded to the path specified by
the path specified by `guest_additions_path`. The default value is `guest_additions_path`. The default value is "upload". If "disable" is used,
"upload". If "disable" is used, guest additions won't be downloaded, guest additions won't be downloaded, either.
either.
* `guest_additions_path` (string) - The path on the guest virtual machine - `guest_additions_path` (string) - The path on the guest virtual machine
where the VirtualBox guest additions ISO will be uploaded. By default this where the VirtualBox guest additions ISO will be uploaded. By default this
is "VBoxGuestAdditions.iso" which should upload into the login directory is "VBoxGuestAdditions.iso" which should upload into the login directory of
of the user. This is a [configuration template](/docs/templates/configuration-templates.html) the user. This is a [configuration
where the `Version` variable is replaced with the VirtualBox version. template](/docs/templates/configuration-templates.html) where the `Version`
variable is replaced with the VirtualBox version.
* `guest_additions_sha256` (string) - The SHA256 checksum of the guest - `guest_additions_sha256` (string) - The SHA256 checksum of the guest
additions ISO that will be uploaded to the guest VM. By default the additions ISO that will be uploaded to the guest VM. By default the
checksums will be downloaded from the VirtualBox website, so this only checksums will be downloaded from the VirtualBox website, so this only needs
needs to be set if you want to be explicit about the checksum. to be set if you want to be explicit about the checksum.
* `guest_additions_url` (string) - The URL to the guest additions ISO - `guest_additions_url` (string) - The URL to the guest additions ISO
to upload. This can also be a file URL if the ISO is at a local path. to upload. This can also be a file URL if the ISO is at a local path. By
By default the VirtualBox builder will go and download the proper default the VirtualBox builder will go and download the proper guest
guest additions ISO from the internet. additions ISO from the internet.
* `headless` (boolean) - Packer defaults to building VirtualBox - `headless` (boolean) - Packer defaults to building VirtualBox virtual
virtual machines by launching a GUI that shows the console of the machines by launching a GUI that shows the console of the machine
machine being built. When this value is set to true, the machine will being built. When this value is set to true, the machine will start without
start without a console. a console.
* `http_directory` (string) - Path to a directory to serve using an HTTP - `http_directory` (string) - Path to a directory to serve using an
server. The files in this directory will be available over HTTP that will HTTP server. The files in this directory will be available over HTTP that
be requestable from the virtual machine. This is useful for hosting will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP kickstart files and so on. By default this is "", which means no HTTP server
server will be started. The address and port of the HTTP server will be will be started. The address and port of the HTTP server will be available
available as variables in `boot_command`. This is covered in more detail as variables in `boot_command`. This is covered in more detail below.
below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and - `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`. maximum port to use for the HTTP server started to serve the
Because Packer often runs in parallel, Packer will choose a randomly available `http_directory`. Because Packer often runs in parallel, Packer will choose
port in this range to run the HTTP server. If you want to force the HTTP a randomly available port in this range to run the HTTP server. If you want
server to be on one port, make this minimum and maximum port the same. to force the HTTP server to be on one port, make this minimum and maximum
By default the values are 8000 and 9000, respectively. port the same. By default the values are 8000 and 9000, respectively.
- `import_flags` (array of strings) - Additional flags to pass to `VBoxManage import`. This can be used to add additional command-line flags such as `--eula-accept` to accept a EULA in the OVF (see the example after this list).
- `import_opts` (string) - Additional options to pass to `VBoxManage import`. This can be useful for passing "keepallmacs" or "keepnatmacs" options for existing OVF images.
* `output_directory` (string) - This is the path to the directory where the - `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute. resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer` If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder. is executed. This directory must not exist or be empty prior to running
By default this is "output-BUILDNAME" where "BUILDNAME" is the name the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
of the build. name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine; this may safely be omitted if a shutdown command takes place inside one of your scripts. If one or more scripts require a reboot, it is suggested to leave this blank (since reboots may fail) and to specify the final shutdown command in your last script.
* `shutdown_timeout` (string) - The amount of time to wait after executing - `shutdown_timeout` (string) - The amount of time to wait after executing the
the `shutdown_command` for the virtual machine to actually shut down. `shutdown_command` for the virtual machine to actually shut down. If it
If it doesn't shut down in this time, it is an error. By default, the timeout doesn't shut down in this time, it is an error. By default, the timeout is
is "5m", or five minutes. "5m", or five minutes.
* `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and - `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
maximum port to use for the SSH port on the host machine which is forwarded maximum port to use for the SSH port on the host machine which is forwarded
to the SSH port on the guest machine. Because Packer often runs in parallel, to the SSH port on the guest machine. Because Packer often runs in parallel,
Packer will choose a randomly available port in this range to use as the Packer will choose a randomly available port in this range to use as the
host port. host port.
- `ssh_skip_nat_mapping` (boolean) - Defaults to false. When enabled, Packer does not set up forwarded port mapping for SSH requests and uses `ssh_port` on the host to communicate to the virtual machine.
- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to execute in order to further customize the virtual machine being created. The value of this is an array of commands to execute. The commands are executed in the order defined in the template. Each command is itself defined as an array of strings, where each string represents a single argument on the command line to `VBoxManage` (but excluding `VBoxManage` itself). Each argument is treated as a [configuration template](/docs/templates/configuration-templates.html), where the `Name` variable is replaced with the VM name. More details on how to use `VBoxManage` are below.
* `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`, - `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
except that it is run after the virtual machine is shutdown, and before the except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported. virtual machine is exported.
- `virtualbox_version_file` (string) - The path within the virtual machine to upload a file that contains the VirtualBox version that was used to create the machine. This information can be useful for provisioning. By default this is ".vbox_version", which will generally be uploaded into the home directory.
* `vm_name` (string) - This is the name of the virtual machine when it is - `vm_name` (string) - This is the name of the virtual machine when it is
imported as well as the name of the OVF file when the virtual machine is imported as well as the name of the OVF file when the virtual machine
exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
the name of the build. name of the build.
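Tying several of these options together, a hypothetical `virtualbox-ovf` template that accepts the EULA on import and keeps the source MAC addresses might look like this (the username and shutdown command are illustrative):

``` {.javascript}
{
  "type": "virtualbox-ovf",
  "source_path": "source.ovf",
  "ssh_username": "packer",
  "import_flags": ["--eula-accept"],
  "import_opts": "keepallmacs",
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```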
## Guest Additions ## Guest Additions
Packer will automatically download the proper guest additions for the Packer will automatically download the proper guest additions for the version of
version of VirtualBox that is running and upload those guest additions into VirtualBox that is running and upload those guest additions into the virtual
the virtual machine so that provisioners can easily install them. machine so that provisioners can easily install them.
Packer downloads the guest additions from the official VirtualBox website, Packer downloads the guest additions from the official VirtualBox website, and
and verifies the file with the official checksums released by VirtualBox. verifies the file with the official checksums released by VirtualBox.
After the virtual machine is up and the operating system is installed, After the virtual machine is up and the operating system is installed, Packer
Packer uploads the guest additions into the virtual machine. The path where uploads the guest additions into the virtual machine. The path where they are
they are uploaded is controllable by `guest_additions_path`, and defaults uploaded is controllable by `guest_additions_path`, and defaults to
to "VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the "VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the home
home directory of the SSH user. directory of the SSH user.
## VBoxManage Commands ## VBoxManage Commands
In order to perform extra customization of the virtual machine, a template In order to perform extra customization of the virtual machine, a template can
can define extra calls to `VBoxManage` to perform. [VBoxManage](http://www.virtualbox.org/manual/ch08.html) define extra calls to `VBoxManage` to perform.
is the command-line interface to VirtualBox where you can completely control [VBoxManage](http://www.virtualbox.org/manual/ch08.html) is the command-line
VirtualBox. It can be used to do things such as set RAM, CPUs, etc. interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.
Extra VBoxManage commands are defined in the template in the `vboxmanage` section. Extra VBoxManage commands are defined in the template in the `vboxmanage`
An example is shown below that sets the memory and number of CPUs within the section. An example is shown below that sets the memory and number of CPUs
virtual machine: within the virtual machine:
```javascript ``` {.javascript}
{ {
"vboxmanage": [ "vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"], ["modifyvm", "{{.Name}}", "--memory", "1024"],
@@ -238,12 +243,12 @@ virtual machine:
} }
``` ```
The value of `vboxmanage` is an array of commands to execute. These commands The value of `vboxmanage` is an array of commands to execute. These commands are
are executed in the order defined. So in the above example, the memory will be executed in the order defined. So in the above example, the memory will be set
set followed by the CPUs. followed by the CPUs.
Each command itself is an array of strings, where each string is an argument Each command itself is an array of strings, where each string is an argument to
to `VBoxManage`. Each argument is treated as a `VBoxManage`. Each argument is treated as a [configuration
[configuration template](/docs/templates/configuration-templates.html). template](/docs/templates/configuration-templates.html). The only available
The only available variable is `Name` which is replaced with the unique variable is `Name` which is replaced with the unique name of the VM, which is
name of the VM, which is required for many VBoxManage calls. required for many VBoxManage calls.
View File
@@ -1,27 +1,29 @@
---
description: |
    The VirtualBox Packer builder is able to create VirtualBox virtual machines and
    export them in the OVA or OVF format.
layout: docs
page_title: VirtualBox Builder
...
# VirtualBox Builder # VirtualBox Builder
The VirtualBox Packer builder is able to create [VirtualBox](http://www.virtualbox.org) The VirtualBox Packer builder is able to create
virtual machines and export them in the OVA or OVF format. [VirtualBox](http://www.virtualbox.org) virtual machines and export them in the
OVA or OVF format.
Packer actually comes with multiple builders able to create VirtualBox Packer actually comes with multiple builders able to create VirtualBox machines,
machines, depending on the strategy you want to use to build the image. depending on the strategy you want to use to build the image. Packer supports
Packer supports the following VirtualBox builders: the following VirtualBox builders:
* [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from - [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
an ISO file, creates a brand new VirtualBox VM, installs an OS, file, creates a brand new VirtualBox VM, installs an OS, provisions software
provisions software within the OS, then exports that machine to create within the OS, then exports that machine to create an image. This is best
an image. This is best for people who want to start from scratch. for people who want to start from scratch.
* [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder - [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports
imports an existing OVF/OVA file, runs provisioners on top of that VM, an existing OVF/OVA file, runs provisioners on top of that VM, and exports
and exports that machine to create an image. This is best if you have that machine to create an image. This is best if you have an existing
an existing VirtualBox VM export you want to use as the source. As an VirtualBox VM export you want to use as the source. As an additional
additional benefit, you can feed the artifact of this builder back into benefit, you can feed the artifact of this builder back into itself to
itself to iterate on a machine. iterate on a machine.
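Whichever strategy you pick, the builder is selected by the `type` field inside the template's `builders` list. A minimal, intentionally incomplete sketch of the OVF variant (values are illustrative):

``` {.javascript}
{
  "builders": [
    {
      "type": "virtualbox-ovf",
      "source_path": "source.ovf",
      "ssh_username": "packer"
    }
  ]
}
```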
View File
@@ -1,37 +1,40 @@
---
description: |
    This VMware Packer builder is able to create VMware virtual machines from an ISO
    file as a source. It currently supports building virtual machines on hosts
    running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and
    VMware Player on Linux. It can also build machines directly on VMware vSphere
    Hypervisor using SSH as opposed to the vSphere API.
layout: docs
page_title: VMware Builder from ISO
...
# VMware Builder (from ISO) # VMware Builder (from ISO)
Type: `vmware-iso` Type: `vmware-iso`
This VMware Packer builder is able to create VMware virtual machines from an This VMware Packer builder is able to create VMware virtual machines from an ISO
ISO file as a source. It currently file as a source. It currently supports building virtual machines on hosts
supports building virtual machines on hosts running running [VMware Fusion](http://www.vmware.com/products/fusion/overview.html) for
[VMware Fusion](http://www.vmware.com/products/fusion/overview.html) for OS X, OS X, [VMware
[VMware Workstation](http://www.vmware.com/products/workstation/overview.html) Workstation](http://www.vmware.com/products/workstation/overview.html) for Linux
for Linux and Windows, and and Windows, and [VMware Player](http://www.vmware.com/products/player/) on
[VMware Player](http://www.vmware.com/products/player/) on Linux. It can Linux. It can also build machines directly on [VMware vSphere
also build machines directly on Hypervisor](http://www.vmware.com/products/vsphere-hypervisor/) using SSH as
[VMware vSphere Hypervisor](http://www.vmware.com/products/vsphere-hypervisor/) opposed to the vSphere API.
using SSH as opposed to the vSphere API.
The builder builds a virtual machine by creating a new virtual machine The builder builds a virtual machine by creating a new virtual machine from
from scratch, booting it, installing an OS, provisioning software within scratch, booting it, installing an OS, provisioning software within the OS, then
the OS, then shutting it down. The result of the VMware builder is a directory shutting it down. The result of the VMware builder is a directory containing all
containing all the files necessary to run the virtual machine. the files necessary to run the virtual machine.
## Basic Example ## Basic Example
Here is a basic example. This example is not functional. It will start the Here is a basic example. This example is not functional. It will start the OS
OS installer but then fail because we don't provide the preseed file for installer but then fail because we don't provide the preseed file for Ubuntu to
Ubuntu to self-install. Still, the example serves to show the basic configuration: self-install. Still, the example serves to show the basic configuration:
```javascript ``` {.javascript}
{ {
"type": "vmware-iso", "type": "vmware-iso",
"iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso", "iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso",
@@ -44,261 +47,265 @@ Ubuntu to self-install. Still, the example serves to show the basic configuration
## Configuration Reference ## Configuration Reference
There are many configuration options available for the VMware builder. There are many configuration options available for the VMware builder. They are
They are organized below into two categories: required and optional. Within organized below into two categories: required and optional. Within each
each category, the available options are alphabetized and described. category, the available options are alphabetized and described.
In addition to the options listed here, a In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) [communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder. builder.
### Required: ### Required:
* `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO - `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior files are so large, this is required and Packer will verify it prior to
to booting a virtual machine with the ISO attached. The type of the booting a virtual machine with the ISO attached. The type of the checksum is
checksum is specified with `iso_checksum_type`, documented below. specified with `iso_checksum_type`, documented below.
* `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not "sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen recommended since ISO files are generally large and corruption does happen
from time to time. from time to time.
- `iso_url` (string) - A URL to the ISO containing the installation image. This URL can be either an HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer will download it and cache it between runs. A local-path example is shown after this list.
* `ssh_username` (string) - The username to use to SSH into the machine - `ssh_username` (string) - The username to use to SSH into the machine once
once the OS is installed. the OS is installed.
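As a sketch of just the required settings when the ISO already exists on disk (the path, checksum placeholder, and username are illustrative assumptions):

``` {.javascript}
{
  "type": "vmware-iso",
  "iso_url": "./iso/ubuntu-12.04.2-server-amd64.iso",
  "iso_checksum_type": "sha256",
  "iso_checksum": "<sha256 of the ISO>",
  "ssh_username": "packer"
}
```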
### Optional: ### Optional:
* `disk_additional_size` (array of integers) - The size(s) of any additional - `disk_additional_size` (array of integers) - The size(s) of any additional
hard disks for the VM in megabytes. If this is not specified then the VM will hard disks for the VM in megabytes. If this is not specified then the VM
only contain a primary hard disk. The builder uses expandable, not fixed-size will only contain a primary hard disk. The builder uses expandable, not
virtual hard disks, so the actual file representing the disk will not use the fixed-size virtual hard disks, so the actual file representing the disk will
full size unless it is full. not use the full size unless it is full.
* `boot_command` (array of strings) - This is an array of commands to type - `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot keys can be typed as well, and are covered in the section below on the
command. If this is not specified, it is assumed the installer will start boot command. If this is not specified, it is assumed the installer will
itself. start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual - `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified, five seconds and one minute 30 seconds, respectively. If this isn't
the default is 10 seconds. specified, the default is 10 seconds.
* `disk_size` (integer) - The size of the hard disk for the VM in megabytes. - `disk_size` (integer) - The size of the hard disk for the VM in megabytes.
The builder uses expandable, not fixed-size virtual hard disks, so the The builder uses expandable, not fixed-size virtual hard disks, so the
actual file representing the disk will not use the full size unless it is full. actual file representing the disk will not use the full size unless it
By default this is set to 40,000 (about 40 GB). is full. By default this is set to 40,000 (about 40 GB).
- `disk_type_id` (string) - The type of VMware virtual disk to create. The default is "1", which corresponds to a growable virtual disk split in 2GB files. This option is for advanced usage; modify it only if you know what you're doing. For more information, please consult the [Virtual Disk Manager User's Guide](http://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop VMware clients. For ESXi, refer to the proper ESXi documentation.
* `floppy_files` (array of strings) - A list of files to place onto a floppy - `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful disk that is attached when the VM is booted. This is most useful for
for unattended Windows installs, which look for an `Autounattend.xml` file unattended Windows installs, which look for an `Autounattend.xml` file on
on removable media. By default, no floppy will be attached. All files removable media. By default, no floppy will be attached. All files listed in
listed in this setting get placed into the root directory of the floppy this setting get placed into the root directory of the floppy and the floppy
and the floppy is attached as the first floppy device. Currently, no is attached as the first floppy device. Currently, no support exists for
support exists for creating sub-directories on the floppy. Wildcard creating sub-directories on the floppy. Wildcard characters (\*, ?,
characters (*, ?, and []) are allowed. Directory names are also allowed, and \[\]) are allowed. Directory names are also allowed, which will add all
which will add all the files found in the directory to the floppy. the files found in the directory to the floppy.
* `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this - `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
is "/Applications/VMware Fusion.app" but this setting allows you to "/Applications/VMware Fusion.app" but this setting allows you to
customize this. customize this.
* `guest_os_type` (string) - The guest OS type being installed. This will be - `guest_os_type` (string) - The guest OS type being installed. This will be
set in the VMware VMX. By default this is "other". By specifying a more specific set in the VMware VMX. By default this is "other". By specifying a more
OS type, VMware may perform some optimizations or virtual hardware changes specific OS type, VMware may perform some optimizations or virtual hardware
to better support the operating system running in the virtual machine. changes to better support the operating system running in the
virtual machine.
- `headless` (boolean) - Packer defaults to building VMware virtual machines
  by launching a GUI that shows the console of the machine being built. When
  this value is set to true, the machine will start without a console. For
  VMware machines, Packer will output VNC connection information in case you
  need to connect to the console to debug the build process.
- `http_directory` (string) - Path to a directory to serve using an
  HTTP server. The files in this directory will be available over HTTP and
  can be requested from the virtual machine. This is useful for hosting
  kickstart files and so on. By default this is "", which means no HTTP server
  will be started. The address and port of the HTTP server will be available
  as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
  maximum port to use for the HTTP server started to serve the
  `http_directory`. Because Packer often runs in parallel, Packer will choose
  a randomly available port in this range to run the HTTP server. If you want
  to force the HTTP server to be on one port, make this minimum and maximum
  port the same. By default the values are 8000 and 9000, respectively.
- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
  Packer will try these in order. If anything goes wrong attempting to
  download or while downloading a single URL, it will move on to the next. All
  URLs must point to the same file (same checksum). By default this is empty
  and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
- `output_directory` (string) - This is the path to the directory where the
  resulting virtual machine will be created. This may be relative or absolute.
  If relative, the path is relative to the working directory when `packer`
  is executed. This directory must not exist or be empty prior to running
  the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
  name of the build.
- `remote_cache_datastore` (string) - The path to the datastore where
  supporting files will be stored during the build on the remote machine. By
  default this is the same as the `remote_datastore` option. This only has an
  effect if `remote_type` is enabled.
- `remote_cache_directory` (string) - The path where the ISO and/or floppy
  files will be stored during the build on the remote machine. The path is
  relative to the `remote_cache_datastore` on the remote machine. By default
  this is "packer_cache". This only has an effect if `remote_type` is enabled.
- `remote_datastore` (string) - The path to the datastore where the resulting
  VM will be stored when it is built on the remote machine. By default this
  is "datastore1". This only has an effect if `remote_type` is enabled.
- `remote_host` (string) - The host of the remote machine used for access.
  This is only required if `remote_type` is enabled.
- `remote_password` (string) - The SSH password for the user used to access
  the remote machine. By default this is empty. This only has an effect if
  `remote_type` is enabled.
- `remote_type` (string) - The type of remote machine that will be used to
  build this VM rather than a local desktop product. The only value accepted
  for this currently is "esx5". If this is not set, a desktop product will
  be used. By default, this is not set.
- `remote_username` (string) - The username for the SSH user that will access
  the remote machine. This is required if `remote_type` is enabled.
- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
  `shutdown_command` for the virtual machine to actually shut down. If it
  doesn't shut down in this time, it is an error. By default, the timeout is
  "5m", or five minutes.
- `skip_compaction` (boolean) - VMware-created disks are defragmented and
  compacted at the end of the build process using `vmware-vdiskmanager`. In
  certain rare cases, this might actually end up making the resulting disks
  slightly larger. If you find this to be the case, you can disable compaction
  using this configuration value.
- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to
  upload into the VM. Valid values are "darwin", "linux", and "windows". By
  default, this is empty, which means VMware tools won't be uploaded.
- `tools_upload_path` (string) - The path in the VM to upload the
  VMware tools. This only takes effect if `tools_upload_flavor` is non-empty.
  This is a [configuration
  template](/docs/templates/configuration-templates.html) that has a single
  valid variable: `Flavor`, which will be the value of `tools_upload_flavor`.
  By default the upload path is set to `{{.Flavor}}.iso`. This setting is not
  used when `remote_type` is "esx5".
- `version` (string) - The [vmx hardware
  version](http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746)
  for the new virtual machine. Only the default value has been tested; any
  other value is experimental. The default value is "9".
- `vm_name` (string) - This is the name of the VMX file for the new virtual
  machine, without the file extension. By default this is "packer-BUILDNAME",
  where "BUILDNAME" is the name of the build.
- `vmdk_name` (string) - The filename of the virtual disk that'll be created,
  without the extension. This defaults to "packer".
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
  into the virtual machine VMX file. This is for advanced users who want to
  set properties such as memory, CPU, etc. A short example combining several
  of these optional settings follows this list.
- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
  except that it is run after the virtual machine is shut down, and before the
  virtual machine is exported.
- `vmx_template_path` (string) - Path to a [configuration
  template](/docs/templates/configuration-templates.html) that defines the
  contents of the virtual machine VMX file for VMware. This is for **advanced
  users only** as this can render the virtual machine non-functional. See
  below for more information. For basic VMX modifications, try
  `vmx_data` first.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
  to use for VNC access to the virtual machine. The builder uses VNC to type
  the initial `boot_command`. Because Packer generally runs in parallel,
  Packer uses a randomly chosen port in this range that appears available. By
  default this is 5900 to 6000. The minimum and maximum ports are inclusive.
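
To make a few of these optional settings concrete, here is a minimal,
illustrative fragment of a `vmware-iso` builder definition. It is a sketch
only: the ISO URL and checksum are placeholders, and the `vmx_data` keys
(`memsize`, `numvcpus`) are ordinary VMX properties chosen for illustration.

``` {.javascript}
{
  "type": "vmware-iso",
  "iso_url": "http://example.com/ubuntu-12.04-server-amd64.iso",
  "iso_checksum_type": "md5",
  "iso_checksum": "<md5 of the ISO>",
  "ssh_username": "packer",
  "headless": true,
  "http_directory": "http",
  "output_directory": "output-vmware-iso",
  "tools_upload_flavor": "linux",
  "vmx_data": {
    "memsize": "1024",
    "numvcpus": "2"
  }
}
```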

## Boot Command

The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.

As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.

The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing at the keyboard. There is a set of
special keys available. If these are in your boot command, they will be replaced
by the proper key:

- `<bs>` - Backspace
- `<del>` - Delete
- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
- `<esc>` - Simulates pressing the escape key.
- `<tab>` - Simulates pressing the tab key.
- `<f1>` - `<f12>` - Simulates pressing a function key.
- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
- `<spacebar>` - Simulates pressing the spacebar.
- `<insert>` - Simulates pressing the insert key.
- `<home>` `<end>` - Simulates pressing the home and end keys.
- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
  sending any additional keys. This is useful if you have to generally wait
  for the UI to update before typing more.

In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:

- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
  that is started serving the directory specified by the `http_directory`
  configuration parameter. If `http_directory` isn't specified, these will be
  blank!

Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:

``` {.text}
[
  "<esc><esc><enter><wait>",
  "/install/vmlinuz noapic ",
@ -314,71 +321,75 @@ an Ubuntu 12.04 installer:

## VMX Template

The heart of a VMware machine is the "vmx" file. This contains all the virtual
hardware metadata necessary for the VM to function. Packer by default uses a
[safe, flexible VMX
file](https://github.com/mitchellh/packer/blob/20541a7eda085aa5cf35bfed5069592ca49d106e/builder/vmware/step_create_vmx.go#L84).
But for advanced users, this template can be customized. This allows Packer to
build virtual machines of effectively any guest operating system type.

~> **This is an advanced feature.** Modifying the VMX template can easily
cause your virtual machine to not boot properly. Please only modify the template
if you know what you're doing.

Within the template, a handful of variables are available so that your template
can continue working with the rest of the Packer machinery. Using these
variables isn't required, however. An illustrative fragment follows the
list below.

- `Name` - The name of the virtual machine.
- `GuestOS` - The VMware-valid guest OS type.
- `DiskName` - The filename (without the suffix) of the main virtual disk.
- `ISOPath` - The path to the ISO to use for the OS installation.
- `Version` - The hardware version VMware will execute this VM under. Also
  known as the `virtualhw.version`.
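
As a rough sketch only (this is not the builder's actual default template), a
custom `vmx_template_path` file might interpolate these variables like so. The
VMX keys shown (`displayName`, `guestOS`, `virtualhw.version`, and the disk/ISO
file names) are common VMX properties used here purely for illustration; a real
template also needs memory, networking, and other device settings.

``` {.text}
.encoding = "UTF-8"
config.version = "8"
virtualhw.version = "{{ .Version }}"
displayName = "{{ .Name }}"
guestOS = "{{ .GuestOS }}"
scsi0:0.fileName = "{{ .DiskName }}.vmdk"
ide1:0.fileName = "{{ .ISOPath }}"
```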

## Building on a Remote vSphere Hypervisor

In addition to using the desktop products of VMware locally to build virtual
machines, Packer can use a remote VMware Hypervisor to build the virtual
machine.

-> **Note:** Packer supports ESXi 5.1 and above.

Before using a remote vSphere Hypervisor, you need to enable GuestIPHack by
running the following command:

``` {.text}
esxcli system settings advanced set -o /Net/GuestIPHack -i 1
```

When using a remote VMware Hypervisor, the builder still downloads the ISO and
various files locally, and uploads these to the remote machine. Packer currently
uses SSH to communicate to the ESXi machine rather than the vSphere API. At some
point, the vSphere API may be used.

Packer also requires VNC to issue boot commands during a build, which may be
disabled on some remote VMware Hypervisors. Please consult the appropriate
documentation on how to update VMware Hypervisor's firewall to allow these
connections.

To use a remote VMware vSphere Hypervisor to build your virtual machine, fill in
the required `remote_*` configurations:

- `remote_type` - This must be set to "esx5".
- `remote_host` - The host of the remote machine.

Additionally, there are some optional configurations that you'll likely have to
modify as well (a combined example follows this list):

- `remote_port` - The SSH port of the remote machine.
- `remote_datastore` - The path to the datastore where the VM will be stored
  on the ESXi machine.
- `remote_cache_datastore` - The path to the datastore where supporting files
  will be stored during the build on the remote machine.
- `remote_cache_directory` - The path where the ISO and/or floppy files will
  be stored during the build on the remote machine. The path is relative to
  the `remote_cache_datastore` on the remote machine.
- `remote_username` - The SSH username used to access the remote machine.
- `remote_password` - The SSH password for access to the remote machine.
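
For illustration only (the host name, credentials, and datastore names below are
placeholders, and the `esxi_password` user variable is assumed to be defined
elsewhere in the template), the remote settings slot into the builder block
alongside the usual `vmware-iso` options:

``` {.javascript}
{
  "type": "vmware-iso",
  "remote_type": "esx5",
  "remote_host": "esxi.example.com",
  "remote_datastore": "datastore1",
  "remote_cache_datastore": "datastore1",
  "remote_cache_directory": "packer_cache",
  "remote_username": "root",
  "remote_password": "{{user `esxi_password`}}"
}
```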

View File
@ -1,34 +1,37 @@

---
description: |
    This VMware Packer builder is able to create VMware virtual machines from an
    existing VMware virtual machine (a VMX file). It currently supports building
    virtual machines on hosts running VMware Fusion Professional for OS X, VMware
    Workstation for Linux and Windows, and VMware Player on Linux.
layout: docs
page_title: VMware Builder from VMX
...

# VMware Builder (from VMX)

Type: `vmware-vmx`

This VMware Packer builder is able to create VMware virtual machines from an
existing VMware virtual machine (a VMX file). It currently supports building
virtual machines on hosts running [VMware Fusion
Professional](http://www.vmware.com/products/fusion-professional/) for OS X,
[VMware Workstation](http://www.vmware.com/products/workstation/overview.html)
for Linux and Windows, and [VMware
Player](http://www.vmware.com/products/player/) on Linux.

The builder builds a virtual machine by cloning the VMX file using the clone
capabilities introduced in VMware Fusion Professional 6, Workstation 10, and
Player 6. After cloning the VM, it provisions software within the new machine,
shuts it down, and compacts the disks. The resulting folder contains a new
VMware virtual machine.

## Basic Example

Here is an example. This example is fully functional as long as the source path
points to a real VMX file with the proper settings:

``` {.javascript}
{
  "type": "vmware-vmx",
  "source_path": "/path/to/a/vm.vmx",
@ -40,110 +43,110 @@ path points to a real VMX file with the proper settings:

## Configuration Reference

There are many configuration options available for the VMware builder. They are
organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.

### Required:

- `source_path` (string) - Path to the source VMX file to clone.
- `ssh_username` (string) - The username to use to SSH into the machine once
  the OS is installed.

### Optional:

- `boot_command` (array of strings) - This is an array of commands to type
  when the virtual machine is first booted. The goal of these commands should
  be to type just enough to initialize the operating system installer. Special
  keys can be typed as well, and are covered in the section below on the
  boot command. If this is not specified, it is assumed the installer will
  start itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
  machine before typing the `boot_command`. The value of this should be
  a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
  five seconds and one minute 30 seconds, respectively. If this isn't
  specified, the default is 10 seconds.
- `floppy_files` (array of strings) - A list of files to place onto a floppy
  disk that is attached when the VM is booted. This is most useful for
  unattended Windows installs, which look for an `Autounattend.xml` file on
  removable media. By default, no floppy will be attached. All files listed in
  this setting get placed into the root directory of the floppy and the floppy
  is attached as the first floppy device. Currently, no support exists for
  creating sub-directories on the floppy. Wildcard characters (\*, ?, and
  \[\]) are allowed. Directory names are also allowed, which will add all the
  files found in the directory to the floppy.
- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
  "/Applications/VMware Fusion.app" but this setting allows you to
  customize this.
- `headless` (boolean) - Packer defaults to building VMware virtual machines
  by launching a GUI that shows the console of the machine being built. When
  this value is set to true, the machine will start without a console. For
  VMware machines, Packer will output VNC connection information in case you
  need to connect to the console to debug the build process.
- `http_directory` (string) - Path to a directory to serve using an
  HTTP server. The files in this directory will be available over HTTP and
  can be requested from the virtual machine. This is useful for hosting
  kickstart files and so on. By default this is "", which means no HTTP server
  will be started. The address and port of the HTTP server will be available
  as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
  maximum port to use for the HTTP server started to serve the
  `http_directory`. Because Packer often runs in parallel, Packer will choose
  a randomly available port in this range to run the HTTP server. If you want
  to force the HTTP server to be on one port, make this minimum and maximum
  port the same. By default the values are 8000 and 9000, respectively.
- `output_directory` (string) - This is the path to the directory where the
  resulting virtual machine will be created. This may be relative or absolute.
  If relative, the path is relative to the working directory when `packer`
  is executed. This directory must not exist or be empty prior to running
  the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
  name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine, unless
  a shutdown command is run by one of your provisioning scripts, in which case
  this may safely be omitted. If one or more scripts require a reboot, it is
  suggested to leave this blank (since reboots may fail) and to specify the
  final shutdown command in your last script.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
  `shutdown_command` for the virtual machine to actually shut down. If it
  doesn't shut down in this time, it is an error. By default, the timeout is
  "5m", or five minutes.
- `skip_compaction` (boolean) - VMware-created disks are defragmented and
  compacted at the end of the build process using `vmware-vdiskmanager`. In
  certain rare cases, this might actually end up making the resulting disks
  slightly larger. If you find this to be the case, you can disable compaction
  using this configuration value.
- `vm_name` (string) - This is the name of the VMX file for the new virtual
  machine, without the file extension. By default this is "packer-BUILDNAME",
  where "BUILDNAME" is the name of the build.
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
  into the virtual machine VMX file. This is for advanced users who want to
  set properties such as memory, CPU, etc. (See the example following
  this list.)
- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
  except that it is run after the virtual machine is shut down, and before the
  virtual machine is exported.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
  to use for VNC access to the virtual machine. The builder uses VNC to type
  the initial `boot_command`. Because Packer generally runs in parallel,
  Packer uses a randomly chosen port in this range that appears available. By
  default this is 5900 to 6000. The minimum and maximum ports are inclusive.
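
As a brief, illustrative sketch only (paths, the SSH user, and the shutdown
command are placeholders, and `memsize`/`numvcpus` are ordinary VMX keys chosen
for the example), a few of these optional settings might be combined like this:

``` {.javascript}
{
  "type": "vmware-vmx",
  "source_path": "/path/to/a/vm.vmx",
  "ssh_username": "packer",
  "headless": true,
  "shutdown_command": "sudo shutdown -P now",
  "vmx_data": {
    "memsize": "2048",
    "numvcpus": "2"
  }
}
```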

View File
@ -1,27 +1,28 @@

---
description: |
    The VMware Packer builder is able to create VMware virtual machines for use with
    any VMware product.
layout: docs
page_title: VMware Builder
...

# VMware Builder

The VMware Packer builder is able to create VMware virtual machines for use with
any VMware product.

Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:

- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
  creates a brand new VMware VM, installs an OS, provisions software within
  the OS, then exports that machine to create an image. This is best for
  people who want to start from scratch.
- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
  existing VMware machine (from a VMX file), runs provisioners on top of that
  VM, and exports that machine to create an image. This is best if you have an
  existing VMware VM you want to use as the source. As an additional benefit,
  you can feed the artifact of this builder back into Packer to iterate on
  a machine.

View File
@ -1,37 +1,42 @@

---
description: |
    The `packer build` Packer command takes a template and runs all the builds
    within it in order to generate a set of artifacts. The various builds specified
    within a template are executed in parallel, unless otherwise specified. And the
    artifacts that are created will be outputted at the end of the build.
layout: docs
page_title: 'Build - Command-Line'
...

# Command-Line: Build

The `packer build` Packer command takes a template and runs all the builds
within it in order to generate a set of artifacts. The various builds specified
within a template are executed in parallel, unless otherwise specified. And the
artifacts that are created will be outputted at the end of the build.

## Options

- `-color=false` - Disables colorized output. Enabled by default.
- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
  the builders that they should output debugging information. The exact
  behavior of debug mode is left to the builder. In general, builders usually
  will stop between each step, waiting for keyboard input before continuing.
  This will allow the user to inspect state and so on.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
  comma-separated names. Build names by default are the names of their
  builders, unless a specific `name` attribute is specified within
  the configuration.
- `-force` - Forces a builder to run when artifacts from a previous build
  prevent a build from running. The exact behavior of a forced build is left
  to the builder. In general, a builder supporting the forced build will
  remove the artifacts from the previous build. This will allow the user to
  repeat a build without having to manually clean these artifacts beforehand.
- `-only=foo,bar,baz` - Only build the builds with the given
  comma-separated names. Build names by default are the names of their
  builders, unless a specific `name` attribute is specified within
  the configuration. See the usage example below.

View File
@ -1,33 +1,34 @@

---
description: |
    The `packer fix` Packer command takes a template and finds backwards
    incompatible parts of it and brings it up to date so it can be used with the
    latest version of Packer. After you update to a new Packer release, you should
    run the fix command to make sure your templates work with the new release.
layout: docs
page_title: 'Fix - Command-Line'
...

# Command-Line: Fix

The `packer fix` Packer command takes a template and finds backwards
incompatible parts of it and brings it up to date so it can be used with the
latest version of Packer. After you update to a new Packer release, you should
run the fix command to make sure your templates work with the new release.

The fix command will output the changed template to standard out, so you should
redirect it using standard OS-specific techniques if you want to save it
to a file. For example, on Linux systems, you may want to do this:

``` {.text}
$ packer fix old.json > new.json
```

If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
output, you'll still see error messages.

-> **Even when Packer fix doesn't do anything** to the template, the template
will be outputted to standard out. Things such as configuration key ordering and
indentation may be changed. The output format, however, is pretty-printed for
human readability.

The full list of fixes that the fix command performs is visible in the help
output, which can be seen via `packer fix -h`.

View File
@ -1,33 +1,35 @@

---
description: |
    The `packer inspect` Packer command takes a template and outputs the various
    components a template defines. This can help you quickly learn about a template
    without having to dive into the JSON itself. The command will tell you things
    like what variables a template accepts, the builders it defines, the
    provisioners it defines and the order they'll run, and more.
layout: docs
page_title: 'Inspect - Command-Line'
...

# Command-Line: Inspect

The `packer inspect` Packer command takes a template and outputs the various
components a template defines. This can help you quickly learn about a template
without having to dive into the JSON itself. The command will tell you things
like what variables a template accepts, the builders it defines, the
provisioners it defines and the order they'll run, and more.

This command is extra useful when used with [machine-readable
output](/docs/command-line/machine-readable.html) enabled. The command outputs
the components in a way that is parseable by machines.

The command doesn't validate the actual configuration of the various components
(that is what the `validate` command is for), but it will validate the syntax of
your template by necessity.

## Usage Example

Given a basic template, here is an example of what the output might look like:

``` {.text}
$ packer inspect template.json
Variables and their defaults:

View File
@ -1,24 +1,27 @@

---
description: |
    Packer is controlled using a command-line interface. All interaction with Packer
    is done via the `packer` tool. Like many other command-line tools, the `packer`
    tool takes a subcommand to execute, and that subcommand may have additional
    options as well. Subcommands are executed with `packer SUBCOMMAND`, where
    "SUBCOMMAND" is obviously the actual command you wish to execute.
layout: docs
page_title: 'Packer Command-Line'
...

# Packer Command-Line

Packer is controlled using a command-line interface. All interaction with Packer
is done via the `packer` tool. Like many other command-line tools, the `packer`
tool takes a subcommand to execute, and that subcommand may have additional
options as well. Subcommands are executed with `packer SUBCOMMAND`, where
"SUBCOMMAND" is obviously the actual command you wish to execute.

If you run `packer` by itself, help will be displayed showing all available
subcommands and a brief synopsis of what they do. In addition to this, you can
run any `packer` command with the `-h` flag to output more detailed help for a
specific subcommand.
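
For instance, to see the detailed help for the `build` subcommand (any
subcommand works the same way):

``` {.text}
$ packer build -h
```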

In addition to the documentation available on the command-line, each command is
documented on this website. You can find the documentation for a specific
subcommand using the navigation to the left.

View File
@ -1,30 +1,33 @@

---
description: |
    By default, the output of Packer is very human-readable. It uses nice
    formatting, spacing, and colors in order to make Packer a pleasure to use.
    However, Packer was built with automation in mind. To that end, Packer supports
    a fully machine-readable output setting, allowing you to use Packer in automated
    environments.
layout: docs
page_title: 'Machine-Readable Output - Command-Line'
...

# Machine-Readable Output

By default, the output of Packer is very human-readable. It uses nice
formatting, spacing, and colors in order to make Packer a pleasure to use.
However, Packer was built with automation in mind. To that end, Packer supports
a fully machine-readable output setting, allowing you to use Packer in automated
environments.

The machine-readable output format is easy to use and read and was made with
Unix tools in mind, so it is awk/sed/grep/etc. friendly.

## Enabling

The machine-readable output format can be enabled by passing the
`-machine-readable` flag to any Packer command. This immediately enables all
output to become machine-readable on stdout. Logging, if enabled, continues to
appear on stderr. An example of the output is shown below:

``` {.text}
$ packer -machine-readable version
1376289459,,version,0.2.4
1376289459,,version-prerelease,
@ -32,54 +35,52 @@ $ packer -machine-readable version
1376289459,,ui,say,Packer v0.2.4.dev (eed6ece+CHANGES)
```

The format will be covered in more detail later. But as you can see, the output
immediately becomes machine-friendly. Try some other commands with the
`-machine-readable` flag to see!

## Format

The machine-readable format is a line-oriented, comma-delimited text format.
This makes it extremely easy to parse using standard Unix tools such as awk or
grep in addition to full programming languages like Ruby or Python.

The format is:

``` {.text}
timestamp,target,type,data...
```

Each component is explained below:

- **timestamp** is a Unix timestamp in UTC of when the message was printed.
- **target** is the target of the following output. This is empty if the
  message is related to Packer globally. Otherwise, this is generally a build
  name so you can relate output to a specific build while parallel builds
  are running.
- **type** is the type of machine-readable message being outputted. There are
  a set of standard types which are covered later, but each component of
  Packer (builders, provisioners, etc.) may output their own custom types as
  well, allowing the machine-readable output to be infinitely flexible.
- **data** is zero or more comma-separated values associated with the prior
  type. The exact amount and meaning of this data is type-dependent, so you
  must read the documentation associated with the type to
  understand fully.
Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
because it is more friendly to tools like awk.

Newlines within the format are replaced with their respective standard escape
sequence. Newlines become a literal `\n` within the output. Carriage returns
become a literal `\r`.

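To make the escaping rules concrete, here is a small, hypothetical Go program
(not part of Packer) that splits one line of this output into its components
and undoes the comma and newline substitutions; the sample line is taken from
the version output above.

``` {.go}
package main

import (
    "fmt"
    "strings"
)

// unescape reverses the substitutions described above: the comma placeholder
// and the literal \n and \r sequences.
func unescape(s string) string {
    s = strings.Replace(s, "%!(PACKER_COMMA)", ",", -1)
    s = strings.Replace(s, `\n`, "\n", -1)
    s = strings.Replace(s, `\r`, "\r", -1)
    return s
}

func main() {
    line := "1376289459,,ui,say,Packer v0.2.4.dev (eed6ece+CHANGES)"

    // The first three fields are timestamp, target, and type; everything
    // after the third comma is type-dependent data.
    parts := strings.SplitN(line, ",", 4)
    timestamp, target, msgType := parts[0], parts[1], parts[2]

    var data []string
    if len(parts) == 4 {
        for _, v := range strings.Split(parts[3], ",") {
            data = append(data, unescape(v))
        }
    }

    fmt.Printf("timestamp=%s target=%q type=%s data=%v\n",
        timestamp, target, msgType, data)
}
```
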
## Message Types

The set of machine-readable message types can be found in the [machine-readable
format](/docs/machine-readable/index.html) complete documentation section. This
section contains documentation on all the message types exposed by Packer core
as well as all the components that ship with Packer by default.

---
description: |
    The `packer push` command uploads a template and other required files to the
    Atlas build service, which will run your packer build for you.
layout: docs
page_title: 'Push - Command-Line'
...

# Command-Line: Push

The `packer push` command uploads a template and other required files to the
Atlas service, which will run your packer build for you. [Learn more about
Packer in Atlas.](https://atlas.hashicorp.com/help/packer/features)

Running builds remotely makes it easier to iterate on packer builds that are not
supported on your operating system, for example, building docker or QEMU while
developing on Mac or Windows. Also, the hard work of building VMs is offloaded
to dedicated servers with more CPU, memory, and network resources.

When you use push to run a build in Atlas, you may also want to store your build
artifacts in Atlas. In order to do that you will also need to configure the
[Atlas post-processor](/docs/post-processors/atlas.html). This is optional, and
both the post-processor and push commands can be used independently.

!> The push command uploads your template and other files, like provisioning
scripts, to Atlas. Take care not to upload files that you don't intend to, like
secrets or large binaries. **If you have secrets in your Packer template, you
should [move them into environment
variables](https://packer.io/docs/templates/user-variables.html).**

Most push behavior is [configured in your packer
template](/docs/templates/push.html). You can override or supplement your
configuration using the options below.

## Options

- `-message` - A message to identify the purpose or changes in this Packer
    template much like a VCS commit message. This message will be passed to the
    Packer build service. This option is also available as a short option `-m`.

- `-token` - Your access token for the Atlas API.

-> Login to Atlas to [generate an Atlas
Token](https://atlas.hashicorp.com/settings/tokens). The most convenient way to
configure your token is to set it to the `ATLAS_TOKEN` environment variable, but
you can also use `-token` on the command line.

- `-name` - The name of the build in the service. This typically looks like
    `hashicorp/precise64`, which follows the form `<username>/<buildname>`. This
    must be specified here or in your template.

- `-var` - Set a variable in your packer template. This option can be used
    multiple times. This is useful for setting version numbers for your build.

- `-var-file` - Set template variables from a file.

## Examples

Push a Packer template:

``` {.shell}
$ packer push -m "Updating the apache version" template.json
```

Push a Packer template with a custom token:

``` {.shell}
$ packer push -token ABCD1234 template.json
```

## Limits

`push` is limited to 5gb upload when pushing to Atlas. To be clear, packer *can*
build artifacts larger than 5gb, and Atlas *can* store artifacts larger than
5gb. However, the initial payload you push to *start* the build cannot exceed
5gb. If your boot ISO is larger than 5gb (for example if you are building OSX
images), you will need to put your boot ISO in an external web service and
download it during the packer run.

## Building Private `.iso` and `.dmg` Files

If you want to build a private `.iso` file you can upload the `.iso` to a secure
file hosting service like [Amazon
S3](http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html),
[Google Cloud
Storage](https://cloud.google.com/storage/docs/gsutil/commands/signurl), or
[Azure File
Service](https://msdn.microsoft.com/en-us/library/azure/dn194274.aspx) and
download it at build time using a signed URL. You should convert `.dmg` files to
`.iso` and follow a similar procedure.

Once you have added [variables in your packer
template](/docs/templates/user-variables.html) you can specify credentials or
signed URLs using Atlas environment variables, or via the `-var` flag when you
run `push`.

![Configure your signed URL in the Atlas build variables menu](/assets/images/packer-signed-urls.png)

---
description: |
    The `packer validate` Packer command is used to validate the syntax and
    configuration of a template. The command will return a zero exit status on
    success, and a non-zero exit status on failure. Additionally, if a template
    doesn't validate, any error messages will be outputted.
layout: docs
page_title: 'Validate - Command-Line'
...

# Command-Line: Validate

The `packer validate` Packer command is used to validate the syntax and
configuration of a [template](/docs/templates/introduction.html). The command
will return a zero exit status on success, and a non-zero exit status on
failure. Additionally, if a template doesn't validate, any error messages will
be outputted.

Example usage:

``` {.text}
$ packer validate my-template.json
Template validation failed. Errors are shown below.

Errors validating build 'vmware'. 1 error(s) occurred:
```

## Options

- `-syntax-only` - Only the syntax of the template is checked. The
    configuration is not validated.

---
description: |
    Packer Builders are the components of Packer responsible for creating a machine,
    bringing it to a point where it can be provisioned, and then turning that
    provisioned machine into some sort of machine image. Several builders are
    officially distributed with Packer itself, such as the AMI builder, the VMware
    builder, etc. However, it is possible to write custom builders using the Packer
    plugin interface, and this page documents how to do that.
layout: docs
page_title: 'Custom Builder - Extend Packer'
...

# Custom Builder Development

Packer Builders are the components of Packer responsible for creating a machine,
bringing it to a point where it can be provisioned, and then turning that
provisioned machine into some sort of machine image. Several builders are
officially distributed with Packer itself, such as the AMI builder, the VMware
builder, etc. However, it is possible to write custom builders using the Packer
plugin interface, and this page documents how to do that.

Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extend/developing-plugins.html).

~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.

## The Interface

The interface that must be implemented for a builder is the `packer.Builder`
interface. It is reproduced below for easy reference. The actual interface in
the source code contains some basic documentation as well explaining what each
method should do.

``` {.go}
type Builder interface {
    Prepare(...interface{}) error
    Run(ui Ui, hook Hook, cache Cache) (Artifact, error)
    Cancel()
}
```

### The "Prepare" Method ### The "Prepare" Method
The `Prepare` method for each builder is called prior to any runs with The `Prepare` method for each builder is called prior to any runs with the
the configuration that was given in the template. This is passed in as configuration that was given in the template. This is passed in as an array of
an array of `interface{}` types, but is generally `map[string]interface{}`. The prepare `interface{}` types, but is generally `map[string]interface{}`. The prepare
method is responsible for translating this configuration into an internal method is responsible for translating this configuration into an internal
structure, validating it, and returning any errors. structure, validating it, and returning any errors.
For multiple parameters, they should be merged together into the final For multiple parameters, they should be merged together into the final
configuration, with later parameters overwriting any previous configuration. configuration, with later parameters overwriting any previous configuration. The
The exact semantics of the merge are left to the builder author. exact semantics of the merge are left to the builder author.
For decoding the `interface{}` into a meaningful structure, the For decoding the `interface{}` into a meaningful structure, the
[mapstructure](https://github.com/mitchellh/mapstructure) library is recommended. [mapstructure](https://github.com/mitchellh/mapstructure) library is
Mapstructure will take an `interface{}` and decode it into an arbitrarily recommended. Mapstructure will take an `interface{}` and decode it into an
complex struct. If there are any errors, it generates very human friendly arbitrarily complex struct. If there are any errors, it generates very human
errors that can be returned directly from the prepare method. friendly errors that can be returned directly from the prepare method.
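As an illustration only, here is a minimal sketch of a `Prepare` method built
on mapstructure. The configuration struct, its fields, and the validation rule
are hypothetical and not taken from any real builder.

``` {.go}
package examplebuilder

import (
    "fmt"

    "github.com/mitchellh/mapstructure"
)

// Hypothetical configuration for this sketch; the fields and mapstructure
// tags are made up for illustration.
type config struct {
    ISOUrl     string `mapstructure:"iso_url"`
    OutputName string `mapstructure:"output_name"`
}

type Builder struct {
    config config
}

func (b *Builder) Prepare(raws ...interface{}) error {
    // Decode each raw configuration block into the same struct. Keys present
    // in later blocks overwrite earlier values, which gives the "later
    // parameters win" merge behavior described above.
    for _, raw := range raws {
        if err := mapstructure.Decode(raw, &b.config); err != nil {
            return err
        }
    }

    // Validate the merged configuration without side effects.
    if b.config.ISOUrl == "" {
        return fmt.Errorf("iso_url must be specified")
    }

    return nil
}
```

The sketch only validates the merged configuration and creates nothing on
disk, in line with the guidance that follows.
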
While it is not actively enforced, **no side effects** should occur from running
the `Prepare` method. Specifically, don't create files, don't launch virtual
machines, etc. Prepare's purpose is solely to configure the builder and validate
the configuration.

In addition to normal configuration, Packer will inject a
`map[string]interface{}` with a key of `packer.DebugConfigKey` set to boolean
`true` if debug mode is enabled for the build. If this is set to true, then the
builder should enable a debug mode which assists builder developers and advanced
users to introspect what is going on during a build. During debug builds,
parallelism is strictly disabled, so it is safe to request input from stdin and
so on.

### The "Run" Method ### The "Run" Method
`Run` is where all the interesting stuff happens. Run is executed, often `Run` is where all the interesting stuff happens. Run is executed, often in
in parallel for multiple builders, to actually build the machine, provision parallel for multiple builders, to actually build the machine, provision it, and
it, and create the resulting machine image, which is returned as an create the resulting machine image, which is returned as an implementation of
implementation of the `packer.Artifact` interface. the `packer.Artifact` interface.
The `Run` method takes three parameters. These are all very useful. The The `Run` method takes three parameters. These are all very useful. The
`packer.Ui` object is used to send output to the console. `packer.Hook` is `packer.Ui` object is used to send output to the console. `packer.Hook` is used
used to execute hooks, which are covered in more detail in the hook section to execute hooks, which are covered in more detail in the hook section below.
below. And `packer.Cache` is used to store files between multiple Packer And `packer.Cache` is used to store files between multiple Packer runs, and is
runs, and is covered in more detail in the cache section below. covered in more detail in the cache section below.
Because builder runs are typically a complex set of many steps, the Because builder runs are typically a complex set of many steps, the
[multistep](https://github.com/mitchellh/multistep) library is recommended [multistep](https://github.com/mitchellh/multistep) library is recommended to
to bring order to the complexity. Multistep is a library which allows you to bring order to the complexity. Multistep is a library which allows you to
separate your logic into multiple distinct "steps" and string them together. separate your logic into multiple distinct "steps" and string them together. It
It fully supports cancellation mid-step and so on. Please check it out, it is fully supports cancellation mid-step and so on. Please check it out, it is how
how the built-in builders are all implemented. the built-in builders are all implemented.
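To show the general shape, here is a hedged skeleton of a `Run` method using
multistep, continuing the sketch above (imports of the packer and multistep
packages are omitted). The step types, the state-bag keys, and the `runner`
field assumed on the builder are hypothetical; check the multistep
documentation for the exact interfaces before relying on this.

``` {.go}
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
    // Share objects between steps through the state bag.
    state := new(multistep.BasicStateBag)
    state.Put("config", &b.config)
    state.Put("cache", cache)
    state.Put("hook", hook)
    state.Put("ui", ui)

    // Hypothetical steps; a real builder strings together many of these.
    steps := []multistep.Step{
        &stepCreateMachine{},
        &stepProvision{},
        &stepExportImage{},
    }

    // b.runner is assumed to be a multistep.Runner field on the builder so
    // that Cancel (shown later) can reach it.
    b.runner = &multistep.BasicRunner{Steps: steps}
    b.runner.Run(state)

    // A common convention is for steps to record failures in the state bag.
    if rawErr, ok := state.GetOk("error"); ok {
        return nil, rawErr.(error)
    }

    // Return your packer.Artifact implementation here (see the artifact
    // section below); returning nil, nil means no artifact was produced.
    return nil, nil
}
```
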
Finally, as a result of `Run`, an implementation of `packer.Artifact` should be
returned. More details on creating a `packer.Artifact` are covered in the
artifact section below. If something goes wrong during the build, an error can
be returned, as well. Note that it is perfectly fine to produce no artifact and
no error, although this is rare.

### The "Cancel" Method ### The "Cancel" Method
The `Run` method is often run in parallel. The `Cancel` method can be The `Run` method is often run in parallel. The `Cancel` method can be called at
called at any time and requests cancellation of any builder run in progress. any time and requests cancellation of any builder run in progress. This method
This method should block until the run actually stops. should block until the run actually stops.
Cancels are most commonly triggered by external interrupts, such as the Cancels are most commonly triggered by external interrupts, such as the user
user pressing `Ctrl-C`. Packer will only exit once all the builders clean up, pressing `Ctrl-C`. Packer will only exit once all the builders clean up, so it
so it is important that you architect your builder in a way that it is quick is important that you architect your builder in a way that it is quick to
to respond to these cancellations and clean up after itself. respond to these cancellations and clean up after itself.
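If you follow the multistep skeleton sketched earlier and keep the runner on
the builder (the hypothetical `runner` field, not something Packer requires),
one way to handle this is to delegate cancellation to it. This is a sketch of
that pattern, not the only possible approach.

``` {.go}
func (b *Builder) Cancel() {
    // Delegate to the step runner, if Run started one, so the in-flight step
    // can stop and clean up before Cancel returns.
    if b.runner != nil {
        log.Println("Cancelling the step runner...")
        b.runner.Cancel()
    }
}
```
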
## Creating an Artifact

The `Run` method is expected to return an implementation of the
`packer.Artifact` interface. Each builder must create their own implementation.
The interface is very simple and the documentation on the interface is quite
clear.

The only part of an artifact that may be confusing is the `BuilderId` method.
This method must return an absolutely unique ID for the builder. In general, I
follow the practice of making the ID contain my GitHub username and then the
platform it is building for. For example, the builder ID of the VMware builder
is "mitchellh.vmware" or something similar.

Post-processors use the builder ID value in order to make some assumptions about
the artifact results, so it is important it never changes.

Other than the builder ID, the rest should be self-explanatory by reading the
[packer.Artifact interface documentation](#).

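For orientation only, here is a sketch of a trivial artifact for a builder that
produces a single image file, continuing the example package above (imports
such as `fmt` and `os` omitted). The struct and its field are made up, and the
exact method set of `packer.Artifact` should be checked against the interface
definition in your version of Packer.

``` {.go}
// Artifact is a hypothetical artifact wrapping one image file on disk.
type Artifact struct {
    imagePath string
}

// BuilderId must be globally unique and must never change; post-processors
// key off of it.
func (a *Artifact) BuilderId() string { return "exampleuser.examplecloud" }

func (a *Artifact) Files() []string { return []string{a.imagePath} }

func (a *Artifact) Id() string { return a.imagePath }

func (a *Artifact) String() string {
    return fmt.Sprintf("Image is stored at: %s", a.imagePath)
}

// State exposes builder-specific metadata; returning nil is fine if there is
// none to expose.
func (a *Artifact) State(name string) interface{} { return nil }

func (a *Artifact) Destroy() error { return os.Remove(a.imagePath) }
```
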
## Provisioning

Packer has built-in support for provisioning, but the moment when provisioning
runs must be invoked by the builder itself, since only the builder knows when
the machine is running and ready for communication.

When the machine is ready to be provisioned, run the `packer.HookProvision`
hook, making sure the communicator is not nil, since this is required for
provisioners. An example of calling the hook is shown below:

``` {.go}
hook.Run(packer.HookProvision, ui, comm, nil)
```

At this point, Packer will run the provisioners and no additional work is
necessary.

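In practice you will usually want to guard that call. The fragment below is a
sketch that would live inside `Run`; how `comm` is obtained is builder-specific
and not shown here.

``` {.go}
// comm is the packer.Communicator your builder established; ui and hook are
// the parameters passed to Run.
if comm == nil {
    return nil, fmt.Errorf("no communicator available for provisioning")
}
if err := hook.Run(packer.HookProvision, ui, comm, nil); err != nil {
    return nil, err
}
```
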
-> **Note:** Hooks are still undergoing thought around their general design
and will likely change in a future version. They aren't fully "baked" yet, so
they aren't documented here other than to tell you how to hook in provisioners.

## Caching Files

It is common for some builders to deal with very large files, or files that take
a long time to generate. For example, the VMware builder has the capability to
download the operating system ISO from the internet. This is a time-consuming
process, so it would be convenient to cache the file. This sort of caching is a
core part of Packer that is exposed to builders.

The cache interface is `packer.Cache`. It behaves much like a Go
[RWMutex](http://golang.org/pkg/sync/#RWMutex). The builder requests a "lock" on
certain cache keys, and is given exclusive access to that key for the duration
of the lock. This locking mechanism allows multiple builders to share cache data
even though they're running in parallel.

For example, both the VMware and VirtualBox builders support downloading an
operating system ISO from the internet. Most of the time, this ISO is identical.
The locking mechanisms of the cache allow one of the builders to download it
only once, but allow both builders to share the downloaded file.

The [documentation for packer.Cache](#) is very detailed in how it works.

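As a rough sketch of how a step or download helper might use the cache,
assuming `Lock` returns the on-disk path for a key (verify this against the
`packer.Cache` documentation in your version); the cache key, `isoURL`, and
`downloadISO` helper here are hypothetical.

``` {.go}
// Acquire exclusive access to this cache key; the returned path is where the
// cached file for the key lives.
cacheKey := "ubuntu-14.04-server-amd64.iso"
isoPath := cache.Lock(cacheKey)
defer cache.Unlock(cacheKey)

// Only download if a previous run (or another builder) hasn't already
// populated the cache.
if _, err := os.Stat(isoPath); os.IsNotExist(err) {
    if err := downloadISO(isoURL, isoPath); err != nil {
        return "", err
    }
}

return isoPath, nil
```
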