Merge branch 'master' into custom-targetpath

Olivier Tremblay 2015-08-20 07:26:22 -04:00
commit 661552dfd5
171 changed files with 7044 additions and 5727 deletions

View File

@ -1,3 +1,53 @@
## (Unreleased)
IMPROVEMENTS:
* builder/docker: Now supports Download so it can be used with the file
provisioner to download a file from a container. [GH-2585]
* post-processor/vagrant: Like the compress post-processor, vagrant now uses a
parallel gzip algorithm to compress vagrant boxes. [GH-2590]
BUG FIXES:
* builder/parallels: Fix interpolation in parallels_tools_guest_path [GH-2543]
## 0.8.5 (Aug 10, 2015)
FEATURES:
* **[Beta]** Artifice post-processor: Override packer artifacts during post-
processing. This allows you to extract artifacts from a packer builder
and use them with other post-processors like compress, docker, and Atlas.
IMPROVEMENTS:
* Many docs have been updated and corrected; big thanks to our contributors!
* builder/openstack: Add debug logging for IP addresses used for SSH [GH-2513]
* builder/openstack: Add option to use existing SSH keypair [GH-2512]
* builder/openstack: Add support for Glance metadata [GH-2434]
* builder/qemu and builder/vmware: Packer's VNC connection no longer asks for
an exclusive connection [GH-2522]
* provisioner/salt-masterless: Can now customize salt remote directories [GH-2519]
BUG FIXES:
* builder/amazon: Improve instance cleanup by storing id sooner [GH-2404]
* builder/amazon: Only fetch windows password when using WinRM communicator [GH-2538]
* builder/openstack: Support IPv6 SSH address [GH-2450]
* builder/openstack: Track new IP address discovered during RackConnect [GH-2514]
* builder/qemu: Add 100ms delay between VNC key events. [GH-2415]
* post-processor/atlas: atlas_url configuration option works now [GH-2478]
* post-processor/compress: Now supports interpolation in output config [GH-2414]
* provisioner/powershell: Elevated runs now receive environment variables [GH-2378]
* provisioner/salt-masterless: Clarify error messages when we can't create or
write to the temp directory [GH-2518]
* provisioner/salt-masterless: Copy state even if /srv/salt exists already [GH-1699]
* provisioner/salt-masterless: Make sure /etc/salt exists before writing to it [GH-2520]
* provisioner/winrm: Connect to the correct port when using NAT with
VirtualBox / VMware [GH-2399]
Note: 0.8.3 was pulled and 0.8.4 was skipped.
## 0.8.2 (July 17, 2015)
IMPROVEMENTS:

View File

@ -1,8 +1,6 @@
TEST?=./...
VETARGS?=-asmdecl -atomic -bool -buildtags -copylocks -methods \
-nilfunc -printf -rangeloops -shift -structtags -unsafeptr
default: test
default: test vet dev
bin:
@sh -c "$(CURDIR)/scripts/build.sh"
@ -16,6 +14,7 @@ generate:
go generate ./...
test:
@echo "Running tests on:"; git symbolic-ref HEAD; git rev-parse HEAD
go test $(TEST) $(TESTARGS) -timeout=10s
@$(MAKE) vet
@ -31,19 +30,23 @@ testrace:
go test -race $(TEST) $(TESTARGS)
updatedeps:
@echo "Updating deps on:"; git symbolic-ref HEAD; git rev-parse HEAD
go get -u github.com/mitchellh/gox
go get -u golang.org/x/tools/cmd/stringer
go list ./... \
| xargs go list -f '{{join .Deps "\n"}}' \
| grep -v github.com/mitchellh/packer \
| grep -v '/internal/' \
| sort -u \
| xargs go get -f -u -v
@echo "Finished updating deps, now on:"; git symbolic-ref HEAD; git rev-parse HEAD
vet:
@go tool vet 2>/dev/null ; if [ $$? -eq 3 ]; then \
@echo "Running go vet on:"; git symbolic-ref HEAD; git rev-parse HEAD
@go vet 2>/dev/null ; if [ $$? -eq 3 ]; then \
go get golang.org/x/tools/cmd/vet; \
fi
@go tool vet $(VETARGS) . ; if [ $$? -eq 1 ]; then \
@go vet ./... ; if [ $$? -eq 1 ]; then \
echo ""; \
echo "Vet found suspicious constructs. Please check the reported constructs"; \
echo "and fix them if necessary before submitting the code for reviewal."; \

View File

@ -31,6 +31,8 @@ install:
build_script:
- go test -v ./...
- go vet ./...
- git rev-parse HEAD
test: off

View File

@ -5,7 +5,6 @@ import (
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/mitchellh/multistep"
awscommon "github.com/mitchellh/packer/builder/amazon/common"
@ -52,12 +51,12 @@ func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction {
}
createVolume := &ec2.CreateVolumeInput{
AvailabilityZone: instance.Placement.AvailabilityZone,
Size: aws.Long(vs),
Size: aws.Int64(vs),
SnapshotID: rootDevice.EBS.SnapshotID,
VolumeType: rootDevice.EBS.VolumeType,
IOPS: rootDevice.EBS.IOPS,
}
log.Printf("Create args: %s", awsutil.StringValue(createVolume))
log.Printf("Create args: %s", createVolume)
createVolumeResp, err := ec2conn.CreateVolume(createVolume)
if err != nil {

View File

@ -34,7 +34,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
}
if s.RootVolumeSize > *newDevice.EBS.VolumeSize {
newDevice.EBS.VolumeSize = aws.Long(s.RootVolumeSize)
newDevice.EBS.VolumeSize = aws.Int64(s.RootVolumeSize)
}
}
@ -64,7 +64,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state
ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID))
amis := make(map[string]string)
amis[ec2conn.Config.Region] = *registerResp.ImageID
amis[*ec2conn.Config.Region] = *registerResp.ImageID
state.Put("amis", amis)
// Wait for the image to become ready

View File

@ -9,6 +9,7 @@ import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/mitchellh/packer/template/interpolate"
)
@ -31,7 +32,7 @@ func (c *AccessConfig) Config() (*aws.Config, error) {
}},
&credentials.EnvProvider{},
&credentials.SharedCredentialsProvider{Filename: "", Profile: ""},
&credentials.EC2RoleProvider{},
&ec2rolecreds.EC2RoleProvider{},
})
region, err := c.Region()
@ -40,9 +41,9 @@ func (c *AccessConfig) Config() (*aws.Config, error) {
}
return &aws.Config{
Region: region,
Region: aws.String(region),
Credentials: creds,
MaxRetries: 11,
MaxRetries: aws.Int(11),
}, nil
}

View File

@ -70,7 +70,7 @@ func (a *Artifact) Destroy() error {
regionConfig := &aws.Config{
Credentials: a.Conn.Config.Credentials,
Region: region,
Region: aws.String(region),
}
regionConn := ec2.New(regionConfig)
@ -88,7 +88,7 @@ func (a *Artifact) Destroy() error {
if len(errors) == 1 {
return errors[0]
} else {
return &packer.MultiError{errors}
return &packer.MultiError{Errors: errors}
}
}

View File

@ -32,20 +32,20 @@ func buildBlockDevices(b []BlockDevice) []*ec2.BlockDeviceMapping {
for _, blockDevice := range b {
ebsBlockDevice := &ec2.EBSBlockDevice{
VolumeType: aws.String(blockDevice.VolumeType),
VolumeSize: aws.Long(blockDevice.VolumeSize),
DeleteOnTermination: aws.Boolean(blockDevice.DeleteOnTermination),
VolumeSize: aws.Int64(blockDevice.VolumeSize),
DeleteOnTermination: aws.Bool(blockDevice.DeleteOnTermination),
}
// IOPS is only valid for SSD Volumes
if blockDevice.VolumeType != "" && blockDevice.VolumeType != "standard" && blockDevice.VolumeType != "gp2" {
ebsBlockDevice.IOPS = aws.Long(blockDevice.IOPS)
ebsBlockDevice.IOPS = aws.Int64(blockDevice.IOPS)
}
// You cannot specify Encrypted if you specify a Snapshot ID
if blockDevice.SnapshotId != "" {
ebsBlockDevice.SnapshotID = aws.String(blockDevice.SnapshotId)
} else if blockDevice.Encrypted {
ebsBlockDevice.Encrypted = aws.Boolean(blockDevice.Encrypted)
ebsBlockDevice.Encrypted = aws.Bool(blockDevice.Encrypted)
}
mapping := &ec2.BlockDeviceMapping{

View File

@ -5,7 +5,6 @@ import (
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/aws/aws-sdk-go/service/ec2"
)
@ -29,8 +28,8 @@ func TestBlockDevice(t *testing.T) {
EBS: &ec2.EBSBlockDevice{
SnapshotID: aws.String("snap-1234"),
VolumeType: aws.String("standard"),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(true),
VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Bool(true),
},
},
},
@ -45,8 +44,8 @@ func TestBlockDevice(t *testing.T) {
VirtualName: aws.String(""),
EBS: &ec2.EBSBlockDevice{
VolumeType: aws.String(""),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(false),
VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Bool(false),
},
},
},
@ -64,9 +63,9 @@ func TestBlockDevice(t *testing.T) {
VirtualName: aws.String(""),
EBS: &ec2.EBSBlockDevice{
VolumeType: aws.String("io1"),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(true),
IOPS: aws.Long(1000),
VolumeSize: aws.Int64(8),
DeleteOnTermination: aws.Bool(true),
IOPS: aws.Int64(1000),
},
},
},
@ -93,13 +92,13 @@ func TestBlockDevice(t *testing.T) {
got := blockDevices.BuildAMIDevices()
if !reflect.DeepEqual(expected, got) {
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s",
awsutil.StringValue(expected), awsutil.StringValue(got))
expected, got)
}
if !reflect.DeepEqual(expected, blockDevices.BuildLaunchDevices()) {
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s",
awsutil.StringValue(expected),
awsutil.StringValue(blockDevices.BuildLaunchDevices()))
expected,
blockDevices.BuildLaunchDevices())
}
}
}

View File

@ -181,8 +181,6 @@ func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
time.Sleep(time.Duration(sleepSeconds) * time.Second)
}
return
}
func isTransientNetworkError(err error) bool {

View File

@ -5,6 +5,7 @@ import (
"sync"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/mitchellh/multistep"
@ -21,7 +22,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
ec2conn := state.Get("ec2").(*ec2.EC2)
ui := state.Get("ui").(packer.Ui)
amis := state.Get("amis").(map[string]string)
ami := amis[ec2conn.Config.Region]
ami := amis[*ec2conn.Config.Region]
if len(s.Regions) == 0 {
return multistep.ActionContinue
@ -33,7 +34,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
var wg sync.WaitGroup
errs := new(packer.MultiError)
for _, region := range s.Regions {
if region == ec2conn.Config.Region {
if region == *ec2conn.Config.Region {
ui.Message(fmt.Sprintf(
"Avoiding copying AMI to duplicate region %s", region))
continue
@ -44,7 +45,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
go func(region string) {
defer wg.Done()
id, err := amiRegionCopy(state, s.AccessConfig, s.Name, ami, region, ec2conn.Config.Region)
id, err := amiRegionCopy(state, s.AccessConfig, s.Name, ami, region, *ec2conn.Config.Region)
lock.Lock()
defer lock.Unlock()
@ -84,7 +85,7 @@ func amiRegionCopy(state multistep.StateBag, config *AccessConfig, name string,
if err != nil {
return "", err
}
awsConfig.Region = target
awsConfig.Region = aws.String(target)
regionconn := ec2.New(awsConfig)
resp, err := regionconn.CopyImage(&ec2.CopyImageInput{

View File

@ -36,7 +36,7 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction {
regionconn := ec2.New(&aws.Config{
Credentials: ec2conn.Config.Credentials,
Region: region,
Region: aws.String(region),
})
// Retrieve image list for given AMI

View File

@ -26,11 +26,10 @@ type StepGetPassword struct {
func (s *StepGetPassword) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui)
image := state.Get("source_image").(*ec2.Image)
// Skip if we're not Windows...
if image.Platform == nil || *image.Platform != "windows" {
log.Printf("[INFO] Not Windows, skipping get password...")
// Skip if we're not using winrm
if s.Comm.Type != "winrm" {
log.Printf("[INFO] Not using winrm communicator, skipping get password...")
return multistep.ActionContinue
}

View File

@ -90,7 +90,7 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc
ui.Say(fmt.Sprintf("Modifying attributes on AMI (%s)...", ami))
regionconn := ec2.New(&aws.Config{
Credentials: ec2conn.Config.Credentials,
Region: region,
Region: aws.String(region),
})
for name, input := range options {
ui.Message(fmt.Sprintf("Modifying: %s", name))

View File

@ -31,7 +31,7 @@ type StepRunSourceInstance struct {
UserData string
UserDataFile string
instance *ec2.Instance
instanceId string
spotRequest *ec2.SpotInstanceRequest
}
@ -141,8 +141,8 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
ImageID: &s.SourceAMI,
InstanceType: &s.InstanceType,
UserData: &userData,
MaxCount: aws.Long(1),
MinCount: aws.Long(1),
MaxCount: aws.Int64(1),
MinCount: aws.Int64(1),
IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile},
BlockDeviceMappings: s.BlockDevices.BuildLaunchDevices(),
Placement: &ec2.Placement{AvailabilityZone: &s.AvailabilityZone},
@ -151,11 +151,11 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
if s.SubnetId != "" && s.AssociatePublicIpAddress {
runOpts.NetworkInterfaces = []*ec2.InstanceNetworkInterfaceSpecification{
&ec2.InstanceNetworkInterfaceSpecification{
DeviceIndex: aws.Long(0),
DeviceIndex: aws.Int64(0),
AssociatePublicIPAddress: &s.AssociatePublicIpAddress,
SubnetID: &s.SubnetId,
Groups: securityGroupIds,
DeleteOnTermination: aws.Boolean(true),
DeleteOnTermination: aws.Bool(true),
},
}
} else {
@ -185,11 +185,11 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
IAMInstanceProfile: &ec2.IAMInstanceProfileSpecification{Name: &s.IamInstanceProfile},
NetworkInterfaces: []*ec2.InstanceNetworkInterfaceSpecification{
&ec2.InstanceNetworkInterfaceSpecification{
DeviceIndex: aws.Long(0),
DeviceIndex: aws.Int64(0),
AssociatePublicIPAddress: &s.AssociatePublicIpAddress,
SubnetID: &s.SubnetId,
Groups: securityGroupIds,
DeleteOnTermination: aws.Boolean(true),
DeleteOnTermination: aws.Bool(true),
},
},
Placement: &ec2.SpotPlacement{
@ -235,6 +235,9 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
instanceId = *spotResp.SpotInstanceRequests[0].InstanceID
}
// Set the instance ID so that the cleanup works properly
s.instanceId = instanceId
ui.Message(fmt.Sprintf("Instance ID: %s", instanceId))
ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId))
stateChange := StateChangeConf{
@ -251,7 +254,7 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
return multistep.ActionHalt
}
s.instance = latestInstance.(*ec2.Instance)
instance := latestInstance.(*ec2.Instance)
ec2Tags := make([]*ec2.Tag, 1, len(s.Tags)+1)
ec2Tags[0] = &ec2.Tag{Key: aws.String("Name"), Value: aws.String("Packer Builder")}
@ -261,7 +264,7 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
_, err = ec2conn.CreateTags(&ec2.CreateTagsInput{
Tags: ec2Tags,
Resources: []*string{s.instance.InstanceID},
Resources: []*string{instance.InstanceID},
})
if err != nil {
ui.Message(
@ -269,20 +272,20 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
}
if s.Debug {
if s.instance.PublicDNSName != nil && *s.instance.PublicDNSName != "" {
ui.Message(fmt.Sprintf("Public DNS: %s", *s.instance.PublicDNSName))
if instance.PublicDNSName != nil && *instance.PublicDNSName != "" {
ui.Message(fmt.Sprintf("Public DNS: %s", *instance.PublicDNSName))
}
if s.instance.PublicIPAddress != nil && *s.instance.PublicIPAddress != "" {
ui.Message(fmt.Sprintf("Public IP: %s", *s.instance.PublicIPAddress))
if instance.PublicIPAddress != nil && *instance.PublicIPAddress != "" {
ui.Message(fmt.Sprintf("Public IP: %s", *instance.PublicIPAddress))
}
if s.instance.PrivateIPAddress != nil && *s.instance.PrivateIPAddress != "" {
ui.Message(fmt.Sprintf("Private IP: %s", *s.instance.PrivateIPAddress))
if instance.PrivateIPAddress != nil && *instance.PrivateIPAddress != "" {
ui.Message(fmt.Sprintf("Private IP: %s", *instance.PrivateIPAddress))
}
}
state.Put("instance", s.instance)
state.Put("instance", instance)
return multistep.ActionContinue
}
@ -313,16 +316,15 @@ func (s *StepRunSourceInstance) Cleanup(state multistep.StateBag) {
}
// Terminate the source instance if it exists
if s.instance != nil {
if s.instanceId != "" {
ui.Say("Terminating the source AWS instance...")
if _, err := ec2conn.TerminateInstances(&ec2.TerminateInstancesInput{InstanceIDs: []*string{s.instance.InstanceID}}); err != nil {
if _, err := ec2conn.TerminateInstances(&ec2.TerminateInstancesInput{InstanceIDs: []*string{&s.instanceId}}); err != nil {
ui.Error(fmt.Sprintf("Error terminating instance, may still be around: %s", err))
return
}
stateChange := StateChangeConf{
Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
Refresh: InstanceStateRefreshFunc(ec2conn, *s.instance.InstanceID),
Refresh: InstanceStateRefreshFunc(ec2conn, s.instanceId),
Target: "terminated",
}

View File

@ -59,8 +59,8 @@ func (s *StepSecurityGroup) Run(state multistep.StateBag) multistep.StepAction {
req := &ec2.AuthorizeSecurityGroupIngressInput{
GroupID: groupResp.GroupID,
IPProtocol: aws.String("tcp"),
FromPort: aws.Long(int64(port)),
ToPort: aws.Long(int64(port)),
FromPort: aws.Int64(int64(port)),
ToPort: aws.Int64(int64(port)),
CIDRIP: aws.String("0.0.0.0/0"),
}

View File

@ -38,7 +38,7 @@ func (s *stepCreateAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state
ui.Message(fmt.Sprintf("AMI: %s", *createResp.ImageID))
amis := make(map[string]string)
amis[ec2conn.Config.Region] = *createResp.ImageID
amis[*ec2conn.Config.Region] = *createResp.ImageID
state.Put("amis", amis)
// Wait for the image to become ready

View File

@ -44,7 +44,7 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
// Set the AMI ID in the state
ui.Say(fmt.Sprintf("AMI: %s", *registerResp.ImageID))
amis := make(map[string]string)
amis[ec2conn.Config.Region] = *registerResp.ImageID
amis[*ec2conn.Config.Region] = *registerResp.ImageID
state.Put("amis", amis)
// Wait for the image to become ready

View File

@ -10,11 +10,11 @@ import (
"os"
"runtime"
"code.google.com/p/gosshold/ssh"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/packer"
"golang.org/x/crypto/ssh"
)
type stepCreateSSHKey struct {

View File

@ -1,6 +1,7 @@
package docker
import (
"archive/tar"
"bytes"
"fmt"
"io"
@ -24,8 +25,8 @@ type Communicator struct {
HostDir string
ContainerDir string
Version *version.Version
Config *Config
lock sync.Mutex
Config *Config
lock sync.Mutex
}
func (c *Communicator) Start(remote *packer.RemoteCmd) error {
@ -194,8 +195,42 @@ func (c *Communicator) UploadDir(dst string, src string, exclude []string) error
return nil
}
// Download pulls a file out of a container using `docker cp`. We have a source
// path and want to write to an io.Writer, not a file. We use - to make docker
// cp to write to stdout, and then copy the stream to our destination io.Writer.
func (c *Communicator) Download(src string, dst io.Writer) error {
panic("not implemented")
log.Printf("Downloading file from container: %s:%s", c.ContainerId, src)
localCmd := exec.Command("docker", "cp", fmt.Sprintf("%s:%s", c.ContainerId, src), "-")
pipe, err := localCmd.StdoutPipe()
if err != nil {
return fmt.Errorf("Failed to open pipe: %s", err)
}
if err = localCmd.Start(); err != nil {
return fmt.Errorf("Failed to start download: %s", err)
}
// When you use - to send docker cp to stdout it is streamed as a tar; this
// enables it to work with directories. We don't actually support
// directories in Download() but we still need to handle the tar format.
archive := tar.NewReader(pipe)
_, err = archive.Next()
if err != nil {
return fmt.Errorf("Failed to read header from tar stream: %s", err)
}
numBytes, err := io.Copy(dst, archive)
if err != nil {
return fmt.Errorf("Failed to pipe download: %s", err)
}
log.Printf("Copied %d bytes for %s", numBytes, src)
if err = localCmd.Wait(); err != nil {
return fmt.Errorf("Failed to download '%s' from container: %s", src, err)
}
return nil
}
// canExec tells us whether `docker exec` is supported

View File

@ -1,10 +1,129 @@
package docker
import (
"github.com/mitchellh/packer/packer"
"crypto/sha256"
"io/ioutil"
"os"
"os/exec"
"runtime"
"strings"
"testing"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/provisioner/file"
"github.com/mitchellh/packer/template"
)
func TestCommunicator_impl(t *testing.T) {
var _ packer.Communicator = new(Communicator)
}
func TestUploadDownload(t *testing.T) {
ui := packer.TestUi(t)
cache := &packer.FileCache{CacheDir: os.TempDir()}
tpl, err := template.Parse(strings.NewReader(dockerBuilderConfig))
if err != nil {
t.Fatalf("Unable to parse config: %s", err)
}
// Make sure we only run this on linux hosts
if os.Getenv("PACKER_ACC") == "" {
t.Skip("This test is only run with PACKER_ACC=1")
}
if runtime.GOOS != "linux" {
t.Skip("This test is only supported on linux")
}
cmd := exec.Command("docker", "-v")
cmd.Run()
if !cmd.ProcessState.Success() {
t.Error("docker command not found; please make sure docker is installed")
}
// Setup the builder
builder := &Builder{}
warnings, err := builder.Prepare(tpl.Builders["docker"].Config)
if err != nil {
t.Fatalf("Error preparing configuration %s", err)
}
if len(warnings) > 0 {
t.Fatal("Encountered configuration warnings; aborting")
}
// Setup the provisioners
upload := &file.Provisioner{}
err = upload.Prepare(tpl.Provisioners[0].Config)
if err != nil {
t.Fatalf("Error preparing upload: %s", err)
}
download := &file.Provisioner{}
err = download.Prepare(tpl.Provisioners[1].Config)
if err != nil {
t.Fatalf("Error preparing download: %s", err)
}
// Preemptive cleanup. Honestly I don't know why you would want to get rid
// of my strawberry cake. It's so tasty! Do you not like cake? Are you a
// cake-hater? Or are you keeping all the cake all for yourself? So selfish!
defer os.Remove("my-strawberry-cake")
// Add hooks so the provisioners run during the build
hooks := map[string][]packer.Hook{}
hooks[packer.HookProvision] = []packer.Hook{
&packer.ProvisionHook{
Provisioners: []packer.Provisioner{
upload,
download,
},
},
}
hook := &packer.DispatchHook{Mapping: hooks}
// Run things
artifact, err := builder.Run(ui, hook, cache)
if err != nil {
t.Fatalf("Error running build %s", err)
}
// Preemptive cleanup
defer artifact.Destroy()
// Verify that the thing we downloaded is the same thing we sent up.
// Complain loudly if it isn't.
inputFile, err := ioutil.ReadFile("test-fixtures/onecakes/strawberry")
if err != nil {
t.Fatalf("Unable to read input file: %s", err)
}
outputFile, err := ioutil.ReadFile("my-strawberry-cake")
if err != nil {
t.Fatalf("Unable to read output file: %s", err)
}
if sha256.Sum256(inputFile) != sha256.Sum256(outputFile) {
t.Fatalf("Input and output files do not match\n"+
"Input:\n%s\nOutput:\n%s\n", inputFile, outputFile)
}
}
const dockerBuilderConfig = `
{
"builders": [
{
"type": "docker",
"image": "alpine",
"export_path": "alpine.tar",
"run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"]
}
],
"provisioners": [
{
"type": "file",
"source": "test-fixtures/onecakes/strawberry",
"destination": "/strawberry-cake"
},
{
"type": "file",
"source": "/strawberry-cake",
"destination": "my-strawberry-cake",
"direction": "download"
}
]
}
`

View File

@ -26,7 +26,7 @@ func (s *StepConnectDocker) Run(state multistep.StateBag) multistep.StepAction {
HostDir: tempDir,
ContainerDir: "/packer-files",
Version: version,
Config: config,
Config: config,
}
state.Put("communicator", comm)

View File

@ -0,0 +1 @@
chocolate!

View File

@ -0,0 +1 @@
vanilla!

View File

@ -0,0 +1 @@
strawberry!

View File

@ -1,15 +1,16 @@
package googlecompute
import (
"code.google.com/p/gosshold/ssh"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"fmt"
"os"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"os"
"golang.org/x/crypto/ssh"
)
// StepCreateSSHKey represents a Packer build step that generates SSH key pairs.

View File

@ -75,8 +75,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
Flavor: b.config.Flavor,
},
&StepKeyPair{
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("os_%s.pem", b.config.PackerBuildName),
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("os_%s.pem", b.config.PackerBuildName),
KeyPairName: b.config.SSHKeyPairName,
PrivateKeyFile: b.config.RunConfig.Comm.SSHPrivateKey,
},
&StepRunSourceServer{
Name: b.config.ImageName,

View File

@ -8,7 +8,8 @@ import (
// ImageConfig is for common configuration related to creating Images.
type ImageConfig struct {
ImageName string `mapstructure:"image_name"`
ImageName string `mapstructure:"image_name"`
ImageMetadata map[string]string `mapstructure:"metadata"`
}
func (c *ImageConfig) Prepare(ctx *interpolate.Context) []error {

View File

@ -10,8 +10,9 @@ import (
// RunConfig contains configuration for running an instance from a source
// image and details on how to access that launched image.
type RunConfig struct {
Comm communicator.Config `mapstructure:",squash"`
SSHInterface string `mapstructure:"ssh_interface"`
Comm communicator.Config `mapstructure:",squash"`
SSHKeyPairName string `mapstructure:"ssh_keypair_name"`
SSHInterface string `mapstructure:"ssh_interface"`
SourceImage string `mapstructure:"source_image"`
Flavor string `mapstructure:"flavor"`
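
For reference, a minimal sketch of an openstack builder section exercising the two options introduced above (ssh_keypair_name and metadata); the image, flavor, and keypair values are placeholders, and the usual authentication settings are omitted:

{
  "type": "openstack",
  "image_name": "my-image",
  "source_image": "<source image id>",
  "flavor": "m1.small",
  "ssh_username": "root",
  "ssh_keypair_name": "my-existing-keypair",
  "metadata": {
    "built_by": "packer"
  }
}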

View File

@ -92,6 +92,4 @@ func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
log.Printf("Waiting for state to become: %s currently %s (%d%%)", conf.Target, currentState, currentProgress)
time.Sleep(2 * time.Second)
}
return
}

View File

@ -23,6 +23,7 @@ func CommHost(
// If we have a specific interface, try that
if sshinterface != "" {
if addr := sshAddrFromPool(s, sshinterface); addr != "" {
log.Printf("[DEBUG] Using IP address %s from specified interface %s for SSH", addr, sshinterface)
return addr, nil
}
}
@ -30,15 +31,18 @@ func CommHost(
// If we have a floating IP, use that
ip := state.Get("access_ip").(*floatingip.FloatingIP)
if ip != nil && ip.IP != "" {
log.Printf("[DEBUG] Using floating IP %s for SSH", ip.IP)
return ip.IP, nil
}
if s.AccessIPv4 != "" {
log.Printf("[DEBUG] Using AccessIPv4 %s for SSH", s.AccessIPv4)
return s.AccessIPv4, nil
}
// Try to get it from the requested interface
if addr := sshAddrFromPool(s, sshinterface); addr != "" {
log.Printf("[DEBUG] Using IP address %s for SSH", addr)
return addr, nil
}
@ -101,11 +105,15 @@ func sshAddrFromPool(s *servers.Server, desired string) string {
if address["OS-EXT-IPS:type"] == "floating" {
addr = address["addr"].(string)
} else {
if address["version"].(float64) == 4 {
if address["version"].(float64) == 6 {
addr = fmt.Sprintf("[%s]", address["addr"].(string))
} else {
addr = address["addr"].(string)
}
}
if addr != "" {
log.Printf("[DEBUG] Detected address: %s", addr)
return addr
}
}

View File

@ -30,7 +30,8 @@ func (s *stepCreateImage) Run(state multistep.StateBag) multistep.StepAction {
// Create the image
ui.Say(fmt.Sprintf("Creating the image: %s", config.ImageName))
imageId, err := servers.CreateImage(client, server.ID, servers.CreateImageOpts{
Name: config.ImageName,
Name: config.ImageName,
Metadata: config.ImageMetadata,
}).ExtractImageID()
if err != nil {
err := fmt.Errorf("Error creating image: %s", err)

View File

@ -2,6 +2,7 @@ package openstack
import (
"fmt"
"io/ioutil"
"os"
"runtime"
@ -12,12 +13,29 @@ import (
)
type StepKeyPair struct {
Debug bool
DebugKeyPath string
keyName string
Debug bool
DebugKeyPath string
KeyPairName string
PrivateKeyFile string
keyName string
}
func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
if s.PrivateKeyFile != "" {
privateKeyBytes, err := ioutil.ReadFile(s.PrivateKeyFile)
if err != nil {
state.Put("error", fmt.Errorf(
"Error loading configured private key file: %s", err))
return multistep.ActionHalt
}
state.Put("keyPair", s.KeyPairName)
state.Put("privateKey", string(privateKeyBytes))
return multistep.ActionContinue
}
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
@ -81,6 +99,11 @@ func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
}
func (s *StepKeyPair) Cleanup(state multistep.StateBag) {
// If we used an SSH private key file, do not go about deleting
// keypairs
if s.PrivateKeyFile != "" {
return
}
// If no key name is set, then we never created it, so just return
if s.keyName == "" {
return

View File

@ -39,6 +39,7 @@ func (s *StepWaitForRackConnect) Run(state multistep.StateBag) multistep.StepAct
}
if server.Metadata["rackconnect_automation_status"] == "DEPLOYED" {
state.Put("server", server)
break
}

View File

@ -0,0 +1,86 @@
package common
import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"testing"
)
func TestStepUploadParallelsTools_impl(t *testing.T) {
var _ multistep.Step = new(StepUploadParallelsTools)
}
func TestStepUploadParallelsTools(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "upload"
step.ParallelsToolsGuestPath = "/tmp/prl-lin.iso"
step.ParallelsToolsFlavor = "lin"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadPath != "/tmp/prl-lin.iso" {
t.Fatalf("bad: %#v", comm.UploadPath)
}
}
func TestStepUploadParallelsTools_interpolate(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "upload"
step.ParallelsToolsGuestPath = "/tmp/prl-{{ .Flavor }}.iso"
step.ParallelsToolsFlavor = "win"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadPath != "/tmp/prl-win.iso" {
t.Fatalf("bad: %#v", comm.UploadPath)
}
}
func TestStepUploadParallelsTools_attach(t *testing.T) {
state := testState(t)
state.Put("parallels_tools_path", "./step_upload_parallels_tools_test.go")
step := new(StepUploadParallelsTools)
step.ParallelsToolsMode = "attach"
step.ParallelsToolsGuestPath = "/tmp/prl-lin.iso"
step.ParallelsToolsFlavor = "lin"
comm := new(packer.MockCommunicator)
state.Put("communicator", comm)
// Test the run
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
if _, ok := state.GetOk("error"); ok {
t.Fatal("should NOT have error")
}
// Verify
if comm.UploadCalled {
t.Fatal("bad")
}
}

View File

@ -65,7 +65,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
Exclude: []string{
"boot_command",
"prlctl",
"parallel_tools_guest_path",
"parallels_tools_guest_path",
},
},
}, raws...)

View File

@ -41,7 +41,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
Exclude: []string{
"boot_command",
"prlctl",
"parallel_tools_guest_path",
"parallels_tools_guest_path",
},
},
}, raws...)
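
As a rough illustration of what this exclusion enables, a parallels-iso builder fragment could interpolate the flavor into the guest path as in the sketch below (other required builder options omitted; option names mirror the fields exercised in the tests above):

{
  "type": "parallels-iso",
  "parallels_tools_flavor": "lin",
  "parallels_tools_guest_path": "prl-tools-{{ .Flavor }}.iso"
}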

View File

@ -52,7 +52,7 @@ func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
}
defer nc.Close()
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: true})
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: false})
if err != nil {
err := fmt.Errorf("Error handshaking with VNC: %s", err)
state.Put("error", err)
@ -177,7 +177,9 @@ func vncSendString(c *vnc.ClientConn, original string) {
}
c.KeyEvent(keyCode, true)
time.Sleep(time.Second/10)
c.KeyEvent(keyCode, false)
time.Sleep(time.Second/10)
if keyShift {
c.KeyEvent(KeyLeftShift, false)

builder/vmware/common/step_clean_vmx.go Normal file → Executable file
View File

@ -2,11 +2,12 @@ package common
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
"regexp"
"strings"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
// This step cleans up the VMX by removing or changing this prior to

builder/vmware/common/step_clean_vmx_test.go Normal file → Executable file
View File

builder/vmware/common/step_configure_vmx.go Normal file → Executable file
View File

View File

@ -57,7 +57,7 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
}
defer nc.Close()
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: true})
c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: false})
if err != nil {
err := fmt.Errorf("Error handshaking with VNC: %s", err)
state.Put("error", err)

builder/vmware/common/vmx.go Normal file → Executable file
View File

builder/vmware/vmx/step_clone_vmx.go Normal file → Executable file
View File

View File

@ -64,7 +64,7 @@ type TestT interface {
// Test performs an acceptance test on a backend with the given test case.
//
// Tests are not run unless an environmental variable "TF_ACC" is
// Tests are not run unless an environmental variable "PACKER_ACC" is
// set to some non-empty value. This is to avoid test cases surprising
// a user by creating real resources.
//

View File

@ -53,6 +53,7 @@ func (s *StepConnect) Run(state multistep.StateBag) multistep.StepAction {
Config: s.Config,
Host: s.Host,
WinRMConfig: s.WinRMConfig,
WinRMPort: s.SSHPort,
},
}
for k, v := range s.CustomConnect {

View File

@ -25,6 +25,7 @@ type StepConnectWinRM struct {
Config *Config
Host func(multistep.StateBag) (string, error)
WinRMConfig func(multistep.StateBag) (*WinRMConfig, error)
WinRMPort func(multistep.StateBag) (int, error)
}
func (s *StepConnectWinRM) Run(state multistep.StateBag) multistep.StepAction {
@ -96,6 +97,13 @@ func (s *StepConnectWinRM) waitForWinRM(state multistep.StateBag, cancel <-chan
continue
}
port := s.Config.WinRMPort
if s.WinRMPort != nil {
port, err = s.WinRMPort(state)
if err != nil {
log.Printf("[DEBUG] Error getting WinRM port: %s", err)
continue
}
}
user := s.Config.WinRMUser
password := s.Config.WinRMPassword

View File

@ -1,13 +1,13 @@
package rpc
import (
"fmt"
"github.com/hashicorp/go-msgpack/codec"
"github.com/mitchellh/packer/packer"
"io"
"log"
"net/rpc"
"sync/atomic"
"github.com/hashicorp/go-msgpack/codec"
"github.com/mitchellh/packer/packer"
)
var endpointId uint64
@ -149,7 +149,7 @@ func (s *Server) Serve() {
func registerComponent(server *rpc.Server, name string, rcvr interface{}, id bool) string {
endpoint := name
if id {
fmt.Sprintf("%s.%d", endpoint, atomic.AddUint64(&endpointId, 1))
log.Printf("%s.%d", endpoint, atomic.AddUint64(&endpointId, 1))
}
server.RegisterName(endpoint, rcvr)

View File

@ -0,0 +1,15 @@
package main
import (
"github.com/mitchellh/packer/packer/plugin"
"github.com/mitchellh/packer/post-processor/artifice"
)
func main() {
server, err := plugin.Server()
if err != nil {
panic(err)
}
server.RegisterPostProcessor(new(artifice.PostProcessor))
server.Serve()
}

View File

@ -0,0 +1,56 @@
package artifice
import (
"fmt"
"os"
"strings"
)
const BuilderId = "packer.post-processor.artifice"
type Artifact struct {
files []string
}
func NewArtifact(files []string) (*Artifact, error) {
for _, f := range files {
if _, err := os.Stat(f); err != nil {
return nil, err
}
}
artifact := &Artifact{
files: files,
}
return artifact, nil
}
func (a *Artifact) BuilderId() string {
return BuilderId
}
func (a *Artifact) Files() []string {
return a.files
}
func (a *Artifact) Id() string {
return ""
}
func (a *Artifact) String() string {
files := strings.Join(a.files, ", ")
return fmt.Sprintf("Created artifact from files: %s", files)
}
func (a *Artifact) State(name string) interface{} {
return nil
}
func (a *Artifact) Destroy() error {
for _, f := range a.files {
err := os.RemoveAll(f)
if err != nil {
return err
}
}
return nil
}

View File

@ -0,0 +1,60 @@
package artifice
import (
"fmt"
"strings"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/helper/config"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/template/interpolate"
)
// The artifact-override post-processor allows you to specify arbitrary files as
// artifacts. These will override any other artifacts created by the builder.
// This allows you to use a builder and provisioner to create some file, such as
// a compiled binary or tarball, extract it from the builder (VM or container)
// and then save that binary or tarball and throw away the builder.
type Config struct {
common.PackerConfig `mapstructure:",squash"`
Files []string `mapstructure:"files"`
Keep bool `mapstructure:"keep_input_artifact"`
ctx interpolate.Context
}
type PostProcessor struct {
config Config
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
err := config.Decode(&p.config, &config.DecodeOpts{
Interpolate: true,
InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{},
},
}, raws...)
if err != nil {
return err
}
if len(p.config.Files) == 0 {
return fmt.Errorf("No files specified in artifice configuration")
}
return nil
}
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
if len(artifact.Files()) > 0 {
ui.Say(fmt.Sprintf("Discarding artifact files: %s", strings.Join(artifact.Files(), ", ")))
}
artifact, err := NewArtifact(p.config.Files)
ui.Say(fmt.Sprintf("Using these artifact files: %s", strings.Join(artifact.Files(), ", ")))
return artifact, true, err
}
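
A minimal template sketch of how this post-processor might be used, assuming an earlier provisioner (for example a file provisioner with direction "download") has already placed the listed file on the machine running Packer:

{
  "post-processors": [
    {
      "type": "artifice",
      "files": ["output/my-binary.tar.gz"]
    }
  ]
}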

View File

@ -0,0 +1 @@
package artifice

View File

@ -55,9 +55,12 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
Interpolate: true,
InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{},
Exclude: []string{"output"},
},
}, raws...)
if err != nil {
return err
}
errs := new(packer.MultiError)
@ -67,16 +70,7 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
}
if p.config.OutputPath == "" {
p.config.OutputPath = "packer_{{.BuildName}}_{{.Provider}}"
}
if err = interpolate.Validate(p.config.OutputPath, &p.config.ctx); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error parsing target template: %s", err))
}
templates := map[string]*string{
"output": &p.config.OutputPath,
p.config.OutputPath = "packer_{{.BuildName}}_{{.BuilderType}}"
}
if p.config.CompressionLevel > pgzip.BestCompression {
@ -89,17 +83,9 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
p.config.CompressionLevel = pgzip.DefaultCompression
}
for key, ptr := range templates {
if *ptr == "" {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("%s must be set", key))
}
*ptr, err = interpolate.Render(p.config.OutputPath, &p.config.ctx)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error processing %s: %s", key, err))
}
if err = interpolate.Validate(p.config.OutputPath, &p.config.ctx); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error parsing target template: %s", err))
}
p.config.detectFromFilename()
@ -113,7 +99,19 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
target := p.config.OutputPath
// These are extra variables that will be made available for interpolation.
p.config.ctx.Data = map[string]string{
"BuildName": p.config.PackerBuildName,
"BuilderType": p.config.PackerBuilderType,
}
target, err := interpolate.Render(p.config.OutputPath, &p.config.ctx)
if err != nil {
return nil, false, fmt.Errorf("Error interpolating output value: %s", err)
} else {
fmt.Println(target)
}
keep := p.config.KeepInputArtifact
newArtifact := &Artifact{Path: target}

View File

@ -150,6 +150,37 @@ func TestCompressOptions(t *testing.T) {
}
}
func TestCompressInterpolation(t *testing.T) {
const config = `
{
"post-processors": [
{
"type": "compress",
"output": "{{ build_name}}-{{ .BuildName }}-{{.BuilderType}}.gz"
}
]
}
`
artifact := testArchive(t, config)
defer artifact.Destroy()
// You can interpolate using the .BuildName variable or build_name global
// function. We'll check both.
filename := "chocolate-vanilla-file.gz"
archive, err := os.Open(filename)
if err != nil {
t.Fatalf("Unable to read %s: %s", filename, err)
}
gzipReader, _ := gzip.NewReader(archive)
data, _ := ioutil.ReadAll(gzipReader)
if string(data) != expectedFileContents {
t.Errorf("Expected:\n%s\nFound:\n%s\n", expectedFileContents, data)
}
}
// Test Helpers
func setup(t *testing.T) (packer.Ui, packer.Artifact, error) {
@ -201,6 +232,13 @@ func testArchive(t *testing.T, config string) packer.Artifact {
compressor := PostProcessor{}
compressor.Configure(tpl.PostProcessors[0][0].Config)
// I get the feeling these should be automatically available somewhere, but
// some of the post-processors construct this manually.
compressor.config.ctx.BuildName = "chocolate"
compressor.config.PackerBuildName = "vanilla"
compressor.config.PackerBuilderType = "file"
artifactOut, _, err := compressor.PostProcess(ui, artifact)
if err != nil {
t.Fatalf("Failed to compress artifact: %s", err)

View File

@ -3,14 +3,23 @@ package vagrant
import (
"archive/tar"
"compress/flate"
"compress/gzip"
"encoding/json"
"fmt"
"github.com/mitchellh/packer/packer"
"io"
"log"
"os"
"path/filepath"
"runtime"
"github.com/klauspost/pgzip"
"github.com/mitchellh/packer/packer"
)
var (
// ErrInvalidCompressionLevel is returned when the compression level passed
// to gzip is not in the expected range. See compress/flate for details.
ErrInvalidCompressionLevel = fmt.Errorf(
"Invalid compression level. Expected an integer from -1 to 9.")
)
// Copies a file by copying the contents of the file to another place.
@ -60,10 +69,10 @@ func DirToBox(dst, dir string, ui packer.Ui, level int) error {
}
defer dstF.Close()
var dstWriter io.Writer = dstF
var dstWriter io.WriteCloser = dstF
if level != flate.NoCompression {
log.Printf("Compressing with gzip compression level: %d", level)
gzipWriter, err := gzip.NewWriterLevel(dstWriter, level)
gzipWriter, err := makePgzipWriter(dstWriter, level)
if err != nil {
return err
}
@ -143,3 +152,12 @@ func WriteMetadata(dir string, contents interface{}) error {
return nil
}
func makePgzipWriter(output io.WriteCloser, compressionLevel int) (io.WriteCloser, error) {
gzipWriter, err := pgzip.NewWriterLevel(output, compressionLevel)
if err != nil {
return nil, ErrInvalidCompressionLevel
}
gzipWriter.SetConcurrency(500000, runtime.GOMAXPROCS(-1))
return gzipWriter, nil
}
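
The compression level fed into makePgzipWriter comes from the post-processor's configuration; a hedged sketch of a vagrant post-processor block that tunes it (option names recalled from the post-processor's documentation, not shown in this diff):

{
  "type": "vagrant",
  "compression_level": 9,
  "output": "packer_{{.BuildName}}_{{.Provider}}.box"
}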

View File

@ -287,10 +287,10 @@ func (p *Provisioner) createKnifeConfig(ui packer.Ui, comm packer.Communicator,
ctx := p.config.ctx
ctx.Data = &ConfigTemplate{
NodeName: nodeName,
ServerUrl: serverUrl,
ClientKey: clientKey,
SslVerifyMode: sslVerifyMode,
NodeName: nodeName,
ServerUrl: serverUrl,
ClientKey: clientKey,
SslVerifyMode: sslVerifyMode,
}
configString, err := interpolate.Render(tpl, &ctx)
if err != nil {

View File

@ -399,7 +399,7 @@ func (p *Provisioner) createCommandText() (command string, err error) {
Vars: flattenedEnvVars,
Path: p.config.RemotePath,
}
command, err = interpolate.Render(p.config.ExecuteCommand, &p.config.ctx)
command, err = interpolate.Render(p.config.ElevatedExecuteCommand, &p.config.ctx)
if err != nil {
return "", fmt.Errorf("Error processing command: %s", err)
}
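
For context, the elevated command text above is generated when the provisioner is configured with elevated credentials; a sketch of such a block (option names recalled from the provisioner's configuration and not shown in this diff):

{
  "type": "powershell",
  "elevated_user": "Administrator",
  "elevated_password": "secret",
  "environment_vars": ["MY_VAR=foobar"],
  "inline": ["Write-Host $env:MY_VAR"]
}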

View File

@ -15,6 +15,8 @@ import (
)
const DefaultTempConfigDir = "/tmp/salt"
const DefaultStateTreeDir = "/srv/salt"
const DefaultPillarRootDir = "/srv/pillar"
type Config struct {
common.PackerConfig `mapstructure:",squash"`
@ -34,6 +36,12 @@ type Config struct {
// Local path to the salt pillar roots
LocalPillarRoots string `mapstructure:"local_pillar_roots"`
// Remote path to the salt state tree
RemoteStateTree string `mapstructure:"remote_state_tree"`
// Remote path to the salt pillar roots
RemotePillarRoots string `mapstructure:"remote_pillar_roots"`
// Where files will be copied before moving to the /srv/salt directory
TempConfigDir string `mapstructure:"temp_config_dir"`
@ -60,6 +68,14 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
p.config.TempConfigDir = DefaultTempConfigDir
}
if p.config.RemoteStateTree == "" {
p.config.RemoteStateTree = DefaultStateTreeDir
}
if p.config.RemotePillarRoots == "" {
p.config.RemotePillarRoots = DefaultPillarRootDir
}
var errs *packer.MultiError
// require a salt state tree
@ -116,9 +132,9 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
}
}
ui.Message(fmt.Sprintf("Creating remote directory: %s", p.config.TempConfigDir))
ui.Message(fmt.Sprintf("Creating remote temporary directory: %s", p.config.TempConfigDir))
if err := p.createDir(ui, comm, p.config.TempConfigDir); err != nil {
return fmt.Errorf("Error creating remote salt state directory: %s", err)
return fmt.Errorf("Error creating remote temporary directory: %s", err)
}
if p.config.MinionConfig != "" {
@ -130,6 +146,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
}
// move minion config into /etc/salt
ui.Message(fmt.Sprintf("Make sure directory %s exists", "/etc/salt"))
if err := p.createDir(ui, comm, "/etc/salt"); err != nil {
return fmt.Errorf("Error creating remote salt configuration directory: %s", err)
}
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "minion"))
dst = "/etc/salt/minion"
if err = p.moveFile(ui, comm, dst, src); err != nil {
@ -144,11 +164,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
return fmt.Errorf("Error uploading local state tree to remote: %s", err)
}
// move state tree into /srv/salt
// move state tree from temporary directory
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "states"))
dst = "/srv/salt"
dst = p.config.RemoteStateTree
if err = p.removeDir(ui, comm, dst); err != nil {
return fmt.Errorf("Unable to clear salt tree: %s", err)
}
if err = p.moveFile(ui, comm, dst, src); err != nil {
return fmt.Errorf("Unable to move %s/states to /srv/salt: %s", p.config.TempConfigDir, err)
return fmt.Errorf("Unable to move %s/states to %s: %s", p.config.TempConfigDir, dst, err)
}
if p.config.LocalPillarRoots != "" {
@ -159,16 +182,19 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
return fmt.Errorf("Error uploading local pillar roots to remote: %s", err)
}
// move pillar tree into /srv/pillar
// move pillar root from temporary directory
src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "pillar"))
dst = "/srv/pillar"
dst = p.config.RemotePillarRoots
if err = p.removeDir(ui, comm, dst); err != nil {
return fmt.Errorf("Unable to clear pillar root: %s", err)
}
if err = p.moveFile(ui, comm, dst, src); err != nil {
return fmt.Errorf("Unable to move %s/pillar to /srv/pillar: %s", p.config.TempConfigDir, err)
return fmt.Errorf("Unable to move %s/pillar to %s: %s", p.config.TempConfigDir, dst, err)
}
}
ui.Message("Running highstate")
cmd := &packer.RemoteCmd{Command: p.sudo("salt-call --local state.highstate -l info --retcode-passthrough")}
cmd := &packer.RemoteCmd{Command: fmt.Sprintf(p.sudo("salt-call --local state.highstate --file-root=%s --pillar-root=%s -l info --retcode-passthrough"), p.config.RemoteStateTree, p.config.RemotePillarRoots)}
if err = cmd.StartWithUi(comm, ui); err != nil || cmd.ExitStatus != 0 {
if err == nil {
err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus)
@ -216,7 +242,7 @@ func (p *Provisioner) moveFile(ui packer.Ui, comm packer.Communicator, dst, src
err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus)
}
return fmt.Errorf("Unable to move %s/minion to /etc/salt/minion: %s", p.config.TempConfigDir, err)
return fmt.Errorf("Unable to move %s to %s: %s", src, dst, err)
}
return nil
}
@ -235,6 +261,20 @@ func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir stri
return nil
}
func (p *Provisioner) removeDir(ui packer.Ui, comm packer.Communicator, dir string) error {
ui.Message(fmt.Sprintf("Removing directory: %s", dir))
cmd := &packer.RemoteCmd{
Command: fmt.Sprintf("rm -rf '%s'", dir),
}
if err := cmd.StartWithUi(comm, ui); err != nil {
return err
}
if cmd.ExitStatus != 0 {
return fmt.Errorf("Non-zero exit status.")
}
return nil
}
func (p *Provisioner) uploadDir(ui packer.Ui, comm packer.Communicator, dst, src string, ignore []string) error {
if err := p.createDir(ui, comm, dst); err != nil {
return err
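
Putting the new options together, a salt-masterless provisioner block might look like this sketch (paths are placeholders; local_state_tree is the pre-existing required option and is assumed here rather than shown in the diff):

{
  "type": "salt-masterless",
  "local_state_tree": "/path/to/salt/states",
  "local_pillar_roots": "/path/to/salt/pillar",
  "remote_state_tree": "/srv/custom-salt",
  "remote_pillar_roots": "/srv/custom-pillar"
}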

View File

@ -134,7 +134,6 @@ WaitLoop:
case <-p.cancel:
close(waitDone)
return fmt.Errorf("Interrupt detected, quitting waiting for machine to restart")
break WaitLoop
}
}

View File

@ -4,9 +4,9 @@ package main
var GitCommit string
// The main version number that is being run at the moment.
const Version = "0.8.2"
const Version = "0.8.6"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release
// such as "dev" (in development), "beta", "rc1", etc.
const VersionPrerelease = ""
const VersionPrerelease = "dev"

View File

@ -3,3 +3,5 @@ source "https://rubygems.org"
ruby "2.2.2"
gem "middleman-hashicorp", github: "hashicorp/middleman-hashicorp"
gem "middleman-breadcrumbs"
gem "htmlbeautifier"

View File

@ -69,6 +69,7 @@ GEM
hitimes (1.2.2)
hooks (0.4.0)
uber (~> 0.0.4)
htmlbeautifier (1.1.0)
htmlcompressor (0.2.0)
http_parser.rb (0.6.0)
i18n (0.7.0)
@ -92,6 +93,8 @@ GEM
middleman-sprockets (>= 3.1.2)
sass (>= 3.4.0, < 4.0)
uglifier (~> 2.5)
middleman-breadcrumbs (0.1.0)
middleman (>= 3.3.5)
middleman-core (3.3.12)
activesupport (~> 4.1.0)
bundler (~> 1.1)
@ -179,4 +182,6 @@ PLATFORMS
ruby
DEPENDENCIES
htmlbeautifier
middleman-breadcrumbs
middleman-hashicorp!

View File

@ -8,3 +8,10 @@ dev: init
build: init
PACKER_DISABLE_DOWNLOAD_FETCH=true PACKER_VERSION=1.0 bundle exec middleman build
format:
bundle exec htmlbeautifier -t 2 source/*.erb
bundle exec htmlbeautifier -t 2 source/layouts/*.erb
@pandoc -v > /dev/null || echo "pandoc must be installed in order to format markdown content"
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=4 --atx-headers -s --columns=80 {} > {}.new"\; || true
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "mv {}.new {}"\; || true

View File

@ -21,3 +21,13 @@ make dev
Then open up `localhost:4567`. Note that some URLs you may need to append
".html" to make them work (in the navigation and such).
## Keeping Tidy
To keep the source code nicely formatted, there is a `make format` target. This
runs `htmlbeautifier` and `pandoc` to reformat the source.
make format
Note that you will need to install pandoc yourself. `make format` will skip it
if you don't have it installed.

View File

@ -4,6 +4,8 @@
set :base_url, "https://www.packer.io/"
activate :breadcrumbs
activate :hashicorp do |h|
h.version = ENV["PACKER_VERSION"]
h.bintray_enabled = ENV["BINTRAY_ENABLED"]

Binary file not shown (image added, 40 KiB).

View File

@ -12,45 +12,45 @@ footer {
margin-left: -20px;
}
ul {
margin-top: 40px;
@include respond-to(mobile) {
margin-left: $baseline;
margin-top: $baseline;
ul {
margin-top: 40px;
@include respond-to(mobile) {
margin-left: $baseline;
margin-top: $baseline;
}
li {
display: inline;
margin-right: 50px;
@include respond-to(mobile) {
margin-right: 20px;
display: list-item;
li {
display: inline;
margin-right: 50px;
@include respond-to(mobile) {
margin-right: 20px;
display: list-item;
}
}
.hashi-logo {
background: image-url('logo_footer.png') no-repeat center top;
height: 40px;
width: 40px;
background-size: 37px 40px;
text-indent: -999999px;
display: inline-block;
margin-top: -10px;
margin-right: 0;
@include respond-to(mobile) {
margin-top: -50px;
margin-right: $baseline;
}
}
}
.hashi-logo {
background: image-url('logo_footer.png') no-repeat center top;
height: 40px;
width: 40px;
background-size: 37px 40px;
text-indent: -999999px;
display: inline-block;
margin-top: -10px;
margin-right: 0;
@include respond-to(mobile) {
margin-top: -50px;
margin-right: $baseline;
}
}
}
.active {
.active {
color: $green;
}
}
button {
button {
margin-top: 20px;
}
}
}
.page-wrap {

View File

@ -70,17 +70,17 @@ $mono: 'Inconsolata', 'courier new', courier, mono-space;
background-color: #000;
color: $white;
a {
a {
color: inherit;
&:hover {
color: $green;
}
color: $green;
}
&:active {
color: darken($green, 30%);
}
}
&:active {
color: darken($green, 30%);
}
}
}
.white-background {
@ -102,9 +102,9 @@ $mono: 'Inconsolata', 'courier new', courier, mono-space;
color: $orange;
font-size: 20px;
a:hover, a:active, a:visited {
a:hover, a:active, a:visited {
color: inherit;
}
}
}
// media queries
@ -170,13 +170,13 @@ $break-lg: 980px;
@mixin transform-scale($value) {
-webkit-transform: scale($value);
-moz-transform: scale($value);
transform: scale($value);
-moz-transform: scale($value);
transform: scale($value);
}
@mixin transition($type, $speed, $easing) {
-webkit-transition: $type $speed $easing;
-moz-transition: $type $speed $easing;
-webkit-transition: $type $speed $easing;
-moz-transition: $type $speed $easing;
-o-transition: $type $speed $easing;
transition: $type $speed $easing;
}

View File

@ -14,10 +14,10 @@ form, input, textarea, button {
line-height: 1.0;
color: inherit;
&:focus {
line-height: 1.0;
box-shadow: none !important;
outline: none;
vertical-align: middle;
}
&:focus {
line-height: 1.0;
box-shadow: none !important;
outline: none;
vertical-align: middle;
}
}

View File

@ -1,22 +1,25 @@
---
layout: "community"
page_title: "Community"
description: |-
Packer is a new project with a growing community. Despite this, there are dedicated users willing to help through various mediums.
---
description: |
Packer is a new project with a growing community. Despite this, there are
dedicated users willing to help through various mediums.
layout: community
page_title: Community
...
# Community
Packer is a new project with a growing community. Despite this, there are
dedicated users willing to help through various mediums.
**IRC:**&nbsp;`#packer-tool` on Freenode.
**IRC:** `#packer-tool` on Freenode.
**Mailing List:**&nbsp;[Packer Google Group](http://groups.google.com/group/packer-tool)
**Mailing List:** [Packer Google
Group](http://groups.google.com/group/packer-tool)
**Bug Tracker:**&nbsp;[Issue tracker on GitHub](https://github.com/mitchellh/packer/issues).
Please only use this for reporting bugs. Do not ask for general help here. Use IRC
or the mailing list for that.
**Bug Tracker:** [Issue tracker on
GitHub](https://github.com/mitchellh/packer/issues). Please only use this for
reporting bugs. Do not ask for general help here. Use IRC or the mailing list
for that.
## People
@ -25,62 +28,82 @@ to Packer in some core way. Over time, faces may appear and disappear from this
list as contributors come and go.
<div class="people">
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<div class="bio">
<h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
<p>
Mitchell Hashimoto is the creator of Packer. He developed the
core of Packer as well as the Amazon, VirtualBox, and VMware
builders. In addition to Packer, Mitchell is the creator of
<a href="http://www.vagrantup.com">Vagrant</a>. He is self
described as "automation obsessed."
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
<p>
<a href="http://jack.ly/">Jack Pearkes</a> created and maintains the DigitalOcean builder
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person">
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
<div class="bio">
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
<p>
In addition to Packer, Mark Peek helps maintain
various open source projects such as
<a href="https://github.com/cloudtools">cloudtools</a> and
<a href="https://github.com/ironport">IronPort Python libraries</a>.
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
</div>
</div>
<img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<div class="bio">
<h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
<p>
Mitchell Hashimoto is the creator of Packer. He developed the
core of Packer as well as the Amazon, VirtualBox, and VMware
builders. In addition to Packer, Mitchell is the creator of
<a href="http://www.vagrantup.com">Vagrant</a>. He is self
described as "automation obsessed."
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
<p>
<a href="http://jack.ly/">Jack Pearkes</a> created and maintains the DigitalOcean builder
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
<div class="bio">
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
<p>
In addition to Packer, Mark Peek helps maintain
various open source projects such as
<a href="https://github.com/cloudtools">cloudtools</a> and
<a href="https://github.com/ironport">IronPort Python libraries</a>.
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
<p>
<a href="http://smithii.com/" target="_blank">Ross Smith</a> maintains our
VMware builder on Windows, and provides other valuable assistance. Ross is an
open source enthusiast, published author, and freelance consultant.
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
<p>
        Rickard von Essen maintains our Parallels Desktop builder. Rickard is a
        polyglot programmer and consults on Continuous Delivery.
</p>
</div>
</div>
<div class="clearfix">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
<p>
<a href="http://smithii.com/" target="_blank">Ross Smith</a> maintains our VMware builder on Windows, and provides other valuable assistance.
Ross is an open source enthusiast, published author, and freelance consultant.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
<p>
      Rickard von Essen maintains our Parallels Desktop builder. Rickard is a polyglot programmer and consults on Continuous Delivery.</p>
</div>
</div>
<div class="clearfix"></div>
</div>

View File

@@ -1,54 +1,57 @@
---
layout: "docs"
page_title: "Packer Terminology"
description: |-
There are a handful of terms used throughout the Packer documentation where the meaning may not be immediately obvious if you haven't used Packer before. Luckily, there are relatively few. This page documents all the terminology required to understand and use Packer. The terminology is in alphabetical order for easy referencing.
---
description: |
There are a handful of terms used throughout the Packer documentation where the
meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.
layout: docs
page_title: Packer Terminology
...
# Packer Terminology
There are a handful of terms used throughout the Packer documentation where
the meaning may not be immediately obvious if you haven't used Packer before.
There are a handful of terms used throughout the Packer documentation where the
meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical
order for easy referencing.
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.
- `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a single
artifact. As an example, in the case of the Amazon EC2 builder, the artifact is
a set of AMI IDs (one per region). For the VMware builder, the artifact is a
directory of files comprising the created virtual machine.
- `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a
single artifact. As an example, in the case of the Amazon EC2 builder, the
artifact is a set of AMI IDs (one per region). For the VMware builder, the
artifact is a directory of files comprising the created virtual machine.
- `Builds` are a single task that eventually produces an image for a single
platform. Multiple builds run in parallel. Example usage in a
sentence: "The Packer build produced an AMI to run our web application."
Or: "Packer is running the builds now for VMware, AWS, and VirtualBox."
- `Builds` are a single task that eventually produces an image for a
single platform. Multiple builds run in parallel. Example usage in a
sentence: "The Packer build produced an AMI to run our web application." Or:
"Packer is running the builds now for VMware, AWS, and VirtualBox."
- `Builders` are components of Packer that are able to create a machine
image for a single platform. Builders read in some configuration and use
that to run and generate a machine image. A builder is invoked as part of a
build in order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins.
- `Builders` are components of Packer that are able to create a machine image
for a single platform. Builders read in some configuration and use that to
run and generate a machine image. A builder is invoked as part of a build in
order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some
job. An example command is "build", which is invoked as `packer build`.
Packer ships with a set of commands out of the box in order to define
its command-line interface. Commands can also be created and added to
Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some job.
An example command is "build", which is invoked as `packer build`. Packer
ships with a set of commands out of the box in order to define its
command-line interface. Commands can also be created and added to Packer in
the form of plugins.
- `Post-processors` are components of Packer that take the result of
a builder or another post-processor and process that to
create a new artifact. Examples of post-processors are
compress to compress artifacts, upload to upload artifacts, etc.
- `Post-processors` are components of Packer that take the result of a builder
or another post-processor and process that to create a new artifact.
Examples of post-processors are compress to compress artifacts, upload to
upload artifacts, etc.
- `Provisioners` are components of Packer that install and configure
software within a running machine prior to that machine being turned
into a static image. They perform the major work of making the image contain
useful software. Example provisioners include shell scripts, Chef, Puppet,
etc.
- `Provisioners` are components of Packer that install and configure software
within a running machine prior to that machine being turned into a
static image. They perform the major work of making the image contain
useful software. Example provisioners include shell scripts, Chef,
Puppet, etc.
- `Templates` are JSON files which define one or more builds
by configuring the various components of Packer. Packer is able to read a
template and use that information to create multiple machine images in
parallel.
- `Templates` are JSON files which define one or more builds by configuring
the various components of Packer. Packer is able to read a template and use
that information to create multiple machine images in parallel.
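To see how these terms fit together in practice, here is a minimal, hypothetical
template; the AMI ID, credentials, and script path are placeholders rather than
working values. It defines one build using a single builder, one provisioner,
and one post-processor:

``` {.javascript}
{
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "YOUR KEY HERE",
    "secret_key": "YOUR SECRET KEY HERE",
    "region": "us-east-1",
    "source_ami": "ami-de0d9eb7",
    "instance_type": "t1.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }],

  "provisioners": [{
    "type": "shell",
    "script": "setup.sh"
  }],

  "post-processors": ["vagrant"]
}
```

Running `packer build` against such a template would produce one artifact per
builder; here, an AMI ID.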

View File

@@ -1,49 +1,52 @@
---
layout: "docs"
page_title: "Amazon AMI Builder (chroot)"
description: |-
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an EBS volume as the root device. For more information on the difference between instance storage and EBS-backed instances, storage for the root device section in the EC2 documentation.
---
description: |
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
EBS volume as the root device. For more information on the difference between
  instance storage and EBS-backed instances, see the storage for the root device
  section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (chroot)'
...
# AMI Builder (chroot)
Type: `amazon-chroot`
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by
an EBS volume as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the
["storage for the root device" section in the EC2 documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
EBS volume as the root device. For more information on the difference between
instance storage and EBS-backed instances, see the ["storage for the root
device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
The difference between this builder and the `amazon-ebs` builder is that
this builder is able to build an EBS-backed AMI without launching a new
EC2 instance. This can dramatically speed up AMI builds for organizations
who need the extra fast build.
The difference between this builder and the `amazon-ebs` builder is that this
builder is able to build an EBS-backed AMI without launching a new EC2 instance.
This can dramatically speed up AMI builds for organizations who need the extra
fast build.
~> **This is an advanced builder** If you're just getting
started with Packer, we recommend starting with the
[amazon-ebs builder](/docs/builders/amazon-ebs.html), which is
much easier to use.
~> **This is an advanced builder** If you're just getting started with
Packer, we recommend starting with the [amazon-ebs
builder](/docs/builders/amazon-ebs.html), which is much easier to use.
The builder does _not_ manage AMIs. Once it creates an AMI and stores it
in your account, it is up to you to use, delete, etc. the AMI.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.
## How Does it Work?
This builder works by creating a new EBS volume from an existing source AMI
and attaching it into an already-running EC2 instance. Once attached, a
[chroot](http://en.wikipedia.org/wiki/Chroot) is used to provision the
system within that volume. After provisioning, the volume is detached,
snapshotted, and an AMI is made.
This builder works by creating a new EBS volume from an existing source AMI and
attaching it into an already-running EC2 instance. Once attached, a
[chroot](http://en.wikipedia.org/wiki/Chroot) is used to provision the system
within that volume. After provisioning, the volume is detached, snapshotted, and
an AMI is made.
Using this process, minutes can be shaved off the AMI creation process
because a new EC2 instance doesn't need to be launched.
Using this process, minutes can be shaved off the AMI creation process because a
new EC2 instance doesn't need to be launched.
There are some restrictions, however. The host EC2 instance where the
volume is attached to must be a similar system (generally the same OS
version, kernel versions, etc.) as the AMI being built. Additionally,
this process is much more expensive because the EC2 instance must be kept
running persistently in order to build AMIs, whereas the other AMI builders
start instances on-demand to build AMIs as needed.
There are some restrictions, however. The host EC2 instance to which the volume
is attached must be a similar system (generally the same OS version, kernel
versions, etc.) as the AMI being built. Additionally, this process is much more
expensive because the EC2 instance must be kept running persistently in order to
build AMIs, whereas the other AMI builders start instances on-demand to build
AMIs as needed.
## Configuration Reference
@@ -52,107 +55,101 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `access_key` (string) - The access key used to communicate with AWS.
If not specified, Packer will use the key from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` (in that order), if set.
If the environmental variables aren't set and Packer is running on
an EC2 instance, Packer will check the instance metadata for IAM role
keys.
- `access_key` (string) - The access key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `ami_name` (string) - The name of the resulting AMI that will appear
when managing AMIs in the AWS console or via APIs. This must be unique.
To help make this unique, use a function like `timestamp` (see
[configuration templates](/docs/templates/configuration-templates.html) for more info)
- `ami_name` (string) - The name of the resulting AMI that will appear when
managing AMIs in the AWS console or via APIs. This must be unique. To help
make this unique, use a function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
* `secret_key` (string) - The secret key used to communicate with AWS.
If not specified, Packer will use the secret from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` (in that order), if set.
If the environmental variables aren't set and Packer is running on
an EC2 instance, Packer will check the instance metadata for IAM role
keys.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `source_ami` (string) - The source AMI whose root volume will be copied
and provisioned on the currently running instance. This must be an
EBS-backed AMI with a root volume snapshot that you have access to.
- `source_ami` (string) - The source AMI whose root volume will be copied and
provisioned on the currently running instance. This must be an EBS-backed
AMI with a root volume snapshot that you have access to.
### Optional:
* `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty.
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
* `ami_groups` (array of strings) - A list of groups that have access
to launch the resulting AMI(s). By default no groups have permission
to launch the AMI. `all` will make the AMI publicly accessible.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible.
* `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
* `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
* `ami_users` (array of strings) - A list of account IDs that have access
to launch the resulting AMI(s). By default no additional users other than the user
creating the AMI has permissions to launch it.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
user creating the AMI has permissions to launch it.
* `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be
"paravirtual" (default) or "hvm".
- `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be
"paravirtual" (default) or "hvm".
* `chroot_mounts` (array of array of strings) - This is a list of additional
devices to mount into the chroot environment. This configuration parameter
requires some additional documentation which is in the "Chroot Mounts" section
below. Please read that section for more information on how to use this.
- `chroot_mounts` (array of array of strings) - This is a list of additional
devices to mount into the chroot environment. This configuration parameter
requires some additional documentation which is in the "Chroot Mounts"
section below. Please read that section for more information on how to
use this.
* `command_wrapper` (string) - How to run shell commands. This
defaults to "{{.Command}}". This may be useful to set if you want to set
environmental variables or perhaps run it with `sudo` or so on. This is a
configuration template where the `.Command` variable is replaced with the
command to be run.
- `command_wrapper` (string) - How to run shell commands. This defaults
to "{{.Command}}". This may be useful to set if you want to set
environmental variables or perhaps run it with `sudo` or so on. This is a
configuration template where the `.Command` variable is replaced with the
command to be run.
* `copy_files` (array of strings) - Paths to files on the running EC2 instance
that will be copied into the chroot environment prior to provisioning.
This is useful, for example, to copy `/etc/resolv.conf` so that DNS lookups
work.
- `copy_files` (array of strings) - Paths to files on the running EC2 instance
that will be copied into the chroot environment prior to provisioning. This
is useful, for example, to copy `/etc/resolv.conf` so that DNS lookups work.
* `device_path` (string) - The path to the device where the root volume
of the source AMI will be attached. This defaults to "" (empty string),
which forces Packer to find an open device automatically.
- `device_path` (string) - The path to the device where the root volume of the
source AMI will be attached. This defaults to "" (empty string), which
forces Packer to find an open device automatically.
* `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport) on
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy.
- `enhanced_networking` (boolean) - Enable enhanced
networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
* `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
* `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to
`packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration
template where the `.Device` variable is replaced with the name of the
device where the volume is attached.
- `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to
`packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration template
where the `.Device` variable is replaced with the name of the device where
the volume is attached.
* `mount_options` (array of strings) - Options to supply the `mount` command
when mounting devices. Each option will be prefixed with `-o ` and supplied to
  the `mount` command run by Packer. Because this command is run in a shell, user
  discretion is advised. See [this manual page for the mount command][1] for valid
file system specific options
- `mount_options` (array of strings) - Options to supply the `mount` command
when mounting devices. Each option will be prefixed with `-o` and supplied
  to the `mount` command run by Packer. Because this command is run in a
  shell, user discretion is advised. See [this manual page for the mount
  command](http://linuxcommand.org/man_pages/mount8.html) for valid
  file-system-specific options.
* `root_volume_size` (integer) - The size of the root volume for the chroot
environment, and the resulting AMI
- `root_volume_size` (integer) - The size of the root volume for the chroot
environment, and the resulting AMI
* `tags` (object of key/value strings) - Tags applied to the AMI.
- `tags` (object of key/value strings) - Tags applied to the AMI.
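Several of the options above are configuration templates or shell-related
settings. The following is only a hedged sketch of how they might be combined
with the required keys; the `sudo` wrapper, mount path, mount options, and
copied file are illustrative choices rather than defaults, and the keys and AMI
ID are placeholders:

``` {.javascript}
{
  "type": "amazon-chroot",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "source_ami": "ami-e81d5881",
  "ami_name": "packer-amazon-chroot {{timestamp}}",

  "command_wrapper": "sudo {{.Command}}",
  "mount_path": "/mnt/packer-amazon-chroot/{{.Device}}",
  "mount_options": ["noatime"],
  "copy_files": ["/etc/resolv.conf"]
}
```

At build time Packer substitutes the attached device name for `{{.Device}}` and
the shell command being run for `{{.Command}}`, per the descriptions above.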
## Basic Example
Here is a basic example. It is completely valid except for the access keys:
```javascript
``` {.javascript}
{
"type": "amazon-chroot",
"access_key": "YOUR KEY HERE",
@@ -164,21 +161,21 @@ Here is a basic example. It is completely valid except for the access keys:
## Chroot Mounts
The `chroot_mounts` configuration can be used to mount additional devices
within the chroot. By default, the following additional mounts are added
into the chroot by Packer:
The `chroot_mounts` configuration can be used to mount additional devices within
the chroot. By default, the following additional mounts are added into the
chroot by Packer:
* `/proc` (proc)
* `/sys` (sysfs)
* `/dev` (bind to real `/dev`)
* `/dev/pts` (devpts)
* `/proc/sys/fs/binfmt_misc` (binfmt_misc)
- `/proc` (proc)
- `/sys` (sysfs)
- `/dev` (bind to real `/dev`)
- `/dev/pts` (devpts)
- `/proc/sys/fs/binfmt_misc` (binfmt_misc)
These default mounts are usually good enough for anyone and are sane
defaults. However, if you want to change or add the mount points, you may do so
using the `chroot_mounts` configuration. Here is an example configuration:
These default mounts are usually good enough for anyone and are sane defaults.
However, if you want to change or add the mount points, you may do so using the
`chroot_mounts` configuration. Here is an example configuration:
```javascript
``` {.javascript}
{
"chroot_mounts": [
["proc", "proc", "/proc"],
@@ -187,25 +184,25 @@ using the `chroot_mounts` configuration. Here is an example configuration:
}
```
`chroot_mounts` is a list of 3-tuples of strings. The three components
of the 3-tuple, in order, are:
`chroot_mounts` is a list of 3-tuples of strings. The three components of the
3-tuple, in order, are:
* The filesystem type. If this is "bind", then Packer will properly bind
the filesystem to another mount point.
- The filesystem type. If this is "bind", then Packer will properly bind the
filesystem to another mount point.
* The source device.
- The source device.
* The mount directory.
- The mount directory.
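As a hedged illustration of the 3-tuple form, the configuration below
reproduces roughly the default mounts listed earlier and adds a bind mount;
note that supplying `chroot_mounts` is understood to replace the default list,
so carry over any defaults you still need:

``` {.javascript}
{
  "chroot_mounts": [
    ["proc", "proc", "/proc"],
    ["sysfs", "sysfs", "/sys"],
    ["bind", "/dev", "/dev"],
    ["devpts", "devpts", "/dev/pts"],
    ["binfmt_misc", "binfmt_misc", "/proc/sys/fs/binfmt_misc"],
    ["bind", "/run/shm", "/run/shm"]
  ]
}
```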
## Parallelism
A quick note on parallelism: it is perfectly safe to run multiple
_separate_ Packer processes with the `amazon-chroot` builder on the same
EC2 instance. In fact, this is recommended as a way to push the most performance
out of your AMI builds.
A quick note on parallelism: it is perfectly safe to run multiple *separate*
Packer processes with the `amazon-chroot` builder on the same EC2 instance. In
fact, this is recommended as a way to push the most performance out of your AMI
builds.
Packer properly obtains a process lock for the parallelism-sensitive parts
of its internals such as finding an available device.
Packer properly obtains a process lock for the parallelism-sensitive parts of
its internals such as finding an available device.
## Gotchas
@@ -213,10 +210,12 @@ One of the difficulties with using the chroot builder is that your provisioning
scripts must not leave any processes running or packer will be unable to unmount
the filesystem.
For debian based distributions you can setup a [policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt) file which will
prevent packages installed by your provisioners from starting services:
For Debian-based distributions you can set up a
[policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt)
file which will prevent packages installed by your provisioners from starting
services:
```javascript
``` {.javascript}
{
"type": "shell",
"inline": [
@@ -235,6 +234,3 @@ prevent packages installed by your provisioners from starting services:
]
}
```
[1]: http://linuxcommand.org/man_pages/mount8.html

View File

@@ -1,29 +1,32 @@
---
layout: "docs"
page_title: "Amazon AMI Builder (EBS backed)"
description: |-
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS volumes for use in EC2. For more information on the difference between EBS-backed instances and instance-store backed instances, see the storage for the root device section in the EC2 documentation.
---
description: |
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in EC2. For more information on the difference between
EBS-backed instances and instance-store backed instances, see the storage for
the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (EBS backed)'
...
# AMI Builder (EBS backed)
Type: `amazon-ebs`
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in [EC2](http://aws.amazon.com/ec2/). For more information
on the difference between EBS-backed instances and instance-store backed
instances, see the
["storage for the root device" section in the EC2 documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
volumes for use in [EC2](http://aws.amazon.com/ec2/). For more information on
the difference between EBS-backed instances and instance-store backed instances,
see the ["storage for the root device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
This builder builds an AMI by launching an EC2 instance from a source AMI,
provisioning that running machine, and then creating an AMI from that machine.
This is all done in your own AWS account. The builder will create temporary
keypairs, security group rules, etc. that provide it temporary access to
the instance while the image is being created. This simplifies configuration
quite a bit.
keypairs, security group rules, etc. that provide it temporary access to the
instance while the image is being created. This simplifies configuration quite a
bit.
The builder does _not_ manage AMIs. Once it creates an AMI and stores it
in your account, it is up to you to use, delete, etc. the AMI.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.
## Configuration Reference
@@ -32,170 +35,169 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `access_key` (string) - The access key used to communicate with AWS.
If not specified, Packer will use the key from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` (in that order), if set.
- `access_key` (string) - The access key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `ami_name` (string) - The name of the resulting AMI that will appear
when managing AMIs in the AWS console or via APIs. This must be unique.
To help make this unique, use a function like `timestamp` (see
[configuration templates](/docs/templates/configuration-templates.html) for more info)
- `ami_name` (string) - The name of the resulting AMI that will appear when
managing AMIs in the AWS console or via APIs. This must be unique. To help
make this unique, use a function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
* `instance_type` (string) - The EC2 instance type to use while building
the AMI, such as "m1.small".
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as "m1.small".
* `region` (string) - The name of the region, such as "us-east-1", in which
to launch the EC2 instance to create the AMI.
- `region` (string) - The name of the region, such as "us-east-1", in which to
launch the EC2 instance to create the AMI.
* `secret_key` (string) - The secret key used to communicate with AWS.
If not specified, Packer will use the secret from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` (in that order), if set.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `source_ami` (string) - The initial AMI used as a base for the newly
created machine.
- `source_ami` (string) - The initial AMI used as a base for the newly
created machine.
* `ssh_username` (string) - The username to use in order to communicate
over SSH to the running machine.
- `ssh_username` (string) - The username to use in order to communicate over
SSH to the running machine.
### Optional:
* `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys:
- `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys:
- `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device Mapping][1] for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
volumes
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on [IOPs][2] for more information
- `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device
Mapping](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
volumes
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than "all".
* `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
* `ami_groups` (array of strings) - A list of groups that have access
to launch the resulting AMI(s). By default no groups have permission
to launch the AMI. `all` will make the AMI publicly accessible.
AWS currently doesn't accept any value other than "all".
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
* `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
user creating the AMI has permissions to launch it.
* `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `associate_public_ip_address` (boolean) - If using a non-default VPC, public
IP addresses are not provided by default. If this is toggled, your new
instance will get a Public IP.
* `ami_users` (array of strings) - A list of account IDs that have access
to launch the resulting AMI(s). By default no additional users other than the user
creating the AMI has permissions to launch it.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
* `associate_public_ip_address` (boolean) - If using a non-default VPC, public
IP addresses are not provided by default. If this is toggled, your new
instance will get a Public IP.
- `enhanced_networking` (boolean) - Enable enhanced
networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
* `availability_zone` (string) - Destination availability zone to launch instance in.
Leave this empty to allow Amazon to auto-assign.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
* `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport) on
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy.
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
to launch the EC2 instance with.
* `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `launch_block_device_mappings` (array of block device mappings) - Add the
block device mappings to the launch instance. The block device mappings are
the same as `ami_block_device_mappings` above.
* `iam_instance_profile` (string) - The name of an
[IAM instance profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
to launch the EC2 instance with.
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`.
* `launch_block_device_mappings` (array of block device mappings) - Add the
block device mappings to the launch instance. The block device mappings are
the same as `ami_block_device_mappings` above.
- `security_group_id` (string) - The ID (*not* the name) of the security group
to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
* `run_tags` (object of key/value strings) - Tags to apply to the instance
that is _launched_ to create the AMI. These tags are _not_ applied to
the resulting AMI unless they're duplicated in `tags`.
- `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
* `security_group_id` (string) - The ID (_not_ the name) of the security
group to assign to the instance. By default this is not set and Packer
will automatically create a new temporary security group to allow SSH
access. Note that if this is specified, you must be sure the security
group allows access to the `ssh_port` given below.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the current spot price is less than the maximum price you specify. Spot
price will be updated based on available spot instance capacity and current
spot instance requests. It may save you some costs. You can set this to
"auto" for Packer to automatically discover the best spot price.
* `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `spot_price_auto_product` (string) - Required if `spot_price` is set
to "auto". This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
* `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts when
the current spot price is less than the maximum price you specify. Spot price
will be updated based on available spot instance capacity and current spot
instance requests. It may save you some costs. You can set this to "auto" for
Packer to automatically discover the best spot price.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
generate a temporary keypair. `ssh_private_key_file` must be specified
with this.
* `spot_price_auto_product` (string) - Required if `spot_price` is set to
"auto". This tells Packer what sort of AMI you're launching to find the best
spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
IP if available.
* `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
generate a temporary keypair. `ssh_private_key_file` must be specified
with this.
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance. This field is
  required if you are using a non-default VPC.
* `ssh_private_ip` (boolean) - If true, then SSH will always use the private
IP if available.
- `tags` (object of key/value strings) - Tags applied to the AMI and
relevant snapshots.
* `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance. This field is
  required if you are using a non-default VPC.
- `temporary_key_pair_name` (string) - The name of the temporary keypair
to generate. By default, Packer generates a name with a UUID.
* `tags` (object of key/value strings) - Tags applied to the AMI and
relevant snapshots.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SECURITY_TOKEN`
environmental variable.
* `temporary_key_pair_name` (string) - The name of the temporary keypair
to generate. By default, Packer generates a name with a UUID.
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
being JSON. It is often more convenient to use `user_data_file`, instead.
* `token` (string) - The access token to use. This is different from
the access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SECURITY_TOKEN`
environmental variable.
- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the instance.
* `user_data` (string) - User data to apply when launching the instance.
Note that you need to be careful about escaping characters due to the
templates being JSON. It is often more convenient to use `user_data_file`,
instead.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC.
* `user_data_file` (string) - Path to a file that will be used for the
user data when launching the instance.
* `vpc_id` (string) - If launching into a VPC subnet, Packer needs the
VPC ID in order to create a temporary security group within the VPC.
* `windows_password_timeout` (string) - The timeout for waiting for
a Windows password for Windows instances. Defaults to 20 minutes.
Example value: "10m"
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value: "10m"
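To make a few of these optional settings concrete, here is a hedged fragment;
the spot, VPC, tagging, and user-data values are illustrative and the IDs are
placeholders:

``` {.javascript}
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-de0d9eb7",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "ami_name": "packer-spot-example {{timestamp}}",

  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
  "vpc_id": "vpc-12345678",
  "subnet_id": "subnet-12345def",
  "associate_public_ip_address": true,
  "user_data_file": "setup/cloud-init.txt",
  "run_tags": { "Name": "packer-builder" },
  "tags": { "OS_Version": "Ubuntu", "Release": "Latest" }
}
```

Here `run_tags` only mark the temporary build instance, while `tags` end up on
the resulting AMI and its snapshots.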
## Basic Example
Here is a basic example. It is completely valid except for the access keys:
```javascript
``` {.javascript}
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@@ -208,25 +210,23 @@ Here is a basic example. It is completely valid except for the access keys:
}
```
-> **Note:** Packer can also read the access key and secret
access key from environmental variables. See the configuration reference in
the section above for more information on what environmental variables Packer
will look for.
-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
## Accessing the Instance to Debug
If you need to access the instance to debug for some reason, run the builder
with the `-debug` flag. In debug mode, the Amazon builder will save the
private key in the current directory and will output the DNS or IP information
as well. You can use this information to access the instance as it is
running.
with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.
## AMI Block Device Mappings Example
Here is an example using the optional AMI block device mappings. This will add
the /dev/sdb and /dev/sdc block device mappings to the finished AMI.
```javascript
``` {.javascript}
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@@ -252,9 +252,9 @@ the /dev/sdb and /dev/sdc block device mappings to the finished AMI.
## Tag Example
Here is an example using the optional AMI tags. This will add the tags
"OS_Version" and "Release" to the finished AMI.
"OS\_Version" and "Release" to the finished AMI.
```javascript
``` {.javascript}
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@@ -271,13 +271,10 @@ Here is an example using the optional AMI tags. This will add the tags
}
```
-> **Note:** Packer uses pre-built AMIs as the source for building images.
-> **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. Packer will attempt to clean
up all residual volumes that are not designated by the user to remain after
termination. If you need to preserve those source volumes, you can overwrite the
termination setting by specifying `delete_on_termination=false` in the
`launch_block_device_mappings` block for the device.
[1]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html
[2]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html

View File

@@ -1,9 +1,12 @@
---
layout: "docs"
page_title: "Amazon AMI Builder (instance-store)"
description: |-
The `amazon-instance` Packer builder is able to create Amazon AMIs backed by instance storage as the root device. For more information on the difference between instance storage and EBS-backed instances, see the storage for the root device section in the EC2 documentation.
---
description: |
The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the storage for the root
device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (instance-store)'
...
# AMI Builder (instance-store)
@@ -11,24 +14,24 @@ Type: `amazon-instance`
The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the
["storage for the root device" section in the EC2 documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
between instance storage and EBS-backed instances, see the ["storage for the
root device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
This builder builds an AMI by launching an EC2 instance from an existing
instance-storage backed AMI, provisioning that running machine, and then
bundling and creating a new AMI from that machine.
This is all done in your own AWS account. The builder will create temporary
keypairs, security group rules, etc. that provide it temporary access to
the instance while the image is being created. This simplifies configuration
quite a bit.
bundling and creating a new AMI from that machine. This is all done in your own
AWS account. The builder will create temporary keypairs, security group rules,
etc. that provide it temporary access to the instance while the image is being
created. This simplifies configuration quite a bit.
The builder does _not_ manage AMIs. Once it creates an AMI and stores it
in your account, it is up to you to use, delete, etc. the AMI.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.
-> **Note** This builder requires that the
[Amazon EC2 AMI Tools](http://aws.amazon.com/developertools/368)
are installed onto the machine. This can be done within a provisioner, but
must be done before the builder finishes running.
-> **Note** This builder requires that the [Amazon EC2 AMI
Tools](http://aws.amazon.com/developertools/368) are installed onto the machine.
This can be done within a provisioner, but must be done before the builder
finishes running.
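One hedged way to satisfy that requirement is a shell provisioner that installs
the tools before the build finishes. The commands below assume a Debian/Ubuntu
source AMI where an `ec2-ami-tools` package is available; the package name and
install method will differ on other distributions:

``` {.javascript}
{
  "type": "shell",
  "inline": [
    "sudo apt-get update",
    "sudo apt-get install -y ec2-ami-tools"
  ]
}
```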
## Configuration Reference
@@ -37,204 +40,204 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `access_key` (string) - The access key used to communicate with AWS.
If not specified, Packer will use the key from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` (in that order), if set.
- `access_key` (string) - The access key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `account_id` (string) - Your AWS account ID. This is required for bundling
the AMI. This is _not the same_ as the access key. You can find your
account ID in the security credentials page of your AWS account.
- `account_id` (string) - Your AWS account ID. This is required for bundling
the AMI. This is *not the same* as the access key. You can find your account
ID in the security credentials page of your AWS account.
* `ami_name` (string) - The name of the resulting AMI that will appear
when managing AMIs in the AWS console or via APIs. This must be unique.
To help make this unique, use a function like `timestamp` (see
[configuration templates](/docs/templates/configuration-templates.html) for more info)
- `ami_name` (string) - The name of the resulting AMI that will appear when
managing AMIs in the AWS console or via APIs. This must be unique. To help
make this unique, use a function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
* `instance_type` (string) - The EC2 instance type to use while building
the AMI, such as "m1.small".
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as "m1.small".
* `region` (string) - The name of the region, such as "us-east-1", in which
to launch the EC2 instance to create the AMI.
- `region` (string) - The name of the region, such as "us-east-1", in which to
launch the EC2 instance to create the AMI.
* `s3_bucket` (string) - The name of the S3 bucket to upload the AMI.
This bucket will be created if it doesn't exist.
- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This
bucket will be created if it doesn't exist.
* `secret_key` (string) - The secret key used to communicate with AWS.
If not specified, Packer will use the secret from any [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file
or fall back to environment variables `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` (in that order), if set.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
* `source_ami` (string) - The initial AMI used as a base for the newly
created machine.
- `source_ami` (string) - The initial AMI used as a base for the newly
created machine.
* `ssh_username` (string) - The username to use in order to communicate
over SSH to the running machine.
- `ssh_username` (string) - The username to use in order to communicate over
SSH to the running machine.
* `x509_cert_path` (string) - The local path to a valid X509 certificate for
your AWS account. This is used for bundling the AMI. This X509 certificate
must be registered with your account from the security credentials page
in the AWS console.
- `x509_cert_path` (string) - The local path to a valid X509 certificate for
your AWS account. This is used for bundling the AMI. This X509 certificate
must be registered with your account from the security credentials page in
the AWS console.
* `x509_key_path` (string) - The local path to the private key for the X509
certificate specified by `x509_cert_path`. This is used for bundling the AMI.
- `x509_key_path` (string) - The local path to the private key for the X509
certificate specified by `x509_cert_path`. This is used for bundling
the AMI.
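Taken together, the required options above might look like the hedged fragment
below; the account ID, bucket, AMI ID, and certificate paths are placeholders:

``` {.javascript}
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "account_id": "0123-4567-0890",
  "region": "us-east-1",
  "s3_bucket": "packer-images",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "ami_name": "packer-quick-start {{timestamp}}",
  "x509_cert_path": "cert.pem",
  "x509_key_path": "key.pem"
}
```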
### Optional:
* `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys:
- `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys:
- `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device Mapping][1] for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
volumes
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on [IOPs][2] for more information
- `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device
Mapping](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
volumes
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
* `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than "all".
* `ami_groups` (array of strings) - A list of groups that have access
to launch the resulting AMI(s). By default no groups have permission
to launch the AMI. `all` will make the AMI publicly accessible.
AWS currently doesn't accept any value other than "all".
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
* `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
* `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
user creating the AMI have permission to launch it.
* `ami_users` (array of strings) - A list of account IDs that have access
to launch the resulting AMI(s). By default no additional users other than the user
creating the AMI have permission to launch it.
- `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be
"paravirtual" (default) or "hvm".
* `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be
"paravirtual" (default) or "hvm".
- `associate_public_ip_address` (boolean) - If using a non-default VPC, public
IP addresses are not provided by default. If this is toggled, your new
instance will get a Public IP.
* `associate_public_ip_address` (boolean) - If using a non-default VPC, public
IP addresses are not provided by default. If this is toggled, your new
instance will get a Public IP.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
* `availability_zone` (string) - Destination availability zone to launch instance in.
Leave this empty to allow Amazon to auto-assign.
- `bundle_destination` (string) - The directory on the running instance where
the bundled AMI will be saved prior to uploading. By default this is "/tmp".
This directory must exist and be writable.
* `bundle_destination` (string) - The directory on the running instance
where the bundled AMI will be saved prior to uploading. By default this is
"/tmp". This directory must exist and be writable.
- `bundle_prefix` (string) - The prefix for files created from bundling the
root volume. By default this is "image-{{timestamp}}". The `timestamp`
variable should be used to make sure this is unique, otherwise it can
collide with other AMIs created by Packer in your account.
* `bundle_prefix` (string) - The prefix for files created from bundling
the root volume. By default this is "image-{{timestamp}}". The `timestamp`
variable should be used to make sure this is unique, otherwise it can
collide with other AMIs created by Packer in your account.
- `bundle_upload_command` (string) - The command to use to upload the
bundled volume. See the "custom bundle commands" section below for
more information.
* `bundle_upload_command` (string) - The command to use to upload the
bundled volume. See the "custom bundle commands" section below for more
information.
- `bundle_vol_command` (string) - The command to use to bundle the volume. See
the "custom bundle commands" section below for more information.
* `bundle_vol_command` (string) - The command to use to bundle the volume.
See the "custom bundle commands" section below for more information.
- `enhanced_networking` (boolean) - Enable enhanced
networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
* `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport) on
HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
* `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
to launch the EC2 instance with.
* `iam_instance_profile` (string) - The name of an
[IAM instance profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
to launch the EC2 instance with.
- `launch_block_device_mappings` (array of block device mappings) - Add the
block device mappings to the launch instance. The block device mappings are
the same as `ami_block_device_mappings` above.
* `launch_block_device_mappings` (array of block device mappings) - Add the
block device mappings to the launch instance. The block device mappings are
the same as `ami_block_device_mappings` above.
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`.
* `run_tags` (object of key/value strings) - Tags to apply to the instance
that is _launched_ to create the AMI. These tags are _not_ applied to
the resulting AMI unless they're duplicated in `tags`.
- `security_group_id` (string) - The ID (*not* the name) of the security group
to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
* `security_group_id` (string) - The ID (_not_ the name) of the security
group to assign to the instance. By default this is not set and Packer
will automatically create a new temporary security group to allow SSH
access. Note that if this is specified, you must be sure the security
group allows access to the `ssh_port` given below.
- `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
* `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `spot_price` (string) - The maximum hourly price to launch a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the maximum price you specify exceeds the current spot price. The spot
price is updated based on available spot instance capacity and current spot
instance requests, so using spot instances can reduce costs. You can set this
to "auto" for Packer to automatically discover the best spot price.
* `spot_price` (string) - The maximum hourly price to launch a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts when
the maximum price you specify exceeds the current spot price. The spot price is
updated based on available spot instance capacity and current spot instance
requests, so using spot instances can reduce costs. You can set this to "auto"
for Packer to automatically discover the best spot price.
- `spot_price_auto_product` (string) - Required if `spot_price` is set
to "auto". This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
* `spot_price_auto_product` (string) - Required if `spot_price` is set to
"auto". This tells Packer what sort of AMI you're launching to find the best
spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
generate a temporary keypair. `ssh_private_key_file` must be specified
with this.
* `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
generate a temporary keypair. `ssh_private_key_file` must be specified
with this.
- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
IP if available.
* `ssh_private_ip` (boolean) - If true, then SSH will always use the private
IP if available.
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance. This field is
required if you are using a non-default VPC.
* `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance. This field is
required if you are using a non-default VPC.
- `tags` (object of key/value strings) - Tags applied to the AMI.
* `tags` (object of key/value strings) - Tags applied to the AMI.
- `temporary_key_pair_name` (string) - The name of the temporary keypair
to generate. By default, Packer generates a name with a UUID.
* `temporary_key_pair_name` (string) - The name of the temporary keypair
to generate. By default, Packer generates a name with a UUID.
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
being JSON. It is often more convenient to use `user_data_file`, instead.
* `user_data` (string) - User data to apply when launching the instance.
Note that you need to be careful about escaping characters due to the
templates being JSON. It is often more convenient to use `user_data_file`,
instead.
- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the instance.
* `user_data_file` (string) - Path to a file that will be used for the
user data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC.
* `vpc_id` (string) - If launching into a VPC subnet, Packer needs the
VPC ID in order to create a temporary security group within the VPC.
- `x509_upload_path` (string) - The path on the remote machine where the X509
certificate will be uploaded. This path must already exist and be writable.
X509 certificates are uploaded after provisioning is run, so it is perfectly
okay to create this directory as part of the provisioning process.
* `x509_upload_path` (string) - The path on the remote machine where the
X509 certificate will be uploaded. This path must already exist and be
writable. X509 certificates are uploaded after provisioning is run, so
it is perfectly okay to create this directory as part of the provisioning
process.
* `windows_password_timeout` (string) - The timeout for waiting for
a Windows password for Windows instances. Defaults to 20 minutes.
Example value: "10m"
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value: "10m"
## Basic Example
Here is a basic example. It is completely valid except for the access keys:
```javascript
``` {.javascript}
{
"type": "amazon-instance",
"access_key": "YOUR KEY HERE",
@ -254,84 +257,79 @@ Here is a basic example. It is completely valid except for the access keys:
}
```
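
Optional settings documented above go in the same builder block. Below is a
minimal, non-authoritative sketch that layers block device mappings, tags, and
spot pricing onto the builder; the device names and tag values are illustrative,
and the required options (credentials, source AMI, bundling, and X509 settings)
are omitted for brevity:

``` {.javascript}
{
  "type": "amazon-instance",
  "ami_block_device_mappings": [
    {
      "device_name": "/dev/sdb",
      "virtual_name": "ephemeral0"
    }
  ],
  "run_tags": {
    "Name": "packer-builder"
  },
  "tags": {
    "OS_Version": "Ubuntu"
  },
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX"
}
```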
-> **Note:** Packer can also read the access key and secret
access key from environmental variables. See the configuration reference in
the section above for more information on what environmental variables Packer
will look for.
-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
## Accessing the Instance to Debug
If you need to access the instance to debug for some reason, run the builder
with the `-debug` flag. In debug mode, the Amazon builder will save the
private key in the current directory and will output the DNS or IP information
as well. You can use this information to access the instance as it is
running.
with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.
## Custom Bundle Commands
A lot of the process required for creating an instance-store backed AMI
involves commands being run on the actual source instance. Specifically, the
`ec2-bundle-vol` and `ec2-upload-bundle` commands must be used to bundle
the root filesystem and upload it, respectively.
A lot of the process required for creating an instance-store backed AMI involves
commands being run on the actual source instance. Specifically, the
`ec2-bundle-vol` and `ec2-upload-bundle` commands must be used to bundle the
root filesystem and upload it, respectively.
Each of these commands has a lot of available flags. Instead of exposing each
possible flag as a template configuration option, the instance-store AMI
builder for Packer lets you customize the entire command used to bundle
and upload the AMI.
possible flag as a template configuration option, the instance-store AMI builder
for Packer lets you customize the entire command used to bundle and upload the
AMI.
These are configured with `bundle_vol_command` and `bundle_upload_command`.
Both of these configurations are
[configuration templates](/docs/templates/configuration-templates.html)
and have support for their own set of template variables.
These are configured with `bundle_vol_command` and `bundle_upload_command`. Both
of these configurations are [configuration
templates](/docs/templates/configuration-templates.html) and have support for
their own set of template variables.
### Bundle Volume Command
The default value for `bundle_vol_command` is shown below. It is split
across multiple lines for convenience of reading. The bundle volume command
is responsible for executing `ec2-bundle-vol` in order to store an image
of the root filesystem to use to create the AMI.
The default value for `bundle_vol_command` is shown below. It is split across
multiple lines for convenience of reading. The bundle volume command is
responsible for executing `ec2-bundle-vol` in order to store an image of the
root filesystem to use to create the AMI.
```text
``` {.text}
sudo -i -n ec2-bundle-vol \
-k {{.KeyPath}} \
-u {{.AccountId}} \
-c {{.CertPath}} \
-r {{.Architecture}} \
-e {{.PrivatePath}}/* \
-d {{.Destination}} \
-p {{.Prefix}} \
--batch \
--no-filter
-k {{.KeyPath}} \
-u {{.AccountId}} \
-c {{.CertPath}} \
-r {{.Architecture}} \
-e {{.PrivatePath}}/* \
-d {{.Destination}} \
-p {{.Prefix}} \
--batch \
--no-filter
```
The available template variables should be self-explanatory based on the
parameters they're used to satisfy the `ec2-bundle-vol` command.
~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
.gpg files during the bundling of the AMI, which can cause problems on some
systems, such as Ubuntu. You may want to customize the bundle volume command
to include those files (see the `--no-filter` option of ec2-bundle-vol).
systems, such as Ubuntu. You may want to customize the bundle volume command to
include those files (see the `--no-filter` option of ec2-bundle-vol).
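
If you do need to adjust the bundling behaviour, the command can be overridden
directly in the builder definition. The sketch below simply restates the default
command (keeping `--no-filter`) as a single string; the other required builder
options are omitted here:

``` {.javascript}
{
  "type": "amazon-instance",
  "bundle_vol_command": "sudo -i -n ec2-bundle-vol -k {{.KeyPath}} -u {{.AccountId}} -c {{.CertPath}} -r {{.Architecture}} -e {{.PrivatePath}}/* -d {{.Destination}} -p {{.Prefix}} --batch --no-filter"
}
```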
### Bundle Upload Command
The default value for `bundle_upload_command` is shown below. It is split
across multiple lines for convenience of reading. The bundle upload command
is responsible for taking the bundled volume and uploading it to S3.
The default value for `bundle_upload_command` is shown below. It is split across
multiple lines for convenience of reading. The bundle upload command is
responsible for taking the bundled volume and uploading it to S3.
```text
``` {.text}
sudo -i -n ec2-upload-bundle \
-b {{.BucketName}} \
-m {{.ManifestPath}} \
-a {{.AccessKey}} \
-s {{.SecretKey}} \
-d {{.BundleDirectory}} \
--batch \
--region {{.Region}} \
--retry
-b {{.BucketName}} \
-m {{.ManifestPath}} \
-a {{.AccessKey}} \
-s {{.SecretKey}} \
-d {{.BundleDirectory}} \
--batch \
--region {{.Region}} \
--retry
```
The available template variables should be self-explanatory based on the
parameters they're used to satisfy the `ec2-upload-bundle` command.
[1]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html
[2]: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html
@ -1,44 +1,93 @@
---
layout: "docs"
page_title: "Amazon AMI Builder"
description: |-
Packer is able to create Amazon AMIs. To achieve this, Packer comes with multiple builders depending on the strategy you want to use to build the AMI.
---
description: |
Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
layout: docs
page_title: Amazon AMI Builder
...
# Amazon AMI Builder
Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the
AMI. Packer supports the following builders at the moment:
multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:
* [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs
by launching a source AMI and re-packaging it into a new AMI after
provisioning. If in doubt, use this builder, which is the easiest to get
started with.
- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
launching a source AMI and re-packaging it into a new AMI
after provisioning. If in doubt, use this builder, which is the easiest to
get started with.
* [amazon-instance](/docs/builders/amazon-instance.html) - Create
instance-store AMIs by launching and provisioning a source instance, then
rebundling it and uploading it to S3.
- [amazon-instance](/docs/builders/amazon-instance.html) - Create
instance-store AMIs by launching and provisioning a source instance, then
rebundling it and uploading it to S3.
* [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
from an existing EC2 instance by mounting the root device and using a
[Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision
that device. This is an **advanced builder and should not be used by
newcomers**. However, it is also the fastest way to build an EBS-backed
AMI since no new EC2 instance needs to be launched.
- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
from an existing EC2 instance by mounting the root device and using a
[Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision
that device. This is an **advanced builder and should not be used by
newcomers**. However, it is also the fastest way to build an EBS-backed AMI
since no new EC2 instance needs to be launched.
-> **Don't know which builder to use?** If in doubt, use the
[amazon-ebs builder](/docs/builders/amazon-ebs.html). It is
much easier to use and Amazon generally recommends EBS-backed images nowadays.
-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
generally recommends EBS-backed images nowadays.
<span id="specifying-amazon-credentials"></span>
## Specifying Amazon Credentials
When you use any of the amazon builders, you must provide credentials to the API
in the form of an access key id and secret. These look like:
access key id: AKIAIOSFODNN7EXAMPLE
secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
If you use other AWS tools you may already have these configured. If so, packer
will try to use them, *unless* they are specified in your packer template.
Credentials are resolved in the following order:
1. Values hard-coded in the packer template are always authoritative.
2. *Variables* in the packer template may be resolved from command-line flags
or from environment variables. Please read about [User
Variables](https://packer.io/docs/templates/user-variables.html)
for details.
3. If no credentials are found, packer falls back to automatic lookup.
### Automatic Lookup
If no AWS credentials are found in a packer template, we proceed on to the
following steps:
1. Look up via environment variables.
- First `AWS_ACCESS_KEY_ID`, then `AWS_ACCESS_KEY`
- First `AWS_SECRET_ACCESS_KEY`, then `AWS_SECRET_KEY`
2. Look for [local AWS configuration
files](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
- First `~/.aws/credentials`
- Next based on `AWS_PROFILE`
3. Look up an IAM role for the current EC2 instance (if you're running in EC2)
~> **Subtle details of automatic lookup may change over time.** The most
reliable way to specify your configuration is by setting them in template
variables (directly or indirectly), or by using the `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY` environment variables.
Environment variables provide the best portability, allowing you to run your
packer build on your workstation, in Atlas, or on another build server.
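
For example, a template can pull the keys out of the environment through user
variables. This is a sketch that assumes the `env` function described in the
user variables documentation; the remaining required builder options are
omitted:

``` {.javascript}
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}"
  }]
}
```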
## Using an IAM Instance Profile
If AWS keys are not specified in the template, in a [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) file, or through environment variables,
Packer will use credentials provided by the instance's IAM profile, if it has one.
If AWS keys are not specified in the template, in a
[credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
file, or through environment variables, Packer will use credentials provided by
the instance's IAM profile, if it has one.
The following policy document provides the minimal set of permissions necessary for Packer to work:
The following policy document provides the minimal set of permissions necessary for
Packer to work:
```javascript
``` {.javascript}
{
"Statement": [{
"Effect": "Allow",
@ -70,3 +119,29 @@ The following policy document provides the minimal set permissions necessary for
}]
}
```
## Troubleshooting
### Attaching IAM Policies to Roles
IAM policies can be associated with users or roles. If you use packer with IAM
roles, you may encounter an error like this one:
==> amazon-ebs: Error launching source instance: You are not authorized to perform this operation.
You can read more about why this happens on the [Amazon Security
Blog](http://blogs.aws.amazon.com/security/post/Tx3M0IFB5XBOCQX/Granting-Permission-to-Launch-EC2-Instances-with-IAM-Roles-PassRole-Permission).
The example policy below may help packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for packer to
work, but specifics will depend on your use-case.
``` {.json}
{
"Sid": "PackerIAMPassRole",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": [
"*"
]
}
```
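
The fragment above is a single statement; in practice it would sit inside a
full policy document alongside the minimal permissions shown earlier, roughly
like this:

``` {.json}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PackerIAMPassRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": ["*"]
    }
  ]
}
```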
@ -1,13 +1,15 @@
---
layout: "docs"
page_title: "Custom Builder"
description: |-
Packer is extensible, allowing you to write new builders without having to modify the core source code of Packer itself. Documentation for creating new builders is covered in the custom builders page of the Packer plugin section.
---
description: |
Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the custom builders page of the Packer plugin section.
layout: docs
page_title: Custom Builder
...
# Custom Builder
Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating
new builders is covered in the [custom builders](/docs/extend/builder.html)
page of the Packer plugin section.
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the [custom builders](/docs/extend/builder.html) page of
the Packer plugin section.
@ -1,22 +1,26 @@
---
layout: "docs"
page_title: "DigitalOcean Builder"
description: |-
The `digitalocean` Packer builder is able to create new images for use with DigitalOcean. The builder takes a source image, runs any provisioning necessary on the image after launching it, then snapshots it into a reusable image. This reusable image can then be used as the foundation of new servers that are launched within DigitalOcean.
---
description: |
The `digitalocean` Packer builder is able to create new images for use with
DigitalOcean. The builder takes a source image, runs any provisioning necessary
on the image after launching it, then snapshots it into a reusable image. This
reusable image can then be used as the foundation of new servers that are
launched within DigitalOcean.
layout: docs
page_title: DigitalOcean Builder
...
# DigitalOcean Builder
Type: `digitalocean`
The `digitalocean` Packer builder is able to create new images for use with
[DigitalOcean](http://www.digitalocean.com). The builder takes a source
image, runs any provisioning necessary on the image after launching it,
then snapshots it into a reusable image. This reusable image can then be
used as the foundation of new servers that are launched within DigitalOcean.
[DigitalOcean](http://www.digitalocean.com). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the foundation
of new servers that are launched within DigitalOcean.
The builder does _not_ manage images. Once it creates an image, it is up to
you to use it or delete it.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
## Configuration Reference
@ -25,50 +29,55 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `api_token` (string) - The client TOKEN to use to access your account.
It can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`, if set.
- `api_token` (string) - The client TOKEN to use to access your account. It
can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`,
if set.
* `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it.
See https://developers.digitalocean.com/documentation/v2/#list-all-images for details on how to get a list of the accepted image names/slugs.
- `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. See
https://developers.digitalocean.com/documentation/v2/\#list-all-images for
details on how to get a list of the accepted image names/slugs.
* `region` (string) - The name (or slug) of the region to launch the droplet in.
Consequently, this is the region where the snapshot will be available.
See https://developers.digitalocean.com/documentation/v2/#list-all-regions for the accepted region names/slugs.
- `region` (string) - The name (or slug) of the region to launch the
droplet in. Consequently, this is the region where the snapshot will
be available. See
https://developers.digitalocean.com/documentation/v2/\#list-all-regions for
the accepted region names/slugs.
* `size` (string) - The name (or slug) of the droplet size to use.
See https://developers.digitalocean.com/documentation/v2/#list-all-sizes for the accepted size names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
https://developers.digitalocean.com/documentation/v2/\#list-all-sizes for
the accepted size names/slugs.
### Optional:
* `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
sets the hostname of the machine to this value.
- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
sets the hostname of the machine to this value.
* `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
- `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
* `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. This must be unique.
To help make this unique, use a function like `timestamp` (see
[configuration templates](/docs/templates/configuration-templates.html) for more info)
- `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. This must be unique. To help make this unique, use a
function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
* `state_timeout` (string) - The time to wait, as a duration string,
for a droplet to enter a desired state (such as "active") before
timing out. The default state timeout is "6m".
- `state_timeout` (string) - The time to wait, as a duration string, for a
droplet to enter a desired state (such as "active") before timing out. The
default state timeout is "6m".
* `user_data` (string) - User data to launch with the Droplet.
- `user_data` (string) - User data to launch with the Droplet.
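
For instance, combining these options with a `timestamp`-based `snapshot_name`
keeps snapshot names unique. A minimal sketch; the image, region, and size
slugs are illustrative:

``` {.javascript}
{
  "type": "digitalocean",
  "api_token": "YOUR API KEY",
  "image": "ubuntu-14-04-x64",
  "region": "nyc3",
  "size": "512mb",
  "private_networking": true,
  "snapshot_name": "packer-{{timestamp}}"
}
```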
## Basic Example
Here is a basic example. It is completely valid as soon as you enter your
own access tokens:
Here is a basic example. It is completely valid as soon as you enter your own
access tokens:
```javascript
``` {.javascript}
{
"type": "digitalocean",
"api_token": "YOUR API KEY",
@ -1,39 +1,40 @@
---
layout: "docs"
page_title: "Docker Builder"
description: |-
The `docker` Packer builder builds Docker images using Docker. The builder starts a Docker container, runs provisioners within this container, then exports the container for reuse or commits the image.
---
description: |
The `docker` Packer builder builds Docker images using Docker. The builder
starts a Docker container, runs provisioners within this container, then exports
the container for reuse or commits the image.
layout: docs
page_title: Docker Builder
...
# Docker Builder
Type: `docker`
The `docker` Packer builder builds [Docker](http://www.docker.io) images using
Docker. The builder starts a Docker container, runs provisioners within
this container, then exports the container for reuse or commits the image.
Docker. The builder starts a Docker container, runs provisioners within this
container, then exports the container for reuse or commits the image.
Packer builds Docker containers _without_ the use of
[Dockerfiles](https://docs.docker.com/reference/builder/).
By not using Dockerfiles, Packer is able to provision
containers with portable scripts or configuration management systems
that are not tied to Docker in any way. It also has a simpler mental model:
you provision containers much the same way you provision a normal virtualized
or dedicated server. For more information, read the section on
[Dockerfiles](#toc_8).
Packer builds Docker containers *without* the use of
[Dockerfiles](https://docs.docker.com/reference/builder/). By not using
Dockerfiles, Packer is able to provision containers with portable scripts or
configuration management systems that are not tied to Docker in any way. It also
has a simpler mental model: you provision containers much the same way you
provision a normal virtualized or dedicated server. For more information, read
the section on [Dockerfiles](#toc_8).
The Docker builder must run on a machine that has Docker installed. Therefore
the builder only works on machines that support Docker (modern Linux machines).
If you want to use Packer to build Docker containers on another platform,
use [Vagrant](http://www.vagrantup.com) to start a Linux environment, then
run Packer within that environment.
If you want to use Packer to build Docker containers on another platform, use
[Vagrant](http://www.vagrantup.com) to start a Linux environment, then run
Packer within that environment.
## Basic Example: Export
Below is a fully functioning example. It doesn't do anything useful, since
no provisioners are defined, but it will effectively repackage an image.
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will effectively repackage an image.
```javascript
``` {.javascript}
{
"type": "docker",
"image": "ubuntu",
@ -43,11 +44,11 @@ no provisioners are defined, but it will effectively repackage an image.
## Basic Example: Commit
Below is another example, the same as above but instead of exporting the
running container, this one commits the container to an image. The image
can then be more easily tagged, pushed, etc.
Below is another example, the same as above but instead of exporting the running
container, this one commits the container to an image. The image can then be
more easily tagged, pushed, etc.
```javascript
``` {.javascript}
{
"type": "docker",
"image": "ubuntu",
@ -55,7 +56,6 @@ can then be more easily tagged, pushed, etc.
}
```
## Configuration Reference
Configuration options are organized below into two categories: required and
@ -63,47 +63,47 @@ optional. Within each category, the available options are alphabetized and
described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `commit` (boolean) - If true, the container will be committed to an
image rather than exported. This cannot be set if `export_path` is set.
- `commit` (boolean) - If true, the container will be committed to an image
rather than exported. This cannot be set if `export_path` is set.
* `export_path` (string) - The path where the final container will be exported
as a tar file. This cannot be set if `commit` is set to true.
- `export_path` (string) - The path where the final container will be exported
as a tar file. This cannot be set if `commit` is set to true.
* `image` (string) - The base image for the Docker container that will
be started. This image will be pulled from the Docker registry if it
doesn't already exist.
- `image` (string) - The base image for the Docker container that will
be started. This image will be pulled from the Docker registry if it doesn't
already exist.
### Optional:
* `login` (boolean) - Defaults to false. If true, the builder will
log in before pulling the image. The builder only logs in for the
duration of the pull. It always logs out afterwards.
- `login` (boolean) - Defaults to false. If true, the builder will log in
before pulling the image. The builder only logs in for the duration of
the pull. It always logs out afterwards.
* `login_email` (string) - The email to use to authenticate to login.
- `login_email` (string) - The email to use to authenticate to login.
* `login_username` (string) - The username to use to authenticate to login.
- `login_username` (string) - The username to use to authenticate to login.
* `login_password` (string) - The password to use to authenticate to login.
- `login_password` (string) - The password to use to authenticate to login.
* `login_server` (string) - The server address to login to.
- `login_server` (string) - The server address to login to.
* `pull` (boolean) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set.
- `pull` (boolean) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set.
* `run_command` (array of strings) - An array of arguments to pass to
`docker run` in order to run the container. By default this is set to
`["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`.
As you can see, you have a couple template variables to customize, as well.
- `run_command` (array of strings) - An array of arguments to pass to
`docker run` in order to run the container. By default this is set to
`["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. As you can see, you have a
couple template variables to customize, as well.
* `volumes` (map of strings to strings) - A mapping of additional volumes
to mount into this container. The key of the object is the host path,
the value is the container path.
- `volumes` (map of strings to strings) - A mapping of additional volumes to
mount into this container. The key of the object is the host path, the value
is the container path.
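
As a rough sketch of how these optional settings fit together, the following
combines `volumes` with the default `run_command`; the host path and export
file name are illustrative:

``` {.javascript}
{
  "type": "docker",
  "image": "ubuntu",
  "export_path": "image.tar",
  "volumes": {
    "/var/cache/apt": "/var/cache/apt"
  },
  "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]
}
```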
## Using the Artifact: Export
@ -113,27 +113,26 @@ with the [docker-import](/docs/post-processors/docker-import.html) and
[docker-push](/docs/post-processors/docker-push.html) post-processors.
**Note:** This section is covering how to use an artifact that has been
_exported_. More specifically, if you set `export_path` in your configuration.
*exported*. More specifically, if you set `export_path` in your configuration.
If you set `commit`, see the next section.
The example below shows a full configuration that would import and push
the created image. This is accomplished using a sequence definition (a
collection of post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html)
for more information):
The example below shows a full configuration that would import and push the
created image. This is accomplished using a sequence definition (a collection of
post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
```javascript
``` {.javascript}
{
"post-processors": [
[
{
"type": "docker-import",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
]
]
[
{
"type": "docker-import",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
]
]
}
```
@ -143,10 +142,10 @@ post-processor which will import the artifact as a docker image. The resulting
docker image is then passed on to the `docker-push` post-processor which handles
pushing the image to a container repository.
If you want to do this manually, however, perhaps from a script, you can
import the image using the process below:
If you want to do this manually, however, perhaps from a script, you can import
the image using the process below:
```text
``` {.text}
$ docker import - registry.mydomain.com/mycontainer:latest < artifact.tar
```
@ -157,23 +156,22 @@ and `docker push`, respectively.
If you committed your container to an image, you probably want to tag, save,
push, etc. Packer can do this automatically for you. An example is shown below
which tags and pushes an image. This is accomplished using a sequence
definition (a collection of post-processors that are treated as a single
pipeline, see [Post-Processors](/docs/templates/post-processors.html) for more
information):
which tags and pushes an image. This is accomplished using a sequence definition
(a collection of post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
```javascript
``` {.javascript}
{
"post-processors": [
[
{
"type": "docker-tag",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
]
]
[
{
"type": "docker-tag",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
]
]
}
```
@ -187,52 +185,52 @@ Going a step further, if you wanted to tag and push an image to multiple
container repositories, this could be accomplished by defining two,
nearly-identical sequence definitions, as demonstrated by the example below:
```javascript
``` {.javascript}
{
"post-processors": [
[
{
"type": "docker-tag",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
],
[
{
"type": "docker-tag",
"repository": "hashicorp/packer",
"tag": "0.7"
},
"docker-push"
]
]
"post-processors": [
[
{
"type": "docker-tag",
"repository": "mitchellh/packer",
"tag": "0.7"
},
"docker-push"
],
[
{
"type": "docker-tag",
"repository": "hashicorp/packer",
"tag": "0.7"
},
"docker-push"
]
]
}
```
## Dockerfiles
This builder allows you to build Docker images _without_ Dockerfiles.
This builder allows you to build Docker images *without* Dockerfiles.
With this builder, you can repeatably create Docker images without the use of
a Dockerfile. You don't need to know the syntax or semantics of Dockerfiles.
With this builder, you can repeatably create Docker images without the use of a
Dockerfile. You don't need to know the syntax or semantics of Dockerfiles.
Instead, you can just provide shell scripts, Chef recipes, Puppet manifests,
etc. to provision your Docker container just like you would a regular
virtualized or dedicated machine.
While Docker has many features, Packer views Docker simply as an LXC
container runner. To that end, Packer is able to repeatably build these
LXC containers using portable provisioning scripts.
While Docker has many features, Packer views Docker simply as an LXC container
runner. To that end, Packer is able to repeatably build these LXC containers
using portable provisioning scripts.
Dockerfiles have some additional features that Packer doesn't support
which are able to be worked around. Many of these features will be automated
by Packer in the future:
Dockerfiles have some additional features that Packer doesn't support which are
able to be worked around. Many of these features will be automated by Packer in
the future:
* Dockerfiles will snapshot the container at each step, allowing you to
go back to any step in the history of building. Packer doesn't do this yet,
but inter-step snapshotting is on the way.
- Dockerfiles will snapshot the container at each step, allowing you to go
back to any step in the history of building. Packer doesn't do this yet, but
inter-step snapshotting is on the way.
* Dockerfiles can contain information such as exposed ports, shared
volumes, and other metadata. Packer builds a raw Docker container image
that has none of this metadata. You can pass in much of this metadata
at runtime with `docker run`.
- Dockerfiles can contain information such as exposed ports, shared volumes,
and other metadata. Packer builds a raw Docker container image that has none
of this metadata. You can pass in much of this metadata at runtime with
`docker run`.
@ -0,0 +1,151 @@
---
description: |
The `googlecompute` Packer builder is able to create images for use with Google
Compute Engine (GCE) based on existing images. Google Compute Engine doesn't
allow the creation of images from scratch.
layout: docs
page_title: Google Compute Builder
...
# Google Compute Builder
Type: `googlecompute`
The `googlecompute` Packer builder is able to create
[images](https://developers.google.com/compute/docs/images) for use with [Google
Compute Engine](https://cloud.google.com/products/compute-engine) (GCE) based on
existing images. Google Compute Engine doesn't allow the creation of images from
scratch.
## Authentication
Authenticating with Google Cloud services requires at most one JSON file, called
the *account file*. The *account file* is **not** required if you are running
the `googlecompute` Packer builder from a GCE instance with a
properly-configured [Compute Engine Service
Account](https://cloud.google.com/compute/docs/authentication).
### Running With a Compute Engine Service Account
If you run the `googlecompute` Packer builder from a GCE instance, you can
configure that instance to use a [Compute Engine Service
Account](https://cloud.google.com/compute/docs/authentication). This will allow
Packer to authenticate to Google Cloud without having to bake in a separate
credential/authentication file.
To create a GCE instance that uses a service account, provide the required
scopes when launching the instance.
For `gcloud`, do this via the `--scopes` parameter:
``` {.sh}
gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
--scopes "https://www.googleapis.com/auth/compute" \
"https://www.googleapis.com/auth/devstorage.full_control" \
...
```
For the [Google Developers Console](https://console.developers.google.com):
1. Choose "Show advanced options"
2. Tick "Enable Compute Engine service account"
3. Choose "Read Write" for Compute
4. Choose "Full" for "Storage"
**The service account will be used automatically by Packer as long as there is
no *account file* specified in the Packer configuration file.**
### Running Without a Compute Engine Service Account
The [Google Developers Console](https://console.developers.google.com) allows
you to create and download a credential file that will let you use the
`googlecompute` Packer builder anywhere. To make the process more
straightforward, it is documented here.
1. Log into the [Google Developers
Console](https://console.developers.google.com) and select a project.
2. Under the "APIs & Auth" section, click "Credentials."
3. Click the "Create new Client ID" button, select "Service account", and click
"Create Client ID"
4. Click "Generate new JSON key" for the Service Account you just created. A
JSON file will be downloaded automatically. This is your *account file*.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will effectively repackage an existing GCE
image. The account file is obtained in the previous section.
``` {.javascript}
{
"type": "googlecompute",
"account_file": "account.json",
"project_id": "my-project",
"source_image": "debian-7-wheezy-v20150127",
"zone": "us-central1-a"
}
```
## Configuration Reference
Configuration options are organized below into two categories: required and
optional. Within each category, the available options are alphabetized and
described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `project_id` (string) - The project ID that will be used to launch instances
and store images.
- `source_image` (string) - The source image to use to create the new
image from. Example: `"debian-7-wheezy-v20150127"`
- `zone` (string) - The zone in which to launch the instance used to create
the image. Example: `"us-central1-a"`
### Optional:
- `account_file` (string) - The JSON file containing your account credentials.
Not required if you run Packer on a GCE instance with a service account.
Instructions for creating file or using service accounts are above.
- `disk_size` (integer) - The size of the disk in GB. This defaults to `10`,
which is 10GB.
- `image_name` (string) - The unique name of the resulting image. Defaults to
`"packer-{{timestamp}}"`.
- `image_description` (string) - The description of the resulting image.
- `instance_name` (string) - A name to give the launched instance. Beware that
this must be unique. Defaults to `"packer-{{uuid}}"`.
- `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`.
- `metadata` (object of key/value strings)
- `network` (string) - The Google Compute network to use for the
launched instance. Defaults to `"default"`.
- `state_timeout` (string) - The time to wait for instance state changes.
Defaults to `"5m"`.
- `tags` (array of strings)
- `use_internal_ip` (boolean) - If true, use the instance's internal IP
instead of its external IP during building.
## Gotchas
CentOS images have root SSH access disabled by default. Set `ssh_username` to
any user name; Packer will create that user with sudo access.
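
A sketch of what that might look like; the CentOS image name is illustrative
and the user name is arbitrary:

``` {.javascript}
{
  "type": "googlecompute",
  "account_file": "account.json",
  "project_id": "my-project",
  "source_image": "centos-6-v20150710",
  "zone": "us-central1-a",
  "ssh_username": "packer"
}
```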
The machine type must have a scratch disk, which means you can't use an
`f1-micro` or `g1-small` to build images.
@ -1,136 +0,0 @@
---
layout: "docs"
page_title: "Google Compute Builder"
description: |-
The `googlecompute` Packer builder is able to create images for use with Google Compute Engine (GCE) based on existing images. Google Compute Engine doesn't allow the creation of images from scratch.
---
# Google Compute Builder
Type: `googlecompute`
The `googlecompute` Packer builder is able to create [images](https://developers.google.com/compute/docs/images) for use with
[Google Compute Engine](https://cloud.google.com/products/compute-engine) (GCE) based on existing images. Google
Compute Engine doesn't allow the creation of images from scratch.
## Authentication
Authenticating with Google Cloud services requires at most one JSON file,
called the _account file_. The _account file_ is **not** required if you are running
the `googlecompute` Packer builder from a GCE instance with a properly-configured
[Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication).
### Running With a Compute Engine Service Account
If you run the `googlecompute` Packer builder from a GCE instance, you can configure that
instance to use a [Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication). This will allow Packer to authenticate
to Google Cloud without having to bake in a separate credential/authentication file.
To create a GCE instance that uses a service account, provide the required scopes when
launching the instance.
For `gcloud`, do this via the `--scopes` parameter:
```sh
gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
--scopes "https://www.googleapis.com/auth/compute" \
"https://www.googleapis.com/auth/devstorage.full_control" \
...
```
For the [Google Developers Console](https://console.developers.google.com):
1. Choose "Show advanced options"
2. Tick "Enable Compute Engine service account"
3. Choose "Read Write" for Compute
4. Choose "Full" for "Storage"
**The service account will be used automatically by Packer as long as there is
no _account file_ specified in the Packer configuration file.**
### Running Without a Compute Engine Service Account
The [Google Developers Console](https://console.developers.google.com) allows you to
create and download a credential file that will let you use the `googlecompute` Packer
builder anywhere. To make
the process more straightforward, it is documented here.
1. Log into the [Google Developers Console](https://console.developers.google.com)
and select a project.
2. Under the "APIs & Auth" section, click "Credentials."
3. Click the "Create new Client ID" button, select "Service account", and click "Create Client ID"
4. Click "Generate new JSON key" for the Service Account you just created. A JSON file will be downloaded automatically. This is your
_account file_.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful,
since no provisioners are defined, but it will effectively repackage an
existing GCE image. The account file is obtained in the previous section.
```javascript
{
"type": "googlecompute",
"account_file": "account.json",
"project_id": "my-project",
"source_image": "debian-7-wheezy-v20150127",
"zone": "us-central1-a"
}
```
## Configuration Reference
Configuration options are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
### Required:
* `project_id` (string) - The project ID that will be used to launch instances
and store images.
* `source_image` (string) - The source image to use to create the new image
from. Example: `"debian-7-wheezy-v20150127"`
* `zone` (string) - The zone in which to launch the instance used to create
the image. Example: `"us-central1-a"`
### Optional:
* `account_file` (string) - The JSON file containing your account credentials.
Not required if you run Packer on a GCE instance with a service account.
Instructions for creating file or using service accounts are above.
* `disk_size` (integer) - The size of the disk in GB.
This defaults to `10`, which is 10GB.
* `image_name` (string) - The unique name of the resulting image.
Defaults to `"packer-{{timestamp}}"`.
* `image_description` (string) - The description of the resulting image.
* `instance_name` (string) - A name to give the launched instance. Beware
that this must be unique. Defaults to `"packer-{{uuid}}"`.
* `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`.
* `metadata` (object of key/value strings)
* `network` (string) - The Google Compute network to use for the launched
instance. Defaults to `"default"`.
* `state_timeout` (string) - The time to wait for instance state changes.
Defaults to `"5m"`.
* `tags` (array of strings)
## Gotchas
CentOS images have root SSH access disabled by default. Set `ssh_username` to any user name; Packer will create that user with sudo access.
The machine type must have a scratch disk, which means you can't use an `f1-micro` or `g1-small` to build images.
@ -1,24 +1,28 @@
---
layout: "docs"
page_title: "Null Builder"
description: |-
The `null` Packer builder is not really a builder, it just sets up an SSH connection and runs the provisioners. It can be used to debug provisioners without incurring high wait times. It does not create any kind of image or artifact.
---
description: |
The `null` Packer builder is not really a builder, it just sets up an SSH
connection and runs the provisioners. It can be used to debug provisioners
without incurring high wait times. It does not create any kind of image or
artifact.
layout: docs
page_title: Null Builder
...
# Null Builder
Type: `null`
The `null` Packer builder is not really a builder; it just sets up an SSH connection
and runs the provisioners. It can be used to debug provisioners without
incurring high wait times. It does not create any kind of image or artifact.
The `null` Packer builder is not really a builder; it just sets up an SSH
connection and runs the provisioners. It can be used to debug provisioners
without incurring high wait times. It does not create any kind of image or
artifact.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since
no provisioners are defined, but it will connect to the specified host via ssh.
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will connect to the specified host via ssh.
```javascript
``` {.javascript}
{
"type": "null",
"ssh_host": "127.0.0.1",
@ -31,4 +35,3 @@ no provisioners are defined, but it will connect to the specified host via ssh.
The null builder has no configuration parameters other than the
[communicator](/docs/templates/communicator.html) settings.
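
A slightly fuller version of the example above might look like the following
sketch; the host and credentials are placeholders:

``` {.javascript}
{
  "type": "null",
  "ssh_host": "127.0.0.1",
  "ssh_username": "foo",
  "ssh_password": "bar"
}
```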
@ -1,25 +1,30 @@
---
layout: "docs"
page_title: "OpenStack Builder"
description: |-
The `openstack` Packer builder is able to create new images for use with OpenStack. The builder takes a source image, runs any provisioning necessary on the image after launching it, then creates a new reusable image. This reusable image can then be used as the foundation of new servers that are launched within OpenStack. The builder will create temporary keypairs that provide temporary access to the server while the image is being created. This simplifies configuration quite a bit.
---
description: |
The `openstack` Packer builder is able to create new images for use with
OpenStack. The builder takes a source image, runs any provisioning necessary on
the image after launching it, then creates a new reusable image. This reusable
image can then be used as the foundation of new servers that are launched within
OpenStack. The builder will create temporary keypairs that provide temporary
access to the server while the image is being created. This simplifies
configuration quite a bit.
layout: docs
page_title: OpenStack Builder
...
# OpenStack Builder
Type: `openstack`
The `openstack` Packer builder is able to create new images for use with
[OpenStack](http://www.openstack.org). The builder takes a source
image, runs any provisioning necessary on the image after launching it,
then creates a new reusable image. This reusable image can then be
used as the foundation of new servers that are launched within OpenStack.
The builder will create temporary keypairs that provide temporary access to
the server while the image is being created. This simplifies configuration
quite a bit.
[OpenStack](http://www.openstack.org). The builder takes a source image, runs
any provisioning necessary on the image after launching it, then creates a new
reusable image. This reusable image can then be used as the foundation of new
servers that are launched within OpenStack. The builder will create temporary
keypairs that provide temporary access to the server while the image is being
created. This simplifies configuration quite a bit.
The builder does _not_ manage images. Once it creates an image, it is up to
you to use it or delete it.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
## Configuration Reference
@ -28,81 +33,82 @@ segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created.
- `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created.
* `image_name` (string) - The name of the resulting image.
- `image_name` (string) - The name of the resulting image.
* `source_image` (string) - The ID or full URL to the base image to use.
This is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
- `source_image` (string) - The ID or full URL to the base image to use. This
is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
* `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable
`OS_USERNAME`, if set.
- `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable `OS_USERNAME`,
if set.
* `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variable
`OS_PASSWORD`, if set.
- `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variable `OS_PASSWORD`,
if set.
### Optional:
* `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this.
- `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this.
* `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
- `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
* `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect.
- `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect.
* `floating_ip_pool` (string) - The name of the floating IP pool to use
to allocate a floating IP. `use_floating_ip` must also be set to true
for this to have an effect.
- `floating_ip_pool` (string) - The name of the floating IP pool to use to
allocate a floating IP. `use_floating_ip` must also be set to true for this
to have an effect.
* `insecure` (boolean) - Whether or not the connection to OpenStack can be done
over an insecure connection. By default this is false.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be
done over an insecure connection. By default this is false.
* `networks` (array of strings) - A list of networks by UUID to attach
to this instance.
- `networks` (array of strings) - A list of networks by UUID to attach to
this instance.
* `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this.
If not specified, Packer will use the environment variable
`OS_TENANT_NAME`, if set.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this. If not specified,
Packer will use the environment variable `OS_TENANT_NAME`, if set.
* `security_groups` (array of strings) - A list of security groups by name
to add to this instance.
- `security_groups` (array of strings) - A list of security groups by name to
add to this instance.
* `region` (string) - The name of the region, such as "DFW", in which
to launch the server to create the AMI.
If not specified, Packer will use the environment variable
`OS_REGION_NAME`, if set.
- `region` (string) - The name of the region, such as "DFW", in which to
launch the server to create the AMI. If not specified, Packer will use the
environment variable `OS_REGION_NAME`, if set.
* `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
- `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
* `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false.
- `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false.
* `rackconnect_wait` (boolean) - For rackspace, whether or not to wait for
Rackconnect to assign the machine an IP address before connecting via SSH.
Defaults to false.
- `rackconnect_wait` (boolean) - For rackspace, whether or not to wait for
Rackconnect to assign the machine an IP address before connecting via SSH.
Defaults to false.
- `metadata` (object of key/value strings) - Glance metadata that will be
applied to the image. A short sketch showing how several of these optional
keys fit together follows this list.
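As referenced above, here is a short sketch combining several of these
optional keys. Every value is a placeholder chosen for illustration
(including the invented source image ID); none of them are defaults.

``` {.javascript}
{
  "type": "openstack",
  "username": "foo",
  "password": "bar",
  "flavor": "m1.small",
  "image_name": "packer-example",
  "source_image": "00000000-0000-0000-0000-000000000000",
  "use_floating_ip": true,
  "floating_ip_pool": "public",
  "security_groups": ["default"],
  "metadata": {
    "os_distro": "ubuntu",
    "built_by": "packer"
  }
}
```

Setting `use_floating_ip` together with `floating_ip_pool` lets OpenStack
allocate an address from the named pool, and the `metadata` keys end up on the
resulting Glance image.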
## Basic Example: Rackspace public cloud
Here is a basic example. This is a working example to build an
Ubuntu 12.04 LTS (Precise Pangolin) image on the Rackspace OpenStack cloud offering.
Here is a basic example. This is a working example to build an Ubuntu 12.04 LTS
(Precise Pangolin) image on the Rackspace OpenStack cloud offering.
```javascript
``` {.javascript}
{
"type": "openstack",
"username": "foo",
@ -117,10 +123,10 @@ Ubuntu 12.04 LTS (Precise Pangolin) on Rackspace OpenStack cloud offering.
## Basic Example: Private OpenStack cloud
This example builds an Ubuntu 14.04 image on a private OpenStack cloud,
powered by Metacloud.
This example builds an Ubuntu 14.04 image on a private OpenStack cloud, powered
by Metacloud.
```javascript
``` {.javascript}
{
"type": "openstack",
"ssh_username": "root",
@ -130,12 +136,12 @@ powered by Metacloud.
}
```
In this case, the connection information for connecting to OpenStack
doesn't appear in the template. That is because I source a standard
OpenStack script with environment variables set before I run this. This
script is setting environment variables like:
In this case, the connection information for connecting to OpenStack doesn't
appear in the template. That is because I source a standard OpenStack script
with environment variables set before I run this. This script is setting
environment variables like:
* `OS_AUTH_URL`
* `OS_TENANT_ID`
* `OS_USERNAME`
* `OS_PASSWORD`
- `OS_AUTH_URL`
- `OS_TENANT_ID`
- `OS_USERNAME`
- `OS_PASSWORD`

View File

@ -1,31 +1,31 @@
---
layout: "docs"
page_title: "Parallels Builder (from an ISO)"
description: |-
The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an ISO image.
---
description: |
The Parallels Packer builder is able to create Parallels Desktop for Mac virtual
machines and export them in the PVM format, starting from an ISO image.
layout: docs
page_title: 'Parallels Builder (from an ISO)'
...
# Parallels Builder (from an ISO)
Type: `parallels-iso`
The Parallels Packer builder is able to create
[Parallels Desktop for Mac](http://www.parallels.com/products/desktop/) virtual
machines and export them in the PVM format, starting from an
ISO image.
The Parallels Packer builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an ISO image.
The builder builds a virtual machine by creating a new virtual machine
from scratch, booting it, installing an OS, provisioning software within
the OS, then shutting it down. The result of the Parallels builder is a directory
containing all the files necessary to run the virtual machine portably.
The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, provisioning software within the OS, then
shutting it down. The result of the Parallels builder is a directory containing
all the files necessary to run the virtual machine portably.
## Basic Example
Here is a basic example. This example is not functional. It will start the
OS installer but then fail because we don't provide the preseed file for
Ubuntu to self-install. Still, the example serves to show the basic configuration:
Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
```javascript
``` {.javascript}
{
"type": "parallels-iso",
"guest_os_type": "ubuntu",
@ -40,219 +40,222 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio
}
```
It is important to add a `shutdown_command`. By default, Packer halts the
virtual machine and the file system may not be synced. Thus, changes made in a
It is important to add a `shutdown_command`. By default, Packer halts the virtual
machine and the file system may not be synced. Thus, changes made in a
provisioner might not be saved.
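For example, on a typical Linux guest a graceful shutdown might be requested
with something like the following; the password and the passwordless-sudo
setup are assumptions about the guest, not defaults:

``` {.javascript}
{
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```

Any command that powers the machine off cleanly will do; adjust it to match
how the guest is actually configured.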
## Configuration Reference
There are many configuration options available for the Parallels builder.
They are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
There are many configuration options available for the Parallels builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior
to booting a virtual machine with the ISO attached. The type of the
checksum is specified with `iso_checksum_type`, documented below.
- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior to
booting a virtual machine with the ISO attached. The type of the checksum is
specified with `iso_checksum_type`, documented below.
* `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen
from time to time.
- `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen
from time to time.
* `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file).
If this is an HTTP URL, Packer will download it and cache it between
runs.
- `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file). If
this is an HTTP URL, Packer will download it and cache it between runs.
* `ssh_username` (string) - The username to use to SSH into the machine
once the OS is installed.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
* `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other".
This can be omitted only if `parallels_tools_mode` is "disable".
- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2"
and "other". This can be omitted only if `parallels_tools_mode`
is "disable".
### Optional:
* `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot
command. If this is not specified, it is assumed the installer will start
itself.
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified,
the default is 10 seconds.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
* `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB).
- `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB).
* `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful
for unattended Windows installs, which look for an `Autounattend.xml` file
on removable media. By default, no floppy will be attached. All files
listed in this setting get placed into the root directory of the floppy
and the floppy is attached as the first floppy device. Currently, no
support exists for creating sub-directories on the floppy. Wildcard
characters (*, ?, and []) are allowed. Directory names are also allowed,
which will add all the files found in the directory to the floppy.
- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (\*, ?,
and \[\]) are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
* `guest_os_type` (string) - The guest OS type being installed. By default
this is "other", but you can get _dramatic_ performance improvements by
setting this to the proper value. To view all available values for this
run `prlctl create x --distribution list`. Setting the correct value hints to
Parallels Desktop how to optimize the virtual hardware to work best with
that operating system.
- `guest_os_type` (string) - The guest OS type being installed. By default
this is "other", but you can get *dramatic* performance improvements by
setting this to the proper value. To view all available values for this run
`prlctl create x --distribution list`. Setting the correct value hints to
Parallels Desktop how to optimize the virtual hardware to work best with
that operating system.
* `hard_drive_interface` (string) - The type of controller that the
hard drives are attached to, defaults to "sata". Valid options are
"sata", "ide", and "scsi".
- `hard_drive_interface` (string) - The type of controller that the hard
drives are attached to, defaults to "sata". Valid options are "sata", "ide",
and "scsi".
* `host_interfaces` (array of strings) - A list of which interfaces on the
host should be searched for an IP address. The first IP address found on
one of these will be used as `{{ .HTTPIP }}` in the `boot_command`.
Defaults to ["en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7", "en8",
"en9", "ppp0", "ppp1", "ppp2"].
- `host_interfaces` (array of strings) - A list of which interfaces on the
host should be searched for an IP address. The first IP address found on one
of these will be used as `{{ .HTTPIP }}` in the `boot_command`. Defaults to
\["en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7", "en8", "en9",
"ppp0", "ppp1", "ppp2"\].
* `http_directory` (string) - Path to a directory to serve using an HTTP
server. The files in this directory will be available over HTTP that will
be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP
server will be started. The address and port of the HTTP server will be
available as variables in `boot_command`. This is covered in more detail
below.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`.
Because Packer often runs in parallel, Packer will choose a randomly available
port in this range to run the HTTP server. If you want to force the HTTP
server to be on one port, make this minimum and maximum port the same.
By default the values are 8000 and 9000, respectively.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
* `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to download
or while downloading a single URL, it will move on to the next. All URLs
must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to
download or while downloading a single URL, it will move on to the next. All
URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
* `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
* `parallels_tools_guest_path` (string) - The path in the virtual machine to upload
Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload".
This is a [configuration template](/docs/templates/configuration-templates.html)
that has a single valid variable: `Flavor`, which will be the value of
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which
should upload into the login directory of the user.
- `parallels_tools_guest_path` (string) - The path in the virtual machine to
upload Parallels Tools. This only takes effect if `parallels_tools_mode`
is "upload". This is a [configuration
template](/docs/templates/configuration-templates.html) that has a single
valid variable: `Flavor`, which will be the value of
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
which should upload into the login directory of the user.
* `parallels_tools_mode` (string) - The method by which Parallels Tools are made
available to the guest for installation. Valid options are "upload", "attach",
or "disable". If the mode is "attach" the Parallels Tools ISO will be attached
as a CD device to the virtual machine. If the mode is "upload" the Parallels
Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload".
- `parallels_tools_mode` (string) - The method by which Parallels Tools are
made available to the guest for installation. Valid options are "upload",
"attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
be attached as a CD device to the virtual machine. If the mode is "upload"
the Parallels Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload".
* `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in
order to further customize the virtual machine being created. The value of
this is an array of commands to execute. The commands are executed in the order
defined in the template. For each command, the command is defined itself as an
array of strings, where each string represents a single argument on the
command-line to `prlctl` (but excluding `prlctl` itself). Each arg is treated
as a [configuration template](/docs/templates/configuration-templates.html),
where the `Name` variable is replaced with the VM name. More details on how
to use `prlctl` are below.
- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute
in order to further customize the virtual machine being created. The value
of this is an array of commands to execute. The commands are executed in the
order defined in the template. For each command, the command is defined
itself as an array of strings, where each string represents a single
argument on the command-line to `prlctl` (but excluding `prlctl` itself).
Each arg is treated as a [configuration
template](/docs/templates/configuration-templates.html), where the `Name`
variable is replaced with the VM name. More details on how to use `prlctl`
are below.
* `prlctl_post` (array of array of strings) - Identical to `prlctl`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
that it is run after the virtual machine is shutdown, and before the virtual
machine is exported.
* `prlctl_version_file` (string) - The path within the virtual machine to upload
a file that contains the `prlctl` version that was used to create the machine.
This information can be useful for provisioning. By default this is
".prlctl_version", which will generally upload it into the home directory.
- `prlctl_version_file` (string) - The path within the virtual machine to
upload a file that contains the `prlctl` version that was used to create
the machine. This information can be useful for provisioning. By default
this is ".prlctl\_version", which will generally upload it into the
home directory.
* `shutdown_command` (string) - The command to use to gracefully shut down
the machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.
If it doesn't shut down in this time, it is an error. By default, the timeout
is "5m", or five minutes.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.
* `vm_name` (string) - This is the name of the PVM directory for the new
virtual machine, without the file extension. By default this is
"packer-BUILDNAME", where "BUILDNAME" is the name of the build.
- `vm_name` (string) - This is the name of the PVM directory for the new
virtual machine, without the file extension. By default this is
"packer-BUILDNAME", where "BUILDNAME" is the name of the build.
## Boot Command
The `boot_command` configuration is very important: it specifies the keys
to type when the virtual machine is first booted in order to start the
OS installer. This command is typed after `boot_wait`, which gives the
virtual machine some time to actually load the ISO.
The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.
As documented above, the `boot_command` is an array of strings. The
strings are all typed in sequence. It is an array only to improve readability
within the template.
As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.
The boot command is "typed" character for character (using the Parallels
Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html))
simulating a human actually typing the keyboard. There are a set of special
keys available. If these are in your boot command, they will be replaced by
the proper key:
simulating a human actually typing the keyboard. There are a set of special keys
available. If these are in your boot command, they will be replaced by the
proper key:
* `<bs>` - Backspace
- `<bs>` - Backspace
* `<del>` - Delete
- `<del>` - Delete
* `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
* `<esc>` - Simulates pressing the escape key.
- `<esc>` - Simulates pressing the escape key.
* `<tab>` - Simulates pressing the tab key.
- `<tab>` - Simulates pressing the tab key.
* `<f1>` - `<f12>` - Simulates pressing a function key.
- `<f1>` - `<f12>` - Simulates pressing a function key.
* `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
* `<spacebar>` - Simulates pressing the spacebar.
- `<spacebar>` - Simulates pressing the spacebar.
* `<insert>` - Simulates pressing the insert key.
- `<insert>` - Simulates pressing the insert key.
* `<home>` `<end>` - Simulates pressing the home and end keys.
- `<home>` `<end>` - Simulates pressing the home and end keys.
* `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
* `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This
is useful if you generally have to wait for the UI to update before typing more.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
sending any additional keys. This is useful if you generally have to wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html).
The available variables are:
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will
be blank!
- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will be
blank!
Example boot command. This is actually a working boot command used to start
an Ubuntu 12.04 installer:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` {.text}
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -267,17 +270,18 @@ an Ubuntu 12.04 installer:
```
## prlctl Commands
In order to perform extra customization of the virtual machine, a template can
define extra calls to `prlctl` to perform.
[prlctl](http://download.parallels.com/desktop/v9/ga/docs/en_US/Parallels%20Command%20Line%20Reference%20Guide.pdf)
is the command-line interface to Parallels Desktop. It can be used to configure
the virtual machine, such as set RAM, CPUs, etc.
Extra `prlctl` commands are defined in the template in the `prlctl` section.
An example is shown below that sets the memory and number of CPUs within the
Extra `prlctl` commands are defined in the template in the `prlctl` section. An
example is shown below that sets the memory and number of CPUs within the
virtual machine:
```javascript
``` {.javascript}
{
"prlctl": [
["set", "{{.Name}}", "--memsize", "1024"],
@ -291,7 +295,7 @@ executed in the order defined. So in the above example, the memory will be set
followed by the CPUs.
Each command itself is an array of strings, where each string is an argument to
`prlctl`. Each argument is treated as a
[configuration template](/docs/templates/configuration-templates.html). The only
available variable is `Name` which is replaced with the unique name of the VM,
which is required for many `prlctl` calls.
`prlctl`. Each argument is treated as a [configuration
template](/docs/templates/configuration-templates.html). The only available
variable is `Name` which is replaced with the unique name of the VM, which is
required for many `prlctl` calls.

View File

@ -1,30 +1,31 @@
---
layout: "docs"
page_title: "Parallels Builder (from a PVM)"
description: |-
This Parallels builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an existing PVM (exported virtual machine image).
---
description: |
This Parallels builder is able to create Parallels Desktop for Mac virtual
machines and export them in the PVM format, starting from an existing PVM
(exported virtual machine image).
layout: docs
page_title: 'Parallels Builder (from a PVM)'
...
# Parallels Builder (from a PVM)
Type: `parallels-pvm`
This Parallels builder is able to create
[Parallels Desktop for Mac](http://www.parallels.com/products/desktop/)
virtual machines and export them in the PVM format, starting from an
existing PVM (exported virtual machine image).
This Parallels builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an existing PVM (exported virtual machine
image).
The builder builds a virtual machine by importing an existing PVM
file. It then boots this image, runs provisioners on this new VM, and
exports that VM to create the image. The imported machine is deleted prior
to finishing the build.
The builder builds a virtual machine by importing an existing PVM file. It then
boots this image, runs provisioners on this new VM, and exports that VM to
create the image. The imported machine is deleted prior to finishing the build.
## Basic Example
Here is a basic example. This example is functional if you have a PVM matching
the settings here.
```javascript
``` {.javascript}
{
"type": "parallels-pvm",
"parallels_tools_flavor": "lin",
@ -36,175 +37,183 @@ the settings here.
}
```
It is important to add a `shutdown_command`. By default, Packer halts the
virtual machine and the file system may not be synced. Thus, changes made in a
It is important to add a `shutdown_command`. By default, Packer halts the virtual
machine and the file system may not be synced. Thus, changes made in a
provisioner might not be saved.
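As a sketch, a graceful shutdown with a longer timeout could be configured as
shown below; the guest password and the ten-minute value are illustrative
assumptions, not defaults:

``` {.javascript}
{
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
  "shutdown_timeout": "10m"
}
```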
## Configuration Reference
There are many configuration options available for the Parallels builder.
They are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
There are many configuration options available for the Parallels builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `source_path` (string) - The path to a PVM directory that acts as
the source of this build.
- `source_path` (string) - The path to a PVM directory that acts as the source
of this build.
* `ssh_username` (string) - The username to use to SSH into the machine
once the OS is installed.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
* `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other".
This can be omitted only if `parallels_tools_mode` is "disable".
- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2"
and "other". This can be omitted only if `parallels_tools_mode`
is "disable".
### Optional:
* `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the boot
command. If this is not specified, it is assumed the installer will start
itself.
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
* `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't specified,
the default is 10 seconds.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
* `floppy_files` (array of strings) - A list of files to put onto a floppy
disk that is attached when the VM is booted for the first time. This is
most useful for unattended Windows installs, which look for an
`Autounattend.xml` file on removable media. By default no floppy will
be attached. The files listed in this configuration will all be put
into the root directory of the floppy disk; sub-directories are not supported.
- `floppy_files` (array of strings) - A list of files to put onto a floppy
disk that is attached when the VM is booted for the first time. This is most
useful for unattended Windows installs, which look for an `Autounattend.xml`
file on removable media. By default no floppy will be attached. The files
listed in this configuration will all be put into the root directory of the
floppy disk; sub-directories are not supported.
* `reassign_mac` (boolean) - If this is "false" the MAC address of the first
NIC will be reused when imported; otherwise a new MAC address will be generated by
Parallels. Defaults to "false".
- `reassign_mac` (boolean) - If this is "false" the MAC address of the first
NIC will be reused when imported; otherwise a new MAC address will be generated
by Parallels. Defaults to "false".
* `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
* `parallels_tools_guest_path` (string) - The path in the VM to upload
Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload".
This is a [configuration template](/docs/templates/configuration-templates.html)
that has a single valid variable: `Flavor`, which will be the value of
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which
should upload into the login directory of the user.
- `parallels_tools_guest_path` (string) - The path in the VM to upload
Parallels Tools. This only takes effect if `parallels_tools_mode`
is "upload". This is a [configuration
template](/docs/templates/configuration-templates.html) that has a single
valid variable: `Flavor`, which will be the value of
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
which should upload into the login directory of the user.
* `parallels_tools_mode` (string) - The method by which Parallels Tools are made
available to the guest for installation. Valid options are "upload", "attach",
or "disable". If the mode is "attach" the Parallels Tools ISO will be attached
as a CD device to the virtual machine. If the mode is "upload" the Parallels
Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload".
- `parallels_tools_mode` (string) - The method by which Parallels Tools are
made available to the guest for installation. Valid options are "upload",
"attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
be attached as a CD device to the virtual machine. If the mode is "upload"
the Parallels Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload".
* `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in
order to further customize the virtual machine being created. The value of
this is an array of commands to execute. The commands are executed in the order
defined in the template. For each command, the command is defined itself as an
array of strings, where each string represents a single argument on the
command-line to `prlctl` (but excluding `prlctl` itself). Each arg is treated
as a [configuration template](/docs/templates/configuration-templates.html),
where the `Name` variable is replaced with the VM name. More details on how
to use `prlctl` are below.
- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute
in order to further customize the virtual machine being created. The value
of this is an array of commands to execute. The commands are executed in the
order defined in the template. For each command, the command is defined
itself as an array of strings, where each string represents a single
argument on the command-line to `prlctl` (but excluding `prlctl` itself).
Each arg is treated as a [configuration
template](/docs/templates/configuration-templates.html), where the `Name`
variable is replaced with the VM name. More details on how to use `prlctl`
are below.
* `prlctl_post` (array of array of strings) - Identical to `prlctl`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
that it is run after the virtual machine is shutdown, and before the virtual
machine is exported.
* `prlctl_version_file` (string) - The path within the virtual machine to upload
a file that contains the `prlctl` version that was used to create the machine.
This information can be useful for provisioning. By default this is
".prlctl_version", which will generally upload it into the home directory.
- `prlctl_version_file` (string) - The path within the virtual machine to
upload a file that contains the `prlctl` version that was used to create
the machine. This information can be useful for provisioning. By default
this is ".prlctl\_version", which will generally upload it into the
home directory.
* `shutdown_command` (string) - The command to use to gracefully shut down
the machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.
If it doesn't shut down in this time, it is an error. By default, the timeout
is "5m", or five minutes.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.
* `vm_name` (string) - This is the name of the virtual machine when it is
imported as well as the name of the PVM directory when the virtual machine is
exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is
the name of the build.
- `vm_name` (string) - This is the name of the virtual machine when it is
imported as well as the name of the PVM directory when the virtual machine
is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
name of the build.
## Parallels Tools
After the virtual machine is up and the operating system is installed, Packer
uploads the Parallels Tools into the virtual machine. The path where they are
uploaded is controllable by `parallels_tools_path`, and defaults to
"prl-tools.iso". Without an absolute path, it is uploaded to the home directory
of the SSH user. Parallels Tools ISOs can be found in:
"/Applications/Parallels Desktop.app/Contents/Resources/Tools/"
of the SSH user. Parallels Tools ISOs can be found in: "/Applications/Parallels
Desktop.app/Contents/Resources/Tools/"
## Boot Command
The `boot_command` specifies the keys to type when the virtual machine is first booted. This command is typed after `boot_wait`.
The `boot_command` specifies the keys to type when the virtual machine is first
booted. This command is typed after `boot_wait`.
As documented above, the `boot_command` is an array of strings. The
strings are all typed in sequence. It is an array only to improve readability
within the template.
As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.
The boot command is "typed" character for character (using the Parallels
Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html))
simulating a human actually typing the keyboard. There are a set of special
keys available. If these are in your boot command, they will be replaced by
the proper key:
simulating a human actually typing the keyboard. There are a set of special keys
available. If these are in your boot command, they will be replaced by the
proper key:
* `<bs>` - Backspace
- `<bs>` - Backspace
* `<del>` - Delete
- `<del>` - Delete
* `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
* `<esc>` - Simulates pressing the escape key.
- `<esc>` - Simulates pressing the escape key.
* `<tab>` - Simulates pressing the tab key.
- `<tab>` - Simulates pressing the tab key.
* `<f1>` - `<f12>` - Simulates pressing a function key.
- `<f1>` - `<f12>` - Simulates pressing a function key.
* `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
* `<spacebar>` - Simulates pressing the spacebar.
- `<spacebar>` - Simulates pressing the spacebar.
* `<insert>` - Simulates pressing the insert key.
- `<insert>` - Simulates pressing the insert key.
* `<home>` `<end>` - Simulates pressing the home and end keys.
- `<home>` `<end>` - Simulates pressing the home and end keys.
* `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
* `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This
is useful if you generally have to wait for the UI to update before typing more.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
sending any additional keys. This is useful if you generally have to wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html).
The available variables are:
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
## prlctl Commands
In order to perform extra customization of the virtual machine, a template can
define extra calls to `prlctl` to perform.
[prlctl](http://download.parallels.com/desktop/v9/ga/docs/en_US/Parallels%20Command%20Line%20Reference%20Guide.pdf)
is the command-line interface to Parallels Desktop. It can be used to configure
the virtual machine, such as set RAM, CPUs, etc.
Extra `prlctl` commands are defined in the template in the `prlctl` section.
An example is shown below that sets the memory and number of CPUs within the
Extra `prlctl` commands are defined in the template in the `prlctl` section. An
example is shown below that sets the memory and number of CPUs within the
virtual machine:
```javascript
``` {.javascript}
{
"prlctl": [
["set", "{{.Name}}", "--memsize", "1024"],
@ -218,7 +227,7 @@ executed in the order defined. So in the above example, the memory will be set
followed by the CPUs.
Each command itself is an array of strings, where each string is an argument to
`prlctl`. Each argument is treated as a
[configuration template](/docs/templates/configuration-templates.html). The only
available variable is `Name` which is replaced with the unique name of the VM,
which is required for many `prlctl` calls.
`prlctl`. Each argument is treated as a [configuration
template](/docs/templates/configuration-templates.html). The only available
variable is `Name` which is replaced with the unique name of the VM, which is
required for many `prlctl` calls.

View File

@ -1,34 +1,37 @@
---
layout: "docs"
page_title: "Parallels Builder"
description: |-
The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format.
---
description: |
The Parallels Packer builder is able to create Parallels Desktop for Mac virtual
machines and export them in the PVM format.
layout: docs
page_title: Parallels Builder
...
# Parallels Builder
The Parallels Packer builder is able to create [Parallels Desktop for Mac](http://www.parallels.com/products/desktop/) virtual machines and export them in the PVM format.
The Parallels Packer builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format.
Packer actually comes with multiple builders able to create Parallels
machines, depending on the strategy you want to use to build the image.
Packer supports the following Parallels builders:
Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:
* [parallels-iso](/docs/builders/parallels-iso.html) - Starts from
an ISO file, creates a brand new Parallels VM, installs an OS,
provisions software within the OS, then exports that machine to create
an image. This is best for people who want to start from scratch.
* [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder
imports an existing PVM file, runs provisioners on top of that VM,
and exports that machine to create an image. This is best if you have
an existing Parallels VM export you want to use as the source. As an
additional benefit, you can feed the artifact of this builder back into
itself to iterate on a machine.
- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
file, creates a brand new Parallels VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
existing PVM file, runs provisioners on top of that VM, and exports that
machine to create an image. This is best if you have an existing Parallels
VM export you want to use as the source. As an additional benefit, you can
feed the artifact of this builder back into itself to iterate on a machine.
## Requirements
In addition to [Parallels Desktop for Mac](http://www.parallels.com/products/desktop/), this requires the
[Parallels Virtualization SDK](http://www.parallels.com/downloads/desktop/).
In addition to [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/), this requires the [Parallels
Virtualization SDK](http://www.parallels.com/downloads/desktop/).
The SDK can be installed by downloading and following the instructions in the dmg.
The SDK can be installed by downloading and following the instructions in the
dmg.

View File

@ -1,30 +1,31 @@
---
layout: "docs"
page_title: "QEMU Builder"
description: |-
The Qemu Packer builder is able to create KVM and Xen virtual machine images. Support for Xen is experimental at this time.
---
description: |
The Qemu Packer builder is able to create KVM and Xen virtual machine images.
Support for Xen is experimental at this time.
layout: docs
page_title: QEMU Builder
...
# QEMU Builder
Type: `qemu`
The Qemu Packer builder is able to create [KVM](http://www.linux-kvm.org)
and [Xen](http://www.xenproject.org) virtual machine images. Support
for Xen is experimental at this time.
The Qemu Packer builder is able to create [KVM](http://www.linux-kvm.org) and
[Xen](http://www.xenproject.org) virtual machine images. Support for Xen is
experimental at this time.
The builder builds a virtual machine by creating a new virtual machine
from scratch, booting it, installing an OS, rebooting the machine with the
boot media as the virtual hard drive, provisioning software within
the OS, then shutting it down. The result of the Qemu builder is a directory
containing the image file necessary to run the virtual machine on KVM or Xen.
The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, rebooting the machine with the boot media
as the virtual hard drive, provisioning software within the OS, then shutting it
down. The result of the Qemu builder is a directory containing the image file
necessary to run the virtual machine on KVM or Xen.
## Basic Example
Here is a basic example. This example is functional so long as you fix up
paths to files, URLs for ISOs, and checksums.
Here is a basic example. This example is functional so long as you fix up paths
to files, URLs for ISOs, and checksums.
```javascript
``` {.javascript}
{
"builders":
[
@ -62,153 +63,153 @@ paths to files, URLS for ISOs and checksums.
}
```
A working CentOS 6.x kickstart file can be found
[at this URL](https://gist.github.com/mitchellh/7328271/#file-centos6-ks-cfg), adapted from an unknown source.
Place this file in the http directory with the proper name. For the
example above, it should go into "httpdir" with a name of "centos6-ks.cfg".
A working CentOS 6.x kickstart file can be found [at this
URL](https://gist.github.com/mitchellh/7328271/#file-centos6-ks-cfg), adapted
from an unknown source. Place this file in the http directory with the proper
name. For the example above, it should go into "httpdir" with a name of
"centos6-ks.cfg".
## Configuration Reference
There are many configuration options available for the Qemu builder.
They are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
There are many configuration options available for the Qemu builder. They are
organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
can be configured for this builder.
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
* `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior
to booting a virtual machine with the ISO attached. The type of the
checksum is specified with `iso_checksum_type`, documented below.
- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior to
booting a virtual machine with the ISO attached. The type of the checksum is
specified with `iso_checksum_type`, documented below.
* `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "md5", "sha1", "sha256", or "sha512" currently.
- `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "md5", "sha1", "sha256", or
"sha512" currently.
* `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file).
If this is an HTTP URL, Packer will download it and cache it between
runs.
- `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file). If
this is an HTTP URL, Packer will download it and cache it between runs.
* `ssh_username` (string) - The username to use to SSH into the machine
once the OS is installed.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
### Optional:
* `accelerator` (string) - The accelerator type to use when running the VM.
This may have a value of either "none", "kvm", "tcg", or "xen" and you must have that
support on the machine on which you run the builder. By default "kvm"
is used.
- `accelerator` (string) - The accelerator type to use when running the VM.
This may have a value of either "none", "kvm", "tcg", or "xen" and you must
have that support on the machine on which you run the builder. By default
"kvm" is used.
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.

- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.

- `disk_cache` (string) - The cache mode to use for disk. Allowed values
include any of "writethrough", "writeback", "none", "unsafe"
or "directsync". By default, this is set to "writeback".

- `disk_discard` (string) - The discard mode to use for disk. Allowed values
include any of "unmap" or "ignore". By default, this is set to "ignore".

- `disk_image` (boolean) - Packer defaults to building from an ISO file, this
parameter controls whether the ISO URL supplied is actually a bootable
QEMU image. When this value is set to true, the machine will clone the
source, resize it according to `disk_size` and boot the image.

- `disk_interface` (string) - The interface to use for the disk. Allowed
values include any of "ide," "scsi" or "virtio." Note also that any boot
commands or kickstart type scripts must have proper adjustments for
resulting device names. The Qemu builder uses "virtio" by default.

- `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB).

- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (*, ?, and [])
are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
* `format` (string) - Either "qcow2" or "raw", this specifies the output
format of the virtual machine image. This defaults to "qcow2".
- `format` (string) - Either "qcow2" or "raw", this specifies the output
format of the virtual machine image. This defaults to "qcow2".
* `headless` (boolean) - Packer defaults to building QEMU virtual machines by
launching a GUI that shows the console of the machine being built.
When this value is set to true, the machine will start without a console.
- `headless` (boolean) - Packer defaults to building QEMU virtual machines by
launching a GUI that shows the console of the machine being built. When this
value is set to true, the machine will start without a console.
* `http_directory` (string) - Path to a directory to serve using an HTTP
server. The files in this directory will be available over HTTP that will
be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP
server will be started. The address and port of the HTTP server will be
available as variables in `boot_command`. This is covered in more detail
below.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`.
Because Packer often runs in parallel, Packer will choose a randomly available
port in this range to run the HTTP server. If you want to force the HTTP
server to be on one port, make this minimum and maximum port the same.
By default the values are 8000 and 9000, respectively.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
* `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to download
or while downloading a single URL, it will move on to the next. All URLs
must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to
download or while downloading a single URL, it will move on to the next. All
URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
* `machine_type` (string) - The type of machine emulation to use. Run
your qemu binary with the flags `-machine help` to list available types
for your system. This defaults to "pc".
- `machine_type` (string) - The type of machine emulation to use. Run your
qemu binary with the flags `-machine help` to list available types for
your system. This defaults to "pc".
* `net_device` (string) - The driver to use for the network interface. Allowed
values "ne2k_pci," "i82551," "i82557b," "i82559er," "rtl8139," "e1000,"
"pcnet" or "virtio." The Qemu builder uses "virtio" by default.
- `net_device` (string) - The driver to use for the network interface. Allowed
values "ne2k\_pci," "i82551," "i82557b," "i82559er," "rtl8139," "e1000,"
"pcnet" or "virtio." The Qemu builder uses "virtio" by default.
* `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
* `qemu_binary` (string) - The name of the Qemu binary to look for. This
defaults to "qemu-system-x86_64", but may need to be changed for some
platforms. For example "qemu-kvm", or "qemu-system-i386" may be a better
choice for some systems.
- `qemu_binary` (string) - The name of the Qemu binary to look for. This
defaults to "qemu-system-x86\_64", but may need to be changed for
some platforms. For example "qemu-kvm", or "qemu-system-i386" may be a
better choice for some systems.
* `qemuargs` (array of array of strings) - Allows complete control over
the qemu command line (though not, at this time, qemu-img). Each array
of strings makes up a command line switch that overrides matching default
switch/value pairs. Any value specified as an empty string is ignored.
All values after the switch are concatenated with no separator.
- `qemuargs` (array of array of strings) - Allows complete control over the
qemu command line (though not, at this time, qemu-img). Each array of
strings makes up a command line switch that overrides matching default
switch/value pairs. Any value specified as an empty string is ignored. All
values after the switch are concatenated with no separator.
~> **Warning:** The qemu command line allows extreme flexibility, so beware
of conflicting arguments causing failures of your run. For instance, using
--no-acpi could break the ability to send power signal type commands (e.g.,
shutdown -P now) to the virtual machine, thus preventing proper shutdown. To see
the defaults, look in the packer.log file and search for the qemu-system-x86
command. The arguments are all printed for review.
The following shows a sample usage:

``` {.javascript}
// ...
"qemuargs": [
[ "-m", "1024M" ],
// ...
```
would produce the following (not including other defaults supplied by the
builder and not otherwise conflicting with the qemuargs):
<pre class="prettyprint">
qemu-system-x86 -m 1024m --no-acpi -netdev user,id=mynet0,hostfwd=hostip:hostport-guestip:guestport -device virtio-net,netdev=mynet0
</pre>
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.

- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.

- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
maximum port to use for the SSH port on the host machine which is forwarded
to the SSH port on the guest machine. Because Packer often runs in parallel,
Packer will choose a randomly available port in this range to use as the
host port.

- `vm_name` (string) - This is the name of the image (QCOW2 or IMG) file for
the new virtual machine, without the file extension. By default this is
"packer-BUILDNAME", where "BUILDNAME" is the name of the build.

- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
to use for the VNC port on the host machine which is forwarded to the VNC
port on the guest machine. Because Packer often runs in parallel, Packer
will choose a randomly available port in this range to use as the host port.
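
As a hedged sketch of how a few of the optional settings above are commonly
combined with the required ones: every value below is illustrative rather than
a default, and the shutdown command assumes the SSH user can run `shutdown`
via sudo with the password shown.

``` {.javascript}
{
  "type": "qemu",
  "iso_url": "http://example.com/centos-6.7-x86_64-minimal.iso",
  "iso_checksum_type": "sha256",
  "iso_checksum": "replace-with-the-real-sha256-checksum",
  "ssh_username": "packer",
  "accelerator": "kvm",
  "disk_size": 10000,
  "format": "qcow2",
  "headless": true,
  "http_directory": "httpdir",
  "output_directory": "output-centos6",
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```
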
## Boot Command
The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.

As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.

The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard. There are a set of
special keys available. If these are in your boot command, they will be replaced
by the proper key:
- `<bs>` - Backspace

- `<del>` - Delete

- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

- `<esc>` - Simulates pressing the escape key.

- `<tab>` - Simulates pressing the tab key.

- `<f1>` - `<f12>` - Simulates pressing a function key.

- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

- `<spacebar>` - Simulates pressing the spacebar.

- `<insert>` - Simulates pressing the insert key.

- `<home>` `<end>` - Simulates pressing the home and end keys.

- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
sending any additional keys. This is useful if you have to generally wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will be
blank!
Example boot command. This is actually a working boot command used to start a
CentOS 6.4 installer:
``` {.javascript}
"boot_command":
[
"<tab><wait>",

---
description: |
    The VirtualBox Packer builder is able to create VirtualBox virtual machines
    and export them in the OVF format, starting from an ISO image.
layout: docs
page_title: 'VirtualBox Builder (from an ISO)'
...
# VirtualBox Builder (from an ISO)
Type: `virtualbox-iso`
The VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org/) virtual machines and export them in
the OVF format, starting from an ISO image.

The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, provisioning software within the OS, then
shutting it down. The result of the VirtualBox builder is a directory containing
all the files necessary to run the virtual machine portably.
## Basic Example
Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
``` {.javascript}
{
"type": "virtualbox-iso",
"guest_os_type": "Ubuntu_64",
  // ...
}
```
It is important to add a `shutdown_command`. By default Packer halts the virtual
machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
## Configuration Reference
There are many configuration options available for the VirtualBox builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior to
booting a virtual machine with the ISO attached. The type of the checksum is
specified with `iso_checksum_type`, documented below.

- `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen
from time to time.

- `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file). If
this is an HTTP URL, Packer will download it and cache it between runs.

- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.

- `ssh_password` (string) - The password to use to SSH into the machine once
the OS is installed.
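
To make the list above concrete, a minimal required-only configuration might
look like this sketch; the ISO URL, checksum, and credentials are placeholders,
not suggested values.

``` {.javascript}
{
  "type": "virtualbox-iso",
  "iso_url": "http://example.com/ubuntu-12.04.5-server-amd64.iso",
  "iso_checksum_type": "md5",
  "iso_checksum": "replace-with-the-real-md5-checksum",
  "ssh_username": "packer",
  "ssh_password": "packer"
}
```
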
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.

- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.

- `disk_size` (integer) - The size, in megabytes, of the hard disk to create
for the VM. By default, this is 40000 (about 40 GB).

- `export_opts` (array of strings) - Additional options to pass to the
`VBoxManage export`. This can be useful for passing product information to
include in the resulting appliance file.

- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (*, ?, and [])
are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.

- `format` (string) - Either "ovf" or "ova", this specifies the output format
of the exported virtual machine. This defaults to "ovf".
- `guest_additions_mode` (string) - The method by which guest additions are
made available to the guest for installation. Valid options are "upload",
"attach", or "disable". If the mode is "attach" the guest additions ISO will
be attached as a CD device to the virtual machine. If the mode is "upload"
the guest additions ISO will be uploaded to the path specified by
`guest_additions_path`. The default value is "upload". If "disable" is used,
guest additions won't be downloaded, either.

- `guest_additions_path` (string) - The path on the guest virtual machine
where the VirtualBox guest additions ISO will be uploaded. By default this
is "VBoxGuestAdditions.iso" which should upload into the login directory of
the user. This is a [configuration
template](/docs/templates/configuration-templates.html) where the `Version`
variable is replaced with the VirtualBox version.

- `guest_additions_sha256` (string) - The SHA256 checksum of the guest
additions ISO that will be uploaded to the guest VM. By default the
checksums will be downloaded from the VirtualBox website, so this only needs
to be set if you want to be explicit about the checksum.

- `guest_additions_url` (string) - The URL to the guest additions ISO
to upload. This can also be a file URL if the ISO is at a local path. By
default, the VirtualBox builder will attempt to find the guest additions ISO
on the local file system. If it is not available locally, the builder will
download the proper guest additions ISO from the internet.

- `guest_os_type` (string) - The guest OS type being installed. By default
this is "other", but you can get *dramatic* performance improvements by
setting this to the proper value. To view all available values for this run
`VBoxManage list ostypes`. Setting the correct value hints to VirtualBox how
to optimize the virtual hardware to work best with that operating system.

- `hard_drive_interface` (string) - The type of controller that the primary
hard drive is attached to, defaults to "ide". When set to "sata", the drive
is attached to an AHCI SATA controller. When set to "scsi", the drive is
attached to an LsiLogic SCSI controller.

- `headless` (boolean) - Packer defaults to building VirtualBox virtual
machines by launching a GUI that shows the console of the machine
being built. When this value is set to true, the machine will start without
a console.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.

- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.

- `iso_interface` (string) - The type of controller that the ISO is attached
to, defaults to "ide". When set to "sata", the drive is attached to an AHCI
SATA controller.

- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to
download or while downloading a single URL, it will move on to the next. All
URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.

- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine, unless a
shutdown command takes place inside a script, in which case this may safely
be omitted. If one or more scripts require a reboot, it is suggested to leave
this blank (since reboots may fail) and specify the final shutdown command in
your last script.

- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.

- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
maximum port to use for the SSH port on the host machine which is forwarded
to the SSH port on the guest machine. Because Packer often runs in parallel,
Packer will choose a randomly available port in this range to use as the
host port.

- `ssh_skip_nat_mapping` (boolean) - Defaults to false. When enabled, Packer
does not set up forwarded port mapping for SSH requests and uses `ssh_port`
on the host to communicate to the virtual machine.

- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
execute in order to further customize the virtual machine being created. The
value of this is an array of commands to execute. The commands are executed
in the order defined in the template. For each command, the command is
defined itself as an array of strings, where each string represents a single
argument on the command-line to `VBoxManage` (but excluding
`VBoxManage` itself). Each arg is treated as a [configuration
template](/docs/templates/configuration-templates.html), where the `Name`
variable is replaced with the VM name. More details on how to use
`VBoxManage` are below.

- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.

- `virtualbox_version_file` (string) - The path within the virtual machine to
upload a file that contains the VirtualBox version that was used to create
the machine. This information can be useful for provisioning. By default
this is ".vbox_version", which will generally be uploaded into the
home directory.

- `vm_name` (string) - This is the name of the OVF file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
## Boot Command
The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.

As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.

The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard. There are a set of
special keys available. If these are in your boot command, they will be replaced
by the proper key:
- `<bs>` - Backspace

- `<del>` - Delete

- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

- `<esc>` - Simulates pressing the escape key.

- `<tab>` - Simulates pressing the tab key.

- `<f1>` - `<f12>` - Simulates pressing a function key.

- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

- `<spacebar>` - Simulates pressing the spacebar.

- `<insert>` - Simulates pressing the insert key.

- `<home>` `<end>` - Simulates pressing the home and end keys.

- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
sending any additional keys. This is useful if you have to generally wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will be
blank!
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
``` {.text}
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
  ...
]
```
## Guest Additions
Packer will automatically download the proper guest additions for the version of
VirtualBox that is running and upload those guest additions into the virtual
machine so that provisioners can easily install them.

Packer downloads the guest additions from the official VirtualBox website, and
verifies the file with the official checksums released by VirtualBox.

After the virtual machine is up and the operating system is installed, Packer
uploads the guest additions into the virtual machine. The path where they are
uploaded is controllable by `guest_additions_path`, and defaults to
"VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the home
directory of the SSH user.
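
As a rough sketch of how the guest additions options described above fit
together, the following uploads a locally available additions ISO instead of
letting Packer download one; the local path and the `{{.Version}}` destination
name are illustrative assumptions, not defaults.

``` {.javascript}
{
  "type": "virtualbox-iso",
  "guest_additions_mode": "upload",
  "guest_additions_url": "file:///usr/share/virtualbox/VBoxGuestAdditions.iso",
  "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso"
}
```
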
## VBoxManage Commands
In order to perform extra customization of the virtual machine, a template can
define extra calls to `VBoxManage` to perform.
[VBoxManage](http://www.virtualbox.org/manual/ch08.html) is the command-line
interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.

Extra VBoxManage commands are defined in the template in the `vboxmanage`
section. An example is shown below that sets the memory and number of CPUs
within the virtual machine:
``` {.javascript}
{
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"],
  ["modifyvm", "{{.Name}}", "--cpus", "2"]
]
}
```
The value of `vboxmanage` is an array of commands to execute. These commands are
executed in the order defined. So in the above example, the memory will be set
followed by the CPUs.
Each command itself is an array of strings, where each string is an argument to
`VBoxManage`. Each argument is treated as a [configuration
template](/docs/templates/configuration-templates.html). The only available
variable is `Name` which is replaced with the unique name of the VM, which is
required for many VBoxManage calls.
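
A `vboxmanage_post` block uses the same command format. As a hedged example,
assuming your VirtualBox version supports the `modifyvm --description` flag,
you could stamp a description onto the VM after shutdown and before export:

``` {.javascript}
{
  "vboxmanage_post": [
    ["modifyvm", "{{.Name}}", "--description", "built by packer"]
  ]
}
```
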

---
description: |
    This VirtualBox Packer builder is able to create VirtualBox virtual machines
    and export them in the OVF format, starting from an existing OVF/OVA
    (exported virtual machine image).
layout: docs
page_title: 'VirtualBox Builder (from an OVF/OVA)'
...
# VirtualBox Builder (from an OVF/OVA)
Type: `virtualbox-ovf`
This VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org/) virtual machines and export them in
the OVF format, starting from an existing OVF/OVA (exported virtual machine
image).
When exporting from VirtualBox make sure to choose OVF Version 2, since Version
1 is not compatible and will generate errors like this:
```
==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR
==> virtualbox-ovf: VBoxManage: error: Appliance read failed
==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21
==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance
==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp
```
The builder builds a virtual machine by importing an existing OVF or OVA file.
It then boots this image, runs provisioners on this new VM, and exports that VM
to create the image. The imported machine is deleted prior to finishing the
build.
## Basic Example
Here is a basic example. This example is functional if you have an OVF matching
the settings here.
``` {.javascript}
{
"type": "virtualbox-ovf",
"source_path": "source.ovf",
  // ...
}
```
It is important to add a `shutdown_command`. By default Packer halts the virtual
machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
## Configuration Reference
There are many configuration options available for the VirtualBox builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `source_path` (string) - The path to an OVF or OVA file that acts as the
source of this build.

- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
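
For instance, a minimal `virtualbox-ovf` builder could point at the OVF
produced by an earlier `virtualbox-iso` build; the path below simply follows
that builder's default output naming and is illustrative only.

``` {.javascript}
{
  "type": "virtualbox-ovf",
  "source_path": "output-virtualbox-iso/packer-virtualbox-iso.ovf",
  "ssh_username": "packer"
}
```
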
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.

- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.

- `export_opts` (array of strings) - Additional options to pass to the
`VBoxManage export`. This can be useful for passing product information to
include in the resulting appliance file.

- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (*, ?, and [])
are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
* `format` (string) - Either "ovf" or "ova", this specifies the output
format of the exported virtual machine. This defaults to "ovf".
- `format` (string) - Either "ovf" or "ova", this specifies the output format
of the exported virtual machine. This defaults to "ovf".
- `guest_additions_mode` (string) - The method by which guest additions are
made available to the guest for installation. Valid options are "upload",
"attach", or "disable". If the mode is "attach" the guest additions ISO will
be attached as a CD device to the virtual machine. If the mode is "upload"
the guest additions ISO will be uploaded to the path specified by
`guest_additions_path`. The default value is "upload". If "disable" is used,
guest additions won't be downloaded, either.
- `guest_additions_path` (string) - The path on the guest virtual machine
where the VirtualBox guest additions ISO will be uploaded. By default this
is "VBoxGuestAdditions.iso" which should upload into the login directory of
the user. This is a [configuration
template](/docs/templates/configuration-templates.html) where the `Version`
variable is replaced with the VirtualBox version.
- `guest_additions_sha256` (string) - The SHA256 checksum of the guest
additions ISO that will be uploaded to the guest VM. By default the
checksums will be downloaded from the VirtualBox website, so this only needs
to be set if you want to be explicit about the checksum.
- `guest_additions_url` (string) - The URL to the guest additions ISO
to upload. This can also be a file URL if the ISO is at a local path. By
default the VirtualBox builder will go and download the proper guest
additions ISO from the internet.
- `headless` (boolean) - Packer defaults to building VirtualBox virtual
machines by launching a GUI that shows the console of the machine
being built. When this value is set to true, the machine will start without
a console.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
- `import_flags` (array of strings) - Additional flags to pass to
`VBoxManage import`. This can be used to add additional command-line flags
such as `--eula-accept` to accept a EULA in the OVF.
- `import_opts` (string) - Additional options to pass to the
`VBoxManage import`. This can be useful for passing "keepallmacs" or
"keepnatmacs" options for existing ovf images.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine. If a
  shutdown command is issued inside a provisioning script, this may safely be
  omitted. If one or more scripts require a reboot, it is suggested to leave
  this blank (since reboots may fail) and to specify the final shutdown command
  in your last script.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.
- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
maximum port to use for the SSH port on the host machine which is forwarded
to the SSH port on the guest machine. Because Packer often runs in parallel,
Packer will choose a randomly available port in this range to use as the
host port.
- `ssh_skip_nat_mapping` (boolean) - Defaults to false. When enabled, Packer
  does not set up a forwarded port mapping for SSH requests and uses `ssh_port`
  on the host to communicate with the virtual machine.
- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
execute in order to further customize the virtual machine being created. The
value of this is an array of commands to execute. The commands are executed
in the order defined in the template. For each command, the command is
defined itself as an array of strings, where each string represents a single
argument on the command-line to `VBoxManage` (but excluding
`VBoxManage` itself). Each arg is treated as a [configuration
template](/docs/templates/configuration-templates.html), where the `Name`
variable is replaced with the VM name. More details on how to use
`VBoxManage` are below.
- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `virtualbox_version_file` (string) - The path within the virtual machine to
  upload a file that contains the VirtualBox version that was used to create
  the machine. This information can be useful for provisioning. By default
  this is ".vbox_version", which will generally be uploaded into the
  home directory.
- `vm_name` (string) - This is the name of the virtual machine when it is
imported as well as the name of the OVF file when the virtual machine
is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
name of the build.
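To give a sense of how several of these optional settings fit together, here
is a minimal template sketch. It assumes the builder's required `source_path`
setting and uses placeholder values throughout; adjust them for your own
OVF/OVA source:

``` {.javascript}
{
  "builders": [
    {
      "type": "virtualbox-ovf",
      "source_path": "source.ova",
      "ssh_username": "packer",
      "headless": true,
      "format": "ova",
      "import_flags": ["--eula-accept"],
      "vm_name": "example-vm",
      "output_directory": "output-example",
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
    }
  ]
}
```

The `vboxmanage`, `vboxmanage_post`, and guest additions settings described in
the sections below can be added to the same builder block.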
## Guest Additions
Packer will automatically download the proper guest additions for the version of
VirtualBox that is running and upload those guest additions into the virtual
machine so that provisioners can easily install them.
Packer downloads the guest additions from the official VirtualBox website, and
verifies the file with the official checksums released by VirtualBox.
After the virtual machine is up and the operating system is installed, Packer
uploads the guest additions into the virtual machine. The path where they are
uploaded is controllable by `guest_additions_path`, and defaults to
"VBoxGuestAdditions.iso". Without an absolute path, it is uploaded to the home
directory of the SSH user.
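As a small illustration (the path below is a placeholder), a template can
combine `guest_additions_mode` and `guest_additions_path` to upload the ISO to
an absolute path, using the `Version` variable mentioned above:

``` {.javascript}
{
  "guest_additions_mode": "upload",
  "guest_additions_path": "/tmp/VBoxGuestAdditions_{{.Version}}.iso"
}
```

A provisioning script can then mount that ISO and install the additions.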
## VBoxManage Commands
In order to perform extra customization of the virtual machine, a template can
define extra calls to `VBoxManage` to perform.
[VBoxManage](http://www.virtualbox.org/manual/ch08.html) is the command-line
interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.
Extra VBoxManage commands are defined in the template in the `vboxmanage`
section. An example is shown below that sets the memory and number of CPUs
within the virtual machine:
``` {.javascript}
{
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"],
    ["modifyvm", "{{.Name}}", "--cpus", "2"]
  ]
}
```
The value of `vboxmanage` is an array of commands to execute. These commands are
executed in the order defined. So in the above example, the memory will be set
followed by the CPUs.
Each command itself is an array of strings, where each string is an argument to
`VBoxManage`. Each argument is treated as a [configuration
template](/docs/templates/configuration-templates.html). The only available
variable is `Name` which is replaced with the unique name of the VM, which is
required for many VBoxManage calls.
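The related `vboxmanage_post` option documented above uses the same format but
runs after the virtual machine is shut down and before it is exported. One
pattern this enables, sketched below with placeholder values, is building with
extra memory and dialing it back before export:

``` {.javascript}
{
  "vboxmanage": [
    ["modifyvm", "{{.Name}}", "--memory", "2048"]
  ],
  "vboxmanage_post": [
    ["modifyvm", "{{.Name}}", "--memory", "512"]
  ]
}
```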
---
layout: "docs"
page_title: "VirtualBox Builder"
description: |-
The VirtualBox Packer builder is able to create VirtualBox virtual machines and export them in the OVA or OVF format.
---
description: |
The VirtualBox Packer builder is able to create VirtualBox virtual machines and
export them in the OVA or OVF format.
layout: docs
page_title: VirtualBox Builder
...
# VirtualBox Builder
The VirtualBox Packer builder is able to create
[VirtualBox](http://www.virtualbox.org) virtual machines and export them in the
OVA or OVF format.
Packer actually comes with multiple builders able to create VirtualBox machines,
depending on the strategy you want to use to build the image. Packer supports
the following VirtualBox builders:
- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
file, creates a brand new VirtualBox VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports
an existing OVF/OVA file, runs provisioners on top of that VM, and exports
that machine to create an image. This is best if you have an existing
VirtualBox VM export you want to use as the source. As an additional
benefit, you can feed the artifact of this builder back into itself to
iterate on a machine.
---
layout: "docs"
page_title: "VMware Builder from ISO"
description: |-
This VMware Packer builder is able to create VMware virtual machines from an ISO file as a source. It currently supports building virtual machines on hosts running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and VMware Player on Linux. It can also build machines directly on VMware vSphere Hypervisor using SSH as opposed to the vSphere API.
---
description: |
This VMware Packer builder is able to create VMware virtual machines from an ISO
file as a source. It currently supports building virtual machines on hosts
running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and
VMware Player on Linux. It can also build machines directly on VMware vSphere
Hypervisor using SSH as opposed to the vSphere API.
layout: docs
page_title: VMware Builder from ISO
...
# VMware Builder (from ISO)
Type: `vmware-iso`
This VMware Packer builder is able to create VMware virtual machines from an ISO
file as a source. It currently supports building virtual machines on hosts
running [VMware Fusion](http://www.vmware.com/products/fusion/overview.html) for
OS X, [VMware
Workstation](http://www.vmware.com/products/workstation/overview.html) for Linux
and Windows, and [VMware Player](http://www.vmware.com/products/player/) on
Linux. It can also build machines directly on [VMware vSphere
Hypervisor](http://www.vmware.com/products/vsphere-hypervisor/) using SSH as
opposed to the vSphere API.
The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, provisioning software within the OS, then
shutting it down. The result of the VMware builder is a directory containing all
the files necessary to run the virtual machine.
## Basic Example
Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
``` {.javascript}
{
"type": "vmware-iso",
"iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso",
  "iso_checksum": "...",
  "iso_checksum_type": "md5",
  "ssh_username": "packer"
}
```
## Configuration Reference
There are many configuration options available for the VMware builder. They are
organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
files are so large, this is required and Packer will verify it prior to
booting a virtual machine with the ISO attached. The type of the checksum is
specified with `iso_checksum_type`, documented below.
- `iso_checksum_type` (string) - The type of the checksum specified in
`iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
"sha512" currently. While "none" will skip checksumming, this is not
recommended since ISO files are generally large and corruption does happen
from time to time.
- `iso_url` (string) - A URL to the ISO containing the installation image.
This URL can be either an HTTP URL or a file URL (or path to a file). If
this is an HTTP URL, Packer will download it and cache it between runs.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
### Optional:
- `disk_additional_size` (array of integers) - The size(s) of any additional
hard disks for the VM in megabytes. If this is not specified then the VM
will only contain a primary hard disk. The builder uses expandable, not
fixed-size virtual hard disks, so the actual file representing the disk will
not use the full size unless it is full.
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
- `disk_size` (integer) - The size of the hard disk for the VM in megabytes.
The builder uses expandable, not fixed-size virtual hard disks, so the
actual file representing the disk will not use the full size unless it
is full. By default this is set to 40,000 (about 40 GB).
- `disk_type_id` (string) - The type of VMware virtual disk to create. The
default is "1", which corresponds to a growable virtual disk split in
2GB files. This option is for advanced usage, modify only if you know what
you're doing. For more information, please consult the [Virtual Disk Manager
User's Guide](http://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop
VMware clients. For ESXi, refer to the proper ESXi documentation.
- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (\*, ?,
and \[\]) are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
"/Applications/VMware Fusion.app" but this setting allows you to
customize this.
- `guest_os_type` (string) - The guest OS type being installed. This will be
set in the VMware VMX. By default this is "other". By specifying a more
specific OS type, VMware may perform some optimizations or virtual hardware
changes to better support the operating system running in the
virtual machine.
- `headless` (boolean) - Packer defaults to building VMware virtual machines
by launching a GUI that shows the console of the machine being built. When
this value is set to true, the machine will start without a console. For
VMware machines, Packer will output VNC connection information in case you
need to connect to the console to debug the build process.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
Packer will try these in order. If anything goes wrong attempting to
download or while downloading a single URL, it will move on to the next. All
URLs must point to the same file (same checksum). By default this is empty
and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
- `remote_cache_datastore` (string) - The path to the datastore where
supporting files will be stored during the build on the remote machine. By
default this is the same as the `remote_datastore` option. This only has an
effect if `remote_type` is enabled.
- `remote_cache_directory` (string) - The path where the ISO and/or floppy
files will be stored during the build on the remote machine. The path is
relative to the `remote_cache_datastore` on the remote machine. By default
this is "packer\_cache". This only has an effect if `remote_type`
is enabled.
- `remote_datastore` (string) - The path to the datastore where the resulting
VM will be stored when it is built on the remote machine. By default this
is "datastore1". This only has an effect if `remote_type` is enabled.
- `remote_host` (string) - The host of the remote machine used for access.
This is only required if `remote_type` is enabled.
- `remote_password` (string) - The SSH password for the user used to access
the remote machine. By default this is empty. This only has an effect if
`remote_type` is enabled.
- `remote_type` (string) - The type of remote machine that will be used to
build this VM rather than a local desktop product. The only value accepted
for this currently is "esx5". If this is not set, a desktop product will
be used. By default, this is not set.
- `remote_username` (string) - The username for the SSH user that will access
the remote machine. This is required if `remote_type` is enabled.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.
- `skip_compaction` (boolean) - VMware-created disks are defragmented and
compacted at the end of the build process using `vmware-vdiskmanager`. In
certain rare cases, this might actually end up making the resulting disks
slightly larger. If you find this to be the case, you can disable compaction
using this configuration value.
- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to
upload into the VM. Valid values are "darwin", "linux", and "windows". By
default, this is empty, which means VMware tools won't be uploaded.
- `tools_upload_path` (string) - The path in the VM to upload the
VMware tools. This only takes effect if `tools_upload_flavor` is non-empty.
This is a [configuration
template](/docs/templates/configuration-templates.html) that has a single
valid variable: `Flavor`, which will be the value of `tools_upload_flavor`.
By default the upload path is set to `{{.Flavor}}.iso`. This setting is not
used when `remote_type` is "esx5".
- `version` (string) - The [vmx hardware
  version](http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746)
  for the new virtual machine. Only the default value has been tested; any
  other value is experimental. The default value is "9".
- `vm_name` (string) - This is the name of the VMX file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
- `vmdk_name` (string) - The filename of the virtual disk that'll be created,
without the extension. This defaults to "packer".
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
into the virtual machine VMX file. This is for advanced users who want to
set properties such as memory, CPU, etc.
- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `vmx_template_path` (string) - Path to a [configuration
template](/docs/templates/configuration-templates.html) that defines the
contents of the virtual machine VMX file for VMware. This is for **advanced
users only** as this can render the virtual machine non-functional. See
below for more information. For basic VMX modifications, try
`vmx_data` first.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
to use for VNC access to the virtual machine. The builder uses VNC to type
the initial `boot_command`. Because Packer generally runs in parallel,
Packer uses a randomly chosen port in this range that appears available. By
default this is 5900 to 6000. The minimum and maximum ports are inclusive.
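Pulling a few of the options above together, here is a sketch of how they
might appear in a `vmware-iso` builder block. The URL, checksum, and VMX
values are placeholders; the `vmx_data` keys shown (`memsize`, `numvcpus`) are
commonly used VMX settings rather than options defined by Packer itself:

``` {.javascript}
{
  "type": "vmware-iso",
  "iso_url": "http://example.com/ubuntu-12.04.2-server-amd64.iso",
  "iso_checksum_type": "sha256",
  "iso_checksum": "...",
  "ssh_username": "packer",
  "disk_size": 20000,
  "guest_os_type": "ubuntu-64",
  "headless": true,
  "http_directory": "http",
  "tools_upload_flavor": "linux",
  "tools_upload_path": "/tmp/{{.Flavor}}.iso",
  "vmx_data": {
    "memsize": "1024",
    "numvcpus": "2"
  }
}
```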
## Boot Command
The `boot_command` configuration is very important: it specifies the keys to
type when the virtual machine is first booted in order to start the OS
installer. This command is typed after `boot_wait`, which gives the virtual
machine some time to actually load the ISO.
As documented above, the `boot_command` is an array of strings. The strings are
all typed in sequence. It is an array only to improve readability within the
template.
The boot command is "typed" character for character over a VNC connection
to the machine, simulating a human actually typing the keyboard. There are
a set of special keys available. If these are in your boot command, they
will be replaced by the proper key:
The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard. There are a set of
special keys available. If these are in your boot command, they will be replaced
by the proper key:
- `<bs>` - Backspace
- `<del>` - Delete
- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
- `<esc>` - Simulates pressing the escape key.
- `<tab>` - Simulates pressing the tab key.
- `<f1>` - `<f12>` - Simulates pressing a function key.
- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
- `<spacebar>` - Simulates pressing the spacebar.
- `<insert>` - Simulates pressing the insert key.
- `<home>` `<end>` - Simulates pressing the home and end keys.
- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
  sending any additional keys. This is useful if you need to wait for the UI
  to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
available variables are:
- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will be
blank!
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
``` {.text}
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
  ...
]
```
## VMX Template
The heart of a VMware machine is the "vmx" file. This contains all the
virtual hardware metadata necessary for the VM to function. Packer by default
uses a [safe, flexible VMX file](https://github.com/mitchellh/packer/blob/20541a7eda085aa5cf35bfed5069592ca49d106e/builder/vmware/step_create_vmx.go#L84).
But for advanced users, this template can be customized. This allows
Packer to build virtual machines of effectively any guest operating system
type.
The heart of a VMware machine is the "vmx" file. This contains all the virtual
hardware metadata necessary for the VM to function. Packer by default uses a
[safe, flexible VMX
file](https://github.com/mitchellh/packer/blob/20541a7eda085aa5cf35bfed5069592ca49d106e/builder/vmware/step_create_vmx.go#L84).
But for advanced users, this template can be customized. This allows Packer to
build virtual machines of effectively any guest operating system type.
~> **This is an advanced feature.** Modifying the VMX template
can easily cause your virtual machine to not boot properly. Please only
modify the template if you know what you're doing.
Within the template, a handful of variables are available so that your template
can continue working with the rest of the Packer machinery. Using these
variables isn't required, however.
- `Name` - The name of the virtual machine.
- `GuestOS` - The VMware-valid guest OS type.
- `DiskName` - The filename (without the suffix) of the main virtual disk.
- `ISOPath` - The path to the ISO to use for the OS installation.
- `Version` - The hardware version VMware will execute this VM under. Also
  known as `virtualhw.version`.
## Building on a Remote vSphere Hypervisor
In addition to using the desktop products of VMware locally to build virtual
machines, Packer can use a remote VMware Hypervisor to build the virtual
machine.
-> **Note:** Packer supports ESXi 5.1 and above.
Before using a remote vSphere Hypervisor, you need to enable GuestIPHack by
running the following command:
``` {.text}
esxcli system settings advanced set -o /Net/GuestIPHack -i 1
```
When using a remote VMware Hypervisor, the builder still downloads the ISO and
various files locally, and uploads these to the remote machine. Packer currently
uses SSH to communicate to the ESXi machine rather than the vSphere API. At some
point, the vSphere API may be used.
Packer also requires VNC to issue boot commands during a build, which may be
disabled on some remote VMware Hypervisors. Please consult the appropriate
documentation on how to update VMware Hypervisor's firewall to allow these
connections.
To use a remote VMware vSphere Hypervisor to build your virtual machine, fill in
the required `remote_*` configurations:
- `remote_type` - This must be set to "esx5".
- `remote_host` - The host of the remote machine.
Additionally, there are some optional configurations that you'll likely have to
modify as well:
- `remote_port` - The SSH port of the remote machine
- `remote_datastore` - The path to the datastore where the VM will be stored
on the ESXi machine.
- `remote_cache_datastore` - The path to the datastore where supporting files
will be stored during the build on the remote machine.
- `remote_cache_directory` - The path where the ISO and/or floppy files will
be stored during the build on the remote machine. The path is relative to
the `remote_cache_datastore` on the remote machine.
- `remote_username` - The SSH username used to access the remote machine.
- `remote_password` - The SSH password for access to the remote machine.
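Putting the remote settings together, a build that targets an ESXi host
directly might add something like the following to the builder block (the
host, credentials, and datastore names are placeholders, and the
`esxi_password` user variable is assumed to be defined elsewhere in the
template):

``` {.javascript}
{
  "type": "vmware-iso",
  "remote_type": "esx5",
  "remote_host": "esxi.example.com",
  "remote_username": "root",
  "remote_password": "{{user `esxi_password`}}",
  "remote_datastore": "datastore1",
  "remote_cache_datastore": "datastore1",
  "remote_cache_directory": "packer_cache"
}
```

As noted above, VNC must also be reachable on the hypervisor so that the
`boot_command` can be typed.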
---
layout: "docs"
page_title: "VMware Builder from VMX"
description: |-
This VMware Packer builder is able to create VMware virtual machines from an existing VMware virtual machine (a VMX file). It currently supports building virtual machines on hosts running VMware Fusion Professional for OS X, VMware Workstation for Linux and Windows, and VMware Player on Linux.
---
description: |
This VMware Packer builder is able to create VMware virtual machines from an
existing VMware virtual machine (a VMX file). It currently supports building
virtual machines on hosts running VMware Fusion Professional for OS X, VMware
Workstation for Linux and Windows, and VMware Player on Linux.
layout: docs
page_title: VMware Builder from VMX
...
# VMware Builder (from VMX)
Type: `vmware-vmx`
This VMware Packer builder is able to create VMware virtual machines from an
existing VMware virtual machine (a VMX file). It currently supports building
virtual machines on hosts running [VMware Fusion
Professional](http://www.vmware.com/products/fusion-professional/) for OS X,
[VMware Workstation](http://www.vmware.com/products/workstation/overview.html)
for Linux and Windows, and [VMware
Player](http://www.vmware.com/products/player/) on Linux.
The builder builds a virtual machine by cloning the VMX file using the clone
capabilities introduced in VMware Fusion Professional 6, Workstation 10, and
Player 6. After cloning the VM, it provisions software within the new machine,
shuts it down, and compacts the disks. The resulting folder contains a new
VMware virtual machine.
## Basic Example
Here is an example. This example is fully functional as long as the source path
points to a real VMX file with the proper settings:
``` {.javascript}
{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
  "ssh_username": "root"
}
```
## Configuration Reference
There are many configuration options available for the VMware builder. They are
organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `source_path` (string) - Path to the source VMX file to clone.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
* `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful
for unattended Windows installs, which look for an `Autounattend.xml` file
on removable media. By default, no floppy will be attached. All files
listed in this setting get placed into the root directory of the floppy
and the floppy is attached as the first floppy device. Currently, no
support exists for creating sub-directories on the floppy. Wildcard
characters (*, ?, and []) are allowed. Directory names are also allowed,
which will add all the files found in the directory to the floppy.
- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (\*, ?,
and \[\]) are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
* `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this
is "/Applications/VMware Fusion.app" but this setting allows you to
customize this.
- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
"/Applications/VMware Fusion.app" but this setting allows you to
customize this.
* `headless` (boolean) - Packer defaults to building VMware
virtual machines by launching a GUI that shows the console of the
machine being built. When this value is set to true, the machine will
start without a console. For VMware machines, Packer will output VNC
connection information in case you need to connect to the console to
debug the build process.
- `headless` (boolean) - Packer defaults to building VMware virtual machines
by launching a GUI that shows the console of the machine being built. When
this value is set to true, the machine will start without a console. For
VMware machines, Packer will output VNC connection information in case you
need to connect to the console to debug the build process.
* `http_directory` (string) - Path to a directory to serve using an HTTP
server. The files in this directory will be available over HTTP that will
be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP
server will be started. The address and port of the HTTP server will be
available as variables in `boot_command`. This is covered in more detail
below.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
* `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`.
Because Packer often runs in parallel, Packer will choose a randomly available
port in this range to run the HTTP server. If you want to force the HTTP
server to be on one port, make this minimum and maximum port the same.
By default the values are 8000 and 9000, respectively.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
* `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running the builder.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
If relative, the path is relative to the working directory when `packer`
is executed. This directory must not exist or be empty prior to running
the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the
name of the build.
* `shutdown_command` (string) - The command to use to gracefully shut down the machine once all
  the provisioning is done. By default this is an empty string, which tells Packer to just
  forcefully shut down the machine; if a shutdown command is issued inside one of your scripts,
  this option may safely be omitted. If one or more scripts require a reboot, it is suggested to
  leave this blank (since reboots may fail) and to specify the final shutdown command in your last script.
- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine; if a
  shutdown command is issued inside one of your scripts, this option may
  safely be omitted. If one or more scripts require a reboot, it is suggested
  to leave this blank (since reboots may fail) and to specify the final
  shutdown command in your last script.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.
If it doesn't shut down in this time, it is an error. By default, the timeout
is "5m", or five minutes.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
doesn't shut down in this time, it is an error. By default, the timeout is
"5m", or five minutes.
* `skip_compaction` (boolean) - VMware-created disks are defragmented
and compacted at the end of the build process using `vmware-vdiskmanager`.
In certain rare cases, this might actually end up making the resulting disks
slightly larger. If you find this to be the case, you can disable compaction
using this configuration value.
- `skip_compaction` (boolean) - VMware-created disks are defragmented and
compacted at the end of the build process using `vmware-vdiskmanager`. In
certain rare cases, this might actually end up making the resulting disks
slightly larger. If you find this to be the case, you can disable compaction
using this configuration value.
* `vm_name` (string) - This is the name of the VMX file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
- `vm_name` (string) - This is the name of the VMX file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
* `vmx_data` (object of key/value strings) - Arbitrary key/values
to enter into the virtual machine VMX file. This is for advanced users
who want to set properties such as memory, CPU, etc.
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
  into the virtual machine VMX file. This is for advanced users who want to
  set properties such as memory, CPU, etc. An example template that uses this
  option is shown after this list.
* `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
* `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port to
use for VNC access to the virtual machine. The builder uses VNC to type
the initial `boot_command`. Because Packer generally runs in parallel, Packer
uses a randomly chosen port in this range that appears available. By default
this is 5900 to 6000. The minimum and maximum ports are inclusive.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
to use for VNC access to the virtual machine. The builder uses VNC to type
the initial `boot_command`. Because Packer generally runs in parallel,
Packer uses a randomly chosen port in this range that appears available. By
default this is 5900 to 6000. The minimum and maximum ports are inclusive.
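For reference, here is a sketch of a `vmware-vmx` builder block that combines
several of the optional settings above. The paths, username, shutdown command,
and VMX keys are illustrative values rather than defaults:

``` {.javascript}
{
  "type": "vmware-vmx",
  "source_path": "/path/to/a/vm.vmx",
  "ssh_username": "packer",
  "headless": true,
  "output_directory": "output-vmx",
  "shutdown_command": "sudo shutdown -P now",
  "vmx_data": {
    "memsize": "1024",
    "numvcpus": "2"
  }
}
```

As with the basic example, communicator options may be added alongside these;
see the communicator documentation linked above.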

@ -1,27 +1,28 @@
---
layout: "docs"
page_title: "VMware Builder"
description: |-
The VMware Packer builder is able to create VMware virtual machines for use with any VMware product.
---
description: |
The VMware Packer builder is able to create VMware virtual machines for use with
any VMware product.
layout: docs
page_title: VMware Builder
...
# VMware Builder
The VMware Packer builder is able to create VMware virtual machines for use
with any VMware product.
The VMware Packer builder is able to create VMware virtual machines for use with
any VMware product.
Packer actually comes with multiple builders able to create VMware
machines, depending on the strategy you want to use to build the image.
Packer supports the following VMware builders:
Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:
* [vmware-iso](/docs/builders/vmware-iso.html) - Starts from
an ISO file, creates a brand new VMware VM, installs an OS,
provisions software within the OS, then exports that machine to create
an image. This is best for people who want to start from scratch.
- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
creates a brand new VMware VM, installs an OS, provisions software within
the OS, then exports that machine to create an image. This is best for
people who want to start from scratch.
* [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder
imports an existing VMware machine (from a VMX file), runs provisioners
on top of that VM, and exports that machine to create an image.
This is best if you have an existing VMware VM you want to use as the
source. As an additional benefit, you can feed the artifact of this
builder back into Packer to iterate on a machine.
- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
existing VMware machine (from a VMX file), runs provisioners on top of that
VM, and exports that machine to create an image. This is best if you have an
existing VMware VM you want to use as the source. As an additional benefit,
you can feed the artifact of this builder back into Packer to iterate on
a machine.

@ -1,37 +1,42 @@
---
layout: "docs"
page_title: "Build - Command-Line"
description: |-
The `packer build` Packer command takes a template and runs all the builds within it in order to generate a set of artifacts. The various builds specified within a template are executed in parallel, unless otherwise specified. And the artifacts that are created will be outputted at the end of the build.
---
description: |
The `packer build` Packer command takes a template and runs all the builds
within it in order to generate a set of artifacts. The various builds specified
within a template are executed in parallel, unless otherwise specified. And the
artifacts that are created will be outputted at the end of the build.
layout: docs
page_title: 'Build - Command-Line'
...
# Command-Line: Build
The `packer build` Packer command takes a template and runs all the builds within
it in order to generate a set of artifacts. The various builds specified within
a template are executed in parallel, unless otherwise specified. And the
The `packer build` Packer command takes a template and runs all the builds
within it in order to generate a set of artifacts. The various builds specified
within a template are executed in parallel, unless otherwise specified. And the
artifacts that are created will be outputted at the end of the build.
## Options
* `-color=false` - Disables colorized output. Enabled by default.
- `-color=false` - Disables colorized output. Enabled by default.
* `-debug` - Disables parallelization and enables debug mode. Debug mode flags
the builders that they should output debugging information. The exact behavior
of debug mode is left to the builder. In general, builders usually will stop
between each step, waiting for keyboard input before continuing. This will allow
the user to inspect state and so on.
- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
the builders that they should output debugging information. The exact
behavior of debug mode is left to the builder. In general, builders usually
will stop between each step, waiting for keyboard input before continuing.
This will allow the user to inspect state and so on.
* `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their builders,
unless a specific `name` attribute is specified within the configuration.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their
builders, unless a specific `name` attribute is specified within
the configuration.
* `-force` - Forces a builder to run when artifacts from a previous build prevent
a build from running. The exact behavior of a forced build is left to the builder.
In general, a builder supporting the forced build will remove the artifacts from
the previous build. This will allow the user to repeat a build without having to
manually clean these artifacts beforehand.
- `-force` - Forces a builder to run when artifacts from a previous build
prevent a build from running. The exact behavior of a forced build is left
to the builder. In general, a builder supporting the forced build will
remove the artifacts from the previous build. This will allow the user to
repeat a build without having to manually clean these artifacts beforehand.
* `-only=foo,bar,baz` - Only build the builds with the given comma-separated
names. Build names by default are the names of their builders, unless a
specific `name` attribute is specified within the configuration.
- `-only=foo,bar,baz` - Only build the builds with the given
  comma-separated names. Build names by default are the names of their
  builders, unless a specific `name` attribute is specified within
  the configuration. An example invocation is shown below.
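As a quick illustration, the flags above are combined on a single command
line. The build name shown is hypothetical and depends on the builders defined
in your template:

``` {.text}
$ packer build -only=virtualbox-iso -debug template.json
```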

@ -1,33 +1,34 @@
---
layout: "docs"
page_title: "Fix - Command-Line"
description: |-
The `packer fix` Packer command takes a template and finds backwards incompatible parts of it and brings it up to date so it can be used with the latest version of Packer. After you update to a new Packer release, you should run the fix command to make sure your templates work with the new release.
---
description: |
The `packer fix` Packer command takes a template and finds backwards
incompatible parts of it and brings it up to date so it can be used with the
latest version of Packer. After you update to a new Packer release, you should
run the fix command to make sure your templates work with the new release.
layout: docs
page_title: 'Fix - Command-Line'
...
# Command-Line: Fix
The `packer fix` Packer command takes a template and finds backwards incompatible
parts of it and brings it up to date so it can be used with the latest version
of Packer. After you update to a new Packer release, you should run the
fix command to make sure your templates work with the new release.
The `packer fix` Packer command takes a template and finds backwards
incompatible parts of it and brings it up to date so it can be used with the
latest version of Packer. After you update to a new Packer release, you should
run the fix command to make sure your templates work with the new release.
The fix command will output the changed template to standard out, so you
should redirect standard output using standard OS-specific techniques if you want to
save it to a file. For example, on Linux systems, you may want to do this:
The fix command will output the changed template to standard out, so you should
redirect standard output using standard OS-specific techniques if you want to save it
to a file. For example, on Linux systems, you may want to do this:
```
$ packer fix old.json > new.json
```
``` {.text}
$ packer fix old.json > new.json
```
If fixing fails for any reason, the fix command will exit with a non-zero
exit status. Error messages appear on standard error, so if you're redirecting
If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
output, you'll still see error messages.
-> **Even when Packer fix doesn't do anything** to the template,
the template will be outputted to standard out. Things such as configuration
key ordering and indentation may be changed. The output format however, is
pretty-printed for human readability.
-> **Even when Packer fix doesn't do anything** to the template, the template
will be outputted to standard out. Things such as configuration key ordering and
indentation may be changed. The output format however, is pretty-printed for
human readability.
The full list of fixes that the fix command performs is visible in the
help output, which can be seen via `packer fix -h`.
The full list of fixes that the fix command performs is visible in the help
output, which can be seen via `packer fix -h`.

@ -1,33 +1,35 @@
---
layout: "docs"
page_title: "Inspect - Command-Line"
description: |-
The `packer inspect` Packer command takes a template and outputs the various components a template defines. This can help you quickly learn about a template without having to dive into the JSON itself. The command will tell you things like what variables a template accepts, the builders it defines, the provisioners it defines and the order they'll run, and more.
---
description: |
The `packer inspect` Packer command takes a template and outputs the various
components a template defines. This can help you quickly learn about a template
without having to dive into the JSON itself. The command will tell you things
like what variables a template accepts, the builders it defines, the
provisioners it defines and the order they'll run, and more.
layout: docs
page_title: 'Inspect - Command-Line'
...
# Command-Line: Inspect
The `packer inspect` Packer command takes a template and outputs the various components
a template defines. This can help you quickly learn about a template without
having to dive into the JSON itself.
The command will tell you things like what variables a template accepts,
the builders it defines, the provisioners it defines and the order they'll
run, and more.
The `packer inspect` Packer command takes a template and outputs the various
components a template defines. This can help you quickly learn about a template
without having to dive into the JSON itself. The command will tell you things
like what variables a template accepts, the builders it defines, the
provisioners it defines and the order they'll run, and more.
This command is extra useful when used with
[machine-readable output](/docs/command-line/machine-readable.html) enabled.
The command outputs the components in a way that is parseable by machines.
This command is extra useful when used with [machine-readable
output](/docs/command-line/machine-readable.html) enabled. The command outputs
the components in a way that is parseable by machines.
The command doesn't validate the actual configuration of the various
components (that is what the `validate` command is for), but it will
validate the syntax of your template by necessity.
The command doesn't validate the actual configuration of the various components
(that is what the `validate` command is for), but it will validate the syntax of
your template by necessity.
## Usage Example
Given a basic template, here is an example of what the output might
look like:
Given a basic template, here is an example of what the output might look like:
```text
``` {.text}
$ packer inspect template.json
Variables and their defaults:

@ -1,24 +1,27 @@
---
layout: "docs"
page_title: "Packer Command-Line"
description: |-
Packer is controlled using a command-line interface. All interaction with Packer is done via the `packer` tool. Like many other command-line tools, the `packer` tool takes a subcommand to execute, and that subcommand may have additional options as well. Subcommands are executed with `packer SUBCOMMAND`, where "SUBCOMMAND" is obviously the actual command you wish to execute.
---
description: |
Packer is controlled using a command-line interface. All interaction with Packer
is done via the `packer` tool. Like many other command-line tools, the `packer`
tool takes a subcommand to execute, and that subcommand may have additional
options as well. Subcommands are executed with `packer SUBCOMMAND`, where
"SUBCOMMAND" is obviously the actual command you wish to execute.
layout: docs
page_title: 'Packer Command-Line'
...
# Packer Command-Line
Packer is controlled using a command-line interface. All interaction with
Packer is done via the `packer` tool. Like many other command-line tools,
the `packer` tool takes a subcommand to execute, and that subcommand may
have additional options as well. Subcommands are executed with
`packer SUBCOMMAND`, where "SUBCOMMAND" is obviously the actual command you wish
to execute.
Packer is controlled using a command-line interface. All interaction with Packer
is done via the `packer` tool. Like many other command-line tools, the `packer`
tool takes a subcommand to execute, and that subcommand may have additional
options as well. Subcommands are executed with `packer SUBCOMMAND`, where
"SUBCOMMAND" is obviously the actual command you wish to execute.
If you run `packer` by itself, help will be displayed showing all available
subcommands and a brief synopsis of what they do. In addition to this, you can
run any `packer` command with the `-h` flag to output more detailed help for
a specific subcommand.
run any `packer` command with the `-h` flag to output more detailed help for a
specific subcommand.
In addition to the documentation available on the command-line, each command
is documented on this website. You can find the documentation for a specific
In addition to the documentation available on the command-line, each command is
documented on this website. You can find the documentation for a specific
subcommand using the navigation to the left.

@ -1,30 +1,33 @@
---
layout: "docs"
page_title: "Machine-Readable Output - Command-Line"
description: |-
By default, the output of Packer is very human-readable. It uses nice formatting, spacing, and colors in order to make Packer a pleasure to use. However, Packer was built with automation in mind. To that end, Packer supports a fully machine-readable output setting, allowing you to use Packer in automated environments.
---
description: |
By default, the output of Packer is very human-readable. It uses nice
formatting, spacing, and colors in order to make Packer a pleasure to use.
However, Packer was built with automation in mind. To that end, Packer supports
a fully machine-readable output setting, allowing you to use Packer in automated
environments.
layout: docs
page_title: 'Machine-Readable Output - Command-Line'
...
# Machine-Readable Output
By default, the output of Packer is very human-readable. It uses nice
formatting, spacing, and colors in order to make Packer a pleasure to use.
However, Packer was built with automation in mind. To that end, Packer
supports a fully machine-readable output setting, allowing you to use
Packer in automated environments.
However, Packer was built with automation in mind. To that end, Packer supports
a fully machine-readable output setting, allowing you to use Packer in automated
environments.
The machine-readable output format is easy to use and read and was made
with Unix tools in mind, so it is awk/sed/grep/etc. friendly.
The machine-readable output format is easy to use and read and was made with
Unix tools in mind, so it is awk/sed/grep/etc. friendly.
## Enabling
The machine-readable output format can be enabled by passing the
`-machine-readable` flag to any Packer command. This immediately enables
all output to become machine-readable on stdout. Logging, if enabled,
continues to appear on stderr. An example of the output is shown
below:
`-machine-readable` flag to any Packer command. This immediately enables all
output to become machine-readable on stdout. Logging, if enabled, continues to
appear on stderr. An example of the output is shown below:
```text
``` {.text}
$ packer -machine-readable version
1376289459,,version,0.2.4
1376289459,,version-prerelease,
@ -32,54 +35,52 @@ $ packer -machine-readable version
1376289459,,ui,say,Packer v0.2.4.dev (eed6ece+CHANGES)
```
The format will be covered in more detail later. But as you can see,
the output immediately becomes machine-friendly. Try some other commands
with the `-machine-readable` flag to see!
The format will be covered in more detail later. But as you can see, the output
immediately becomes machine-friendly. Try some other commands with the
`-machine-readable` flag to see!
## Format
The machine readable format is a line-oriented, comma-delimited text
format. This makes it extremely easy to parse using standard Unix tools such
as awk or grep in addition to full programming languages like Ruby or
Python.
The machine readable format is a line-oriented, comma-delimited text format.
This makes it extremely easy to parse using standard Unix tools such as awk or
grep in addition to full programming languages like Ruby or Python.
The format is:
```text
``` {.text}
timestamp,target,type,data...
```
Each component is explained below:
* **timestamp** is a Unix timestamp in UTC of when the message was
printed.
- **timestamp** is a Unix timestamp in UTC of when the message was printed.
* **target** is the target of the following output. This is empty if
the message is related to Packer globally. Otherwise, this is generally
a build name so you can relate output to a specific build while parallel
builds are running.
- **target** is the target of the following output. This is empty if the
message is related to Packer globally. Otherwise, this is generally a build
name so you can relate output to a specific build while parallel builds
are running.
* **type** is the type of machine-readable message being outputted. There
are a set of standard types which are covered later, but each component
of Packer (builders, provisioners, etc.) may output their own custom types
as well, allowing the machine-readable output to be infinitely flexible.
- **type** is the type of machine-readable message being outputted. There are
a set of standard types which are covered later, but each component of
Packer (builders, provisioners, etc.) may output their own custom types as
well, allowing the machine-readable output to be infinitely flexible.
* **data** is zero or more comma-separated values associated with the prior
  type. The exact amount and meaning of this data is type-dependent, so you
  must read the documentation associated with the type to understand fully.
- **data** is zero or more comma-separated values associated with the
  prior type. The exact amount and meaning of this data is type-dependent, so
  you must read the documentation associated with the type to
  understand fully.
Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as
`\'` because it is more friendly to tools like awk.
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
because it is more friendly to tools like awk.
Newlines within the format are replaced with their respective standard
escape sequence. Newlines become a literal `\n` within the output. Carriage
returns become a literal `\r`.
Newlines within the format are replaced with their respective standard escape
sequence. Newlines become a literal `\n` within the output. Carriage returns
become a literal `\r`.
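Because the format is plain comma-delimited text, it can be sliced directly
with standard Unix tools. For example, the following sketch (the template name
is a placeholder) prints only the `ui` message text from a build by selecting
rows whose third field is `ui`:

``` {.text}
$ packer -machine-readable build template.json | awk -F, '$3 == "ui" {print $5}'
```

Keep in mind that commas inside the data fields will appear as
`%!(PACKER_COMMA)`, as described above.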
## Message Types
The set of machine-readable message types can be found in the
[machine-readable format](/docs/machine-readable/index.html)
complete documentation section. This section contains documentation
on all the message types exposed by Packer core as well as all the
components that ship with Packer by default.
The set of machine-readable message types can be found in the [machine-readable
format](/docs/machine-readable/index.html) complete documentation section. This
section contains documentation on all the message types exposed by Packer core
as well as all the components that ship with Packer by default.

@ -1,51 +1,98 @@
---
layout: "docs"
page_title: "Push - Command-Line"
description: |-
The `packer push` Packer command takes a template and pushes it to a build service that will automatically build this Packer template.
---
description: |
The `packer push` command uploads a template and other required files to the
Atlas build service, which will run your packer build for you.
layout: docs
page_title: 'Push - Command-Line'
...
# Command-Line: Push
The `packer push` Packer command takes a template and pushes it to a Packer
build service such as [HashiCorp's Atlas](https://atlas.hashicorp.com). The
build service will automatically build your Packer template and expose the
artifacts.
The `packer push` command uploads a template and other required files to the
Atlas service, which will run your packer build for you. [Learn more about
Packer in Atlas.](https://atlas.hashicorp.com/help/packer/features)
External build services such as HashiCorp's Atlas make it easy to iterate on
Packer templates, especially when the builder you are running may not be easily
accessible (such as developing `qemu` builders on Mac or Windows).
Running builds remotely makes it easier to iterate on packer builds that are not
supported on your operating system, for example, building docker or QEMU while
developing on Mac or Windows. Also, the hard work of building VMs is offloaded
to dedicated servers with more CPU, memory, and network resources.
!> The Packer build service will receive the raw copy of your Packer template
when you push. **If you have sensitive data in your Packer template, you should
move that data into Packer variables or environment variables!**
When you use push to run a build in Atlas, you may also want to store your build
artifacts in Atlas. In order to do that you will also need to configure the
[Atlas post-processor](/docs/post-processors/atlas.html). This is optional, and
both the post-processor and push commands can be used independently.
For the `push` command to work, the [push configuration](/docs/templates/push.html)
must be completed within the template.
!> The push command uploads your template and other files, like provisioning
scripts, to Atlas. Take care not to upload files that you don't intend to, like
secrets or large binaries. **If you have secrets in your Packer template, you
should [move them into environment
variables](https://packer.io/docs/templates/user-variables.html).**
Most push behavior is [configured in your packer
template](/docs/templates/push.html). You can override or supplement your
configuration using the options below.
## Options
* `-message` - A message to identify the purpose or changes in this Packer
template much like a VCS commit message. This message will be passed to the
Packer build service. This option is also available as a short option `-m`.
- `-message` - A message to identify the purpose or changes in this Packer
template much like a VCS commit message. This message will be passed to the
Packer build service. This option is also available as a short option `-m`.
* `-token` - An access token for authenticating the push to the Packer build
service such as Atlas. This can also be specified within the push
configuration in the template.
- `-token` - Your access token for the Atlas API.
* `-name` - The name of the build in the service. This typically
looks like `hashicorp/precise64`.
-> Log in to Atlas to [generate an Atlas
Token](https://atlas.hashicorp.com/settings/tokens). The most convenient way to
configure your token is to set it to the `ATLAS_TOKEN` environment variable, but
you can also use `-token` on the command line.
- `-name` - The name of the build in the service. This typically looks like
`hashicorp/precise64`, which follows the form `<username>/<buildname>`. This
must be specified here or in your template.
- `-var` - Set a variable in your packer template. This option can be used
multiple times. This is useful for setting version numbers for your build.
- `-var-file` - Set template variables from a file.
## Examples
Push a Packer template:
```shell
``` {.shell}
$ packer push -m "Updating the apache version" template.json
```
Push a Packer template with a custom token:
```shell
``` {.shell}
$ packer push -token ABCD1234 template.json
```
## Limits
`push` is limited to a 5GB upload when pushing to Atlas. To be clear, Packer *can*
build artifacts larger than 5GB, and Atlas *can* store artifacts larger than
5GB. However, the initial payload you push to *start* the build cannot exceed
5GB. If your boot ISO is larger than 5GB (for example if you are building OS X
images), you will need to host your boot ISO on an external web service and
download it during the Packer run.
## Building Private `.iso` and `.dmg` Files
If you want to build a private `.iso` file you can upload the `.iso` to a secure
file hosting service like [Amazon
S3](http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html),
[Google Cloud
Storage](https://cloud.google.com/storage/docs/gsutil/commands/signurl), or
[Azure File
Service](https://msdn.microsoft.com/en-us/library/azure/dn194274.aspx) and
download it at build time using a signed URL. You should convert `.dmg` files to
`.iso` and follow a similar procedure.
Once you have added [variables in your packer
template](/docs/templates/user-variables.html) you can specify credentials or
signed URLs using Atlas environment variables, or via the `-var` flag when you
run `push`.
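For example, assuming your template defines a user variable named `iso_url`
(the variable name here is illustrative), you could pass the signed URL at
push time:

``` {.shell}
$ packer push -var 'iso_url=https://example.com/your-signed-url' template.json
```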
![Configure your signed URL in the Atlas build variables
menu](/assets/images/packer-signed-urls.png)

@ -1,20 +1,24 @@
---
layout: "docs"
page_title: "Validate - Command-Line"
description: |-
The `packer validate` Packer command is used to validate the syntax and configuration of a template. The command will return a zero exit status on success, and a non-zero exit status on failure. Additionally, if a template doesn't validate, any error messages will be outputted.
---
description: |
The `packer validate` Packer command is used to validate the syntax and
configuration of a template. The command will return a zero exit status on
success, and a non-zero exit status on failure. Additionally, if a template
doesn't validate, any error messages will be outputted.
layout: docs
page_title: 'Validate - Command-Line'
...
# Command-Line: Validate
The `packer validate` Packer command is used to validate the syntax and configuration
of a [template](/docs/templates/introduction.html). The command will return
a zero exit status on success, and a non-zero exit status on failure. Additionally,
if a template doesn't validate, any error messages will be outputted.
The `packer validate` Packer command is used to validate the syntax and
configuration of a [template](/docs/templates/introduction.html). The command
will return a zero exit status on success, and a non-zero exit status on
failure. Additionally, if a template doesn't validate, any error messages will
be outputted.
Example usage:
```text
``` {.text}
$ packer validate my-template.json
Template validation failed. Errors are shown below.
@ -25,5 +29,5 @@ Errors validating build 'vmware'. 1 error(s) occurred:
## Options
* `-syntax-only` - Only the syntax of the template is checked. The configuration
is not validated.
- `-syntax-only` - Only the syntax of the template is checked. The
  configuration is not validated. An example is shown below.
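For example, to check only the syntax of a template without validating its
configuration:

``` {.text}
$ packer validate -syntax-only my-template.json
```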

@ -1,167 +1,170 @@
---
layout: "docs"
page_title: "Custom Builder - Extend Packer"
description: |-
Packer Builders are the components of Packer responsible for creating a machine, bringing it to a point where it can be provisioned, and then turning that provisioned machine into some sort of machine image. Several builders are officially distributed with Packer itself, such as the AMI builder, the VMware builder, etc. However, it is possible to write custom builders using the Packer plugin interface, and this page documents how to do that.
---
description: |
Packer Builders are the components of Packer responsible for creating a machine,
bringing it to a point where it can be provisioned, and then turning that
provisioned machine into some sort of machine image. Several builders are
officially distributed with Packer itself, such as the AMI builder, the VMware
builder, etc. However, it is possible to write custom builders using the Packer
plugin interface, and this page documents how to do that.
layout: docs
page_title: 'Custom Builder - Extend Packer'
...
# Custom Builder Development
Packer Builders are the components of Packer responsible for creating a machine,
bringing it to a point where it can be provisioned, and then turning
that provisioned machine into some sort of machine image. Several builders
are officially distributed with Packer itself, such as the AMI builder, the
VMware builder, etc. However, it is possible to write custom builders using
the Packer plugin interface, and this page documents how to do that.
bringing it to a point where it can be provisioned, and then turning that
provisioned machine into some sort of machine image. Several builders are
officially distributed with Packer itself, such as the AMI builder, the VMware
builder, etc. However, it is possible to write custom builders using the Packer
plugin interface, and this page documents how to do that.
Prior to reading this page, it is assumed you have read the page on
[plugin development basics](/docs/extend/developing-plugins.html).
Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extend/developing-plugins.html).
~> **Warning!** This is an advanced topic. If you're new to Packer, we
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## The Interface
The interface that must be implemented for a builder is the `packer.Builder`
interface. It is reproduced below for easy reference. The actual interface
in the source code contains some basic documentation as well explaining
what each method should do.
interface. It is reproduced below for easy reference. The actual interface in
the source code contains some basic documentation as well explaining what each
method should do.
```go
``` {.go}
type Builder interface {
    Prepare(...interface{}) error
    Run(ui Ui, hook Hook, cache Cache) (Artifact, error)
    Cancel()
}
```
### The "Prepare" Method
The `Prepare` method for each builder is called prior to any runs with
the configuration that was given in the template. This is passed in as
an array of `interface{}` types, but is generally `map[string]interface{}`. The prepare
The `Prepare` method for each builder is called prior to any runs with the
configuration that was given in the template. This is passed in as an array of
`interface{}` types, but is generally `map[string]interface{}`. The prepare
method is responsible for translating this configuration into an internal
structure, validating it, and returning any errors.
For multiple parameters, they should be merged together into the final
configuration, with later parameters overwriting any previous configuration.
The exact semantics of the merge are left to the builder author.
configuration, with later parameters overwriting any previous configuration. The
exact semantics of the merge are left to the builder author.
For decoding the `interface{}` into a meaningful structure, the
[mapstructure](https://github.com/mitchellh/mapstructure) library is recommended.
Mapstructure will take an `interface{}` and decode it into an arbitrarily
complex struct. If there are any errors, it generates very human friendly
errors that can be returned directly from the prepare method.
[mapstructure](https://github.com/mitchellh/mapstructure) library is
recommended. Mapstructure will take an `interface{}` and decode it into an
arbitrarily complex struct. If there are any errors, it generates very human
friendly errors that can be returned directly from the prepare method.
While it is not actively enforced, **no side effects** should occur from
running the `Prepare` method. Specifically, don't create files, don't launch
virtual machines, etc. Prepare's purpose is solely to configure the builder
and validate the configuration.
While it is not actively enforced, **no side effects** should occur from running
the `Prepare` method. Specifically, don't create files, don't launch virtual
machines, etc. Prepare's purpose is solely to configure the builder and validate
the configuration.
In addition to normal configuration, Packer will inject a `map[string]interface{}`
with a key of `packer.DebugConfigKey` set to boolean `true` if debug mode
is enabled for the build. If this is set to true, then the builder
should enable a debug mode which assists builder developers and advanced
users to introspect what is going on during a build. During debug
builds, parallelism is strictly disabled, so it is safe to request input
from stdin and so on.
In addition to normal configuration, Packer will inject a
`map[string]interface{}` with a key of `packer.DebugConfigKey` set to boolean
`true` if debug mode is enabled for the build. If this is set to true, then the
builder should enable a debug mode which assists builder developers and advanced
users to introspect what is going on during a build. During debug builds,
parallelism is strictly disabled, so it is safe to request input from stdin and
so on.
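To make this concrete, below is a minimal sketch of a `Prepare` implementation
that merges its raw configuration with mapstructure and records whether debug
mode was requested. The `Config` struct, its fields, and the validation are
illustrative only, and the import paths reflect the libraries mentioned on
this page at the time of writing:

``` {.go}
import (
    "fmt"

    "github.com/mitchellh/mapstructure"
    "github.com/mitchellh/multistep"
    "github.com/mitchellh/packer/packer"
)

// Config and Builder are illustrative; a real builder defines whatever
// fields its template options require.
type Config struct {
    SourcePath string `mapstructure:"source_path"`
}

type Builder struct {
    config Config
    debug  bool
    runner multistep.Runner // set in Run, used by Cancel (see below)
}

func (b *Builder) Prepare(raws ...interface{}) error {
    for _, raw := range raws {
        // Later parameters overwrite earlier ones, matching the merge
        // semantics described above.
        if err := mapstructure.Decode(raw, &b.config); err != nil {
            return err
        }

        // Packer injects packer.DebugConfigKey set to true when the
        // build is running in debug mode.
        if m, ok := raw.(map[string]interface{}); ok {
            if v, ok := m[packer.DebugConfigKey]; ok {
                b.debug, _ = v.(bool)
            }
        }
    }

    // Validate only; no side effects belong here.
    if b.config.SourcePath == "" {
        return fmt.Errorf("a source_path must be specified")
    }
    return nil
}
```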
### The "Run" Method
`Run` is where all the interesting stuff happens. Run is executed, often
in parallel for multiple builders, to actually build the machine, provision
it, and create the resulting machine image, which is returned as an
implementation of the `packer.Artifact` interface.
`Run` is where all the interesting stuff happens. Run is executed, often in
parallel for multiple builders, to actually build the machine, provision it, and
create the resulting machine image, which is returned as an implementation of
the `packer.Artifact` interface.
The `Run` method takes three parameters. These are all very useful. The
`packer.Ui` object is used to send output to the console. `packer.Hook` is
used to execute hooks, which are covered in more detail in the hook section
below. And `packer.Cache` is used to store files between multiple Packer
runs, and is covered in more detail in the cache section below.
`packer.Ui` object is used to send output to the console. `packer.Hook` is used
to execute hooks, which are covered in more detail in the hook section below.
And `packer.Cache` is used to store files between multiple Packer runs, and is
covered in more detail in the cache section below.
Because builder runs are typically a complex set of many steps, the
[multistep](https://github.com/mitchellh/multistep) library is recommended
to bring order to the complexity. Multistep is a library which allows you to
separate your logic into multiple distinct "steps" and string them together.
It fully supports cancellation mid-step and so on. Please check it out, it is
how the built-in builders are all implemented.
[multistep](https://github.com/mitchellh/multistep) library is recommended to
bring order to the complexity. Multistep is a library which allows you to
separate your logic into multiple distinct "steps" and string them together. It
fully supports cancellation mid-step and so on. Please check it out, it is how
the built-in builders are all implemented.
Finally, as a result of `Run`, an implementation of `packer.Artifact` should
be returned. More details on creating a `packer.Artifact` are covered in the
artifact section below. If something goes wrong during the build, an error
can be returned, as well. Note that it is perfectly fine to produce no artifact
and no error, although this is rare.
Finally, as a result of `Run`, an implementation of `packer.Artifact` should be
returned. More details on creating a `packer.Artifact` are covered in the
artifact section below. If something goes wrong during the build, an error can
be returned, as well. Note that it is perfectly fine to produce no artifact and
no error, although this is rare.
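A skeletal `Run` built on multistep might look like the sketch below. It
reuses the illustrative `Builder` struct from the `Prepare` sketch above; the
empty step list is where builder-specific steps would go, and the multistep
calls shown reflect that library's API at the time of writing:

``` {.go}
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
    // State that every step can read from and write to.
    state := new(multistep.BasicStateBag)
    state.Put("config", &b.config)
    state.Put("cache", cache)
    state.Put("hook", hook)
    state.Put("ui", ui)

    // Builder-specific steps go here: create the machine, wait for the
    // communicator, run provisioners, export the image, and so on.
    steps := []multistep.Step{}

    b.runner = &multistep.BasicRunner{Steps: steps}
    b.runner.Run(state)

    // By convention, steps record failures under the "error" key.
    if rawErr, ok := state.GetOk("error"); ok {
        return nil, rawErr.(error)
    }

    // Hand back an implementation of packer.Artifact (see the sketch in
    // the artifact section below).
    return &Artifact{}, nil
}
```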
### The "Cancel" Method
The `Run` method is often run in parallel. The `Cancel` method can be
called at any time and requests cancellation of any builder run in progress.
This method should block until the run actually stops.
The `Run` method is often run in parallel. The `Cancel` method can be called at
any time and requests cancellation of any builder run in progress. This method
should block until the run actually stops.
Cancels are most commonly triggered by external interrupts, such as the
user pressing `Ctrl-C`. Packer will only exit once all the builders clean up,
so it is important that you architect your builder in a way that it is quick
to respond to these cancellations and clean up after itself.
Cancels are most commonly triggered by external interrupts, such as the user
pressing `Ctrl-C`. Packer will only exit once all the builders clean up, so it
is important that you architect your builder in a way that it is quick to
respond to these cancellations and clean up after itself.
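Continuing the sketch above (and assuming a `log` import), a common pattern is
to simply forward cancellation to the step runner that `Run` stored on the
builder:

``` {.go}
func (b *Builder) Cancel() {
    if b.runner != nil {
        log.Println("Cancelling the step runner...")
        b.runner.Cancel()
    }
}
```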
## Creating an Artifact
The `Run` method is expected to return an implementation of the
`packer.Artifact` interface. Each builder must create their own
implementation. The interface is very simple and the documentation on the
interface is quite clear.
`packer.Artifact` interface. Each builder must create their own implementation.
The interface is very simple and the documentation on the interface is quite
clear.
The only part of an artifact that may be confusing is the `BuilderId`
method. This method must return an absolutely unique ID for the builder.
In general, I follow the practice of making the ID contain my GitHub username
and then the platform it is building for. For example, the builder ID of
the VMware builder is "mitchellh.vmware" or something similar.
The only part of an artifact that may be confusing is the `BuilderId` method.
This method must return an absolutely unique ID for the builder. In general, I
follow the practice of making the ID contain my GitHub username and then the
platform it is building for. For example, the builder ID of the VMware builder
is "mitchellh.vmware" or something similar.
Post-processors use the builder ID value in order to make some assumptions
about the artifact results, so it is important it never changes.
Post-processors use the builder ID value in order to make some assumptions about
the artifact results, so it is important it never changes.
Other than the builder ID, the rest should be self-explanatory by reading
the [packer.Artifact interface documentation](#).
Other than the builder ID, the rest should be self-explanatory by reading the
[packer.Artifact interface documentation](#).
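Below is an illustrative `Artifact` implementation for a builder that produces
a directory of VM files. The builder ID string is a placeholder, the method set
shown matches the `packer.Artifact` interface at the time of writing (check the
interface source for the authoritative definition), and `fmt` and `os` imports
are assumed:

``` {.go}
const BuilderId = "yourname.example"

type Artifact struct {
    dir   string
    files []string
}

func (a *Artifact) BuilderId() string { return BuilderId }

func (a *Artifact) Files() []string { return a.files }

func (a *Artifact) Id() string { return "VM" }

func (a *Artifact) String() string {
    return fmt.Sprintf("VM files in directory: %s", a.dir)
}

func (a *Artifact) State(name string) interface{} { return nil }

func (a *Artifact) Destroy() error {
    // Remove everything the build produced.
    return os.RemoveAll(a.dir)
}
```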
## Provisioning
Packer has built-in support for provisioning, but the moment when provisioning
runs must be invoked by the builder itself, since only the builder knows
when the machine is running and ready for communication.
runs must be invoked by the builder itself, since only the builder knows when
the machine is running and ready for communication.
When the machine is ready to be provisioned, run the `packer.HookProvision`
hook, making sure the communicator is not nil, since this is required for
provisioners. An example of calling the hook is shown below:
```go
``` {.go}
hook.Run(packer.HookProvision, ui, comm, nil)
```
At this point, Packer will run the provisioners and no additional work
is necessary.
At this point, Packer will run the provisioners and no additional work is
necessary.
-> **Note:** Hooks are still undergoing thought around their
general design and will likely change in a future version. They aren't
fully "baked" yet, so they aren't documented here other than to tell you
how to hook in provisioners.
-> **Note:** Hooks are still undergoing thought around their general design
and will likely change in a future version. They aren't fully "baked" yet, so
they aren't documented here other than to tell you how to hook in provisioners.
## Caching Files
It is common for some builders to deal with very large files, or files that
take a long time to generate. For example, the VMware builder has the capability
to download the operating system ISO from the internet. This is a time-consuming process,
so it would be convenient to cache the file. This sort of caching is a core
part of Packer that is exposed to builders.
It is common for some builders to deal with very large files, or files that take
a long time to generate. For example, the VMware builder has the capability to
download the operating system ISO from the internet. This is a time-consuming process, so
it would be convenient to cache the file. This sort of caching is a core part of
Packer that is exposed to builders.
The cache interface is `packer.Cache`. It behaves much like a Go
[RWMutex](http://golang.org/pkg/sync/#RWMutex). The builder requests a "lock"
on certain cache keys, and is given exclusive access to that key for the
duration of the lock. This locking mechanism allows multiple builders to
share cache data even though they're running in parallel.
[RWMutex](http://golang.org/pkg/sync/#RWMutex). The builder requests a "lock" on
certain cache keys, and is given exclusive access to that key for the duration
of the lock. This locking mechanism allows multiple builders to share cache data
even though they're running in parallel.
For example, both the VMware and VirtualBox builders support downloading an
operating system ISO from the internet. Most of the time, this ISO is identical.
The locking mechanisms of the cache allow one of the builders to download it
only once, but allow both builders to share the downloaded file.
The [documentation for packer.Cache](#) describes in detail how it works.
The [documentation for packer.Cache](#) describes in detail how it works.
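As a rough sketch of how a builder might use the cache: the helper name, key
scheme, and the assumption that `Lock` returns the path reserved for the key
are ours, so double-check them against the `packer.Cache` source (`os` and
`path/filepath` imports are assumed):

``` {.go}
func downloadISO(cache packer.Cache, url string) (string, error) {
    key := "isos/" + filepath.Base(url)

    // Request exclusive access to this key; parallel builders asking for
    // the same key block here until we unlock it.
    path := cache.Lock(key)
    defer cache.Unlock(key)

    if _, err := os.Stat(path); err == nil {
        // A previous run (or another builder) already cached the file.
        return path, nil
    }

    // Otherwise, download the ISO to `path` before returning (omitted).
    return path, nil
}
```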
